

A Proof of Plagiarism.

On this page, we concentrate on proving the plagiarism of an unbelievable Ph.D. dissertation.

The Statement: The claimed contributions of the UnPhD had already been published by others. In the very rare instances where some trivial variation is attempted, the suggested content is missing and/or the consequences are not handled. Such false claims only restate research ideas/questions that had been published by others. No original questions, no original answers. Only a merger of a very few papers, which reduces to a subset of single papers once the false claims in the UnPhD-merger are omitted.

The Proof Procedure: The UnPhD text is not essential for the readers of the discussion. That is a result of its being unoriginal. I correlate the claims in the abstract of the UnPhD with the prior art. This provides the relevant background information. And then, the errors-and-omissions are screaming in the UnPhD, at the seams. We will see that, when the missing or unworkable/gotchaful parts are neglected, the part left in the UnPhD is already a subset of each single one of the two (or three) original papers.

The Prior Art
The first part of the proof on this page is constructive: we will do a (database-like) query of four original papers, published a few years before the UnPhD. I select, for this presentation, especially those parts of the papers that the UnPhD claims to be contributing itself. For each claim in the abstract of the UnPhD, we will compare the content that had been published previously with the content the UnPhD includes.

A well-known tutorial paper on Petri nets, P-77, and two others extending Petri nets, N&N-73 and V&D-78, each of which may provide a basis for the claims of plagiarism and/or non-contribution, are included in the references of the UnPhD, as part of the literature overview. But then again, they are not cited at any relevant point. As a result, the UnPhD looks as if it is introducing the insight at that point. The fourth paper, D-80, is not cited at all, but is also relevant.

The Non-Operation and/or Erroneous Entries
Then in the second part, against each claim, we conclude that any contribution is a myth when the prior art is taken into account. This includes both the imported chunks, the bits and pieces, and the resultant islands-of-merger in the UnPhD.

Benefit-of-Doubt Leads to a Full Cycle
At those points where the UnPhD cannot provide a workable result, and/or does not pronounce anything, we may be able to guess ourselves and fill the holes. But that only leads us back to the first part of the proof. In other words, one or more of the original papers had something [almost] exactly like that, but with workable results.

The result.
There is a full page for discussing the excessive amount of errors in the UnPhD. Here, we list only some; especially, those errors that point to being stuck with taking from others. When the range and the amount of errors are considered (in addition to the plagiarism from existing research), the UnPhD obviously does not fulfill even the requirements for an undergraduate project, given that the result does not work.

Subset Identification

First, briefly, I will explain what I mean when I say that the UnPhD is essentially nothing when the prior art is taken into account. And the even stronger statement that it is a subset of even single papers, especially N&N-73 and V&D-78, but actually also of P-77.

This entry may further help those who already know one or more of the reference papers, or know a bit about Petri nets, to more easily join the discussion, from the base they already know.






Part I: The Prior Art (in the 1970s)

There are basically two strategies. We employ both.

Compare and contrast the UnPhD with the individual papers. This helps to spot some similarities, sometimes to the point that an error or redundancy that did not make sense in the UnPhD starts to make sense when the relevant context/assumptions in the source paper are observed. We may also infer how much work was done, if at all, beyond the already published, and well-known, literature.

Compare and contrast the source papers among themselves, and see how the UnPhD differs at the contrasting points. When not original, it may either take from one of them, or attempt a mix. When the source papers have different contexts/assumptions, the chunks of ideas imported from different papers can clash with each other.

Merge Clashes

A merge clash happens where the published literature is not taken one-to-one. Taking one-to-one, when referenced, is understandable, but of course, that cannot be the major part of a Ph.D. work. The novel aspects should be underlined, and only those should be the claimed contributions.

When the foremost presented features are not original, and also not referenced, and when the seams of the merger are a source of faults, the plagiarism must be announced.

1.1 The UnPhD as a Subset of Peterson-1977

The Peterson (1977) paper, P-77, is, and was, a well-known tutorial on Petri nets, with some expressed insights. The UnPhD is only a few inferior choices made from among the previously published literature. It attempts also to include features from the papers in the next sections. But it fails. When we, of necessity, discard such haphazard and/or missing features, the Peterson tutorial also turns out to be a superset of the UnPhD. This section points to some of the reasons for concluding such merge-attempts a failure. The later sections will point to the (unreflecting) plagiarism as a source of such failure.

And let me repeat a point: The UnPhD does not cite any of these three papers, except in the literature overview parts. When reading the UnPhD, at any point of discussion or feature presentation, you would not be able to tell that these papers had published the ideas/features before.

Modeling Software with Petri Nets, Hierarchically

In P-77, there is a subsection with the name "Modeling of Software" (pp.233-234). On p.234, figure 15 shows a program fragment translated into a Petri net representation. The critical section in the software is enclosed between the semaphore request P(mutex) and the semaphore release V(mutex) operations. Peterson's approach is to represent a program fragment as a Petri net place enclosed between two transitions. This tells us that the code may take some unknown amount of time, which is the time the place holds a token. When the transition after the place fires, that means the execution-completion event has occurred. This is one side of it. And the other side is symmetrical. As a result, the figure represents two processes, each of which gets into its own critical section when the other process is not in its critical section.
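
Peterson's figure-15 construction can be paraphrased as a simple token game. The sketch below is my own minimal encoding (place and transition names such as "p1_cs" and "mutex" are illustrative, not taken from P-77): a transition is enabled when all its input places hold a token, and firing moves the tokens. The assertion checks that the shared mutex place keeps the two critical-section places from ever being marked at the same time.

```python
import random

# Places map to token counts; a transition is (input places, output places).
marking = {"p1_idle": 1, "p1_cs": 0, "p2_idle": 1, "p2_cs": 0, "mutex": 1}

transitions = {
    "p1_enter": (["p1_idle", "mutex"], ["p1_cs"]),   # P(mutex)
    "p1_exit":  (["p1_cs"], ["p1_idle", "mutex"]),   # V(mutex)
    "p2_enter": (["p2_idle", "mutex"], ["p2_cs"]),
    "p2_exit":  (["p2_cs"], ["p2_idle", "mutex"]),
}

def enabled(m):
    """Transitions whose every input place holds at least one token."""
    return [t for t, (ins, _) in transitions.items()
            if all(m[p] >= 1 for p in ins)]

def fire(m, t):
    """Fire transition t: consume input tokens, produce output tokens."""
    ins, outs = transitions[t]
    m = dict(m)
    for p in ins:
        m[p] -= 1
    for p in outs:
        m[p] += 1
    return m

random.seed(0)
m = marking
for _ in range(1000):
    m = fire(m, random.choice(enabled(m)))  # firing is a choice, never forced
    # the mutex place keeps the two critical-section places exclusive
    assert not (m["p1_cs"] and m["p2_cs"])
print("mutual exclusion held over 1000 random firings")
```

Note that the simulator picks an enabled transition at random: nothing in plain Petri net semantics forces an enabled transition to fire immediately, a point that matters later in the discussion.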

On pp.230-231 of P-77, there is the idea of modeling a system hierarchically. It says, on p.231: "An entire net may be replaced by a single place or transition for modeling at a more abstract level (abstraction) or places and transitions may be replaced by subnets to provide more detailed modeling (refinement)." It also has figure 11, on p.231, which shows this capability. A very simple tool for modeling, indeed.

The UnPhD uses rectangles, in which program fragments are written. The rectangles are macros taking the place of transitions. The text in a rectangle may also be only a name, later to be expanded. This may sound familiar, e.g., compare flowcharts. In addition to the sentence I quoted in the previous paragraph, the Peterson paper also points to the flowchart-resemblance of Petri nets.

Of course, a macro cannot be "taking no time," as a transition was. The timing-related statement of the UnPhD, for the input case, is erroneous: It declares that a macro will start execution as soon as it is enabled. However, the fact is that the UnPhD cannot guarantee that behavior while at the same time claiming such rectangles are built up of the regular Petri net transitions. Those have no such "as-soon-as-enabled" firing requirement.

In the UnPhD, the timings for the exit from a macro are altogether bizarre, both for verification and for modeling purposes. Peterson suggests macros only for modeling, and the examples in the Peterson paper follow such a macro with a single place (or a single transition). That way, you may go on without caring about the issue of time at all.

In the UnPhD case, it explicitly states that the output places of a macro may receive tokens gradually, over the execution time of the macro. In other words, they need not be placed all at once. This breaks the very simplistic verification algorithm the UnPhD suggests. When partial results can overlap with the partial results of other components, and/or when they get into loops and may reactivate even the component itself, we cannot apply the basic Petri net verification algorithms by neglecting the partial-result occurrences. Only saying "repeat the Petri net verification for the individual subcomponents" certainly does not suffice.

Furthermore, when such multiple levels are to be automatically verified, you would need some yet-another-specification-language, which I call vertical specification. It is more usually called an interface definition, or an external specification for the components, including the bits that tell us at what token combinations a component will be started for a test.

The name I chose, vertical specification, is to point to the "complexity-reduction" hoax in the UnPhD. By analogy, you could run the traveling-salesman problem in linear time and get the optimal result - "only" if you knew, beforehand, what the final, optimal partitioning is. By chopping a large (horizontal) Petri net graph into many levels, the complexity may surely decrease, but it may also increase. This is, in general, the issue of interface complexity when a macro has external links. But the term "vertical specification" sounds very fitting when leafing through the pages, as a human reader, checking for the links. How would a machine do it? Where are the examples, and the rules for such specifications, in the UnPhD? Peterson suggests abstraction/refinement only for modeling. N&N-73 suggests expansion of the graph before the final verification. And the V&D-78 paper standardizes a macro to be preceded by only one input place, and also followed by a single place. The UnPhD only says (p.95) that the issue of partitioning and assignment is "complex and still not much understood."

It is not telling anything beyond Peterson, indeed. Peterson already tells of the hierarchy idea and also has sections on verification. You prepare your graph and may verify it. At some step, you may expand it a bit, and re-verify. Of course, expanding a place would be a much better idea than expanding a transition, because that does not change the Petri net behavior for verification. (But let me verify this before anybody gives me a Ph.D. for it :-)) Shrinking and expanding places, to keep the graph size manageable, is already a sensible modeling tool, and also better fits the Peterson-modeling of a code fragment as a place.

This may actually be what a lot of the readers of Peterson naturally have done after reading both of the sections. The UnPhD only makes the transition-scalability into a claim, by implicitly, yet falsely, assuming/telling that an arbitrary level of scalability (as in the V&D-78 case) could be readily applied to any Petri net graph. It does not warn the readers of any of the fault-proneness that comes with it, and its algorithm cares for no specific situation. A Ph.D. work could probably make it machine-processable, i.e., NOT relying on the humans to prepare everything. But obviously, that is not the UnPhD.

(There are already approaches for the traveling-salesman problem, with a bit of success.)

The UnPhD has no such discussions, and no examples of such a (machine-processable) specification. Even the textual commentary is very much lacking; you have to go back and forth among the pages, and then you may even find the faulty figures in the UnPhD itself. It simply ignores all such issues. How could we take it as a contribution beyond P-77, when the UnPhD takes only one of the macro-defining possibilities in P-77, and does not take care of any of the consequences - especially for verification? For modeling, the idea of partial output is very distracting. We would most usually expect a "subcomponent" to have a better-defined interface. (See also the V&D-78 and N&N-73 approaches, later.)

Then, there is a set of macros the UnPhD calls "input/output transfer specification." Some of these correspond to some very primitive Petri net idioms (e.g., "and-input" is a transition with two input places). Others are altogether faulty, as we will discuss on this page a bit, and as also listed, in full, on another page that discusses the UnPhD's figures and examples: faulty, vague, and/or trivial.

The Data: For Your Eyes Only, If That.

Remember that the UnPhD uses rectangles with program fragments in them. A rectangle can also be a macro with only a name, but no code in it, later to be expanded - all the same. All of this is textual commentary put into a rectangle. The next imported idea (although not from the Peterson paper) finds a place in the UnPhD figures as the names of data-items or data-types put into boxes, with some arcs/arrows pointing between the code-fragment boxes and the data-name boxes.

It is only a duplication of the textual information. E.g., if a rectangle reads "i := i + 1" and another rectangle reads "i" and a double-sided arrow points between them, you have only redundant information. Even for a name-only macro, one could very easily write the data-item names that it may use later, just below the name of the macro.

And this certainly does not show "all the dependencies" of a data-item, either. Although the UnPhD claims to have a "single graph for both the control and data," when you look at the nth level in a hierarchy, all the data in the upper levels has turned into "external data item" pointers. Then, you certainly cannot see where other macros, or macros-within-macros, are referring to that data-item.

And then, data is completely ignored at the Petri net verification phase.

We will later discuss the N&N-73 and V&D-78 approaches to data. They also precede the UnPhD in time, yet supersede it in content.

The Wrapper/Theme

What is left? Nothing. P-77, on p.234, lists distributed systems among those that have been mentioned as possible subjects for modeling with Petri nets. The UnPhD carries the words "distributed software systems" to its title but does essentially nothing about it. Not a single issue/difficulty of such systems is addressed and/or dealt with, other than pointing at what Petri nets were already doing, i.e., independent token-flow in different parts. None of the redundancy, fault-tolerance, voting-algorithms, security issues, etc. The UnPhD only mentions partitioning-and-assignment-to-nodes but leaves it by saying the issue is "still complex and not much understood."

All in all, the whole relevance is that a few distributed-systems terms are used as a wallpaper, some wrapper/theme. It is not really an application of Petri nets to study the particular needs of a field. The example at the end of the UnPhD is only a representation of a mutual-exclusion algorithm from CACM, and it has a lot of faults in it, too, including a deadlock at the uppermost level, and so on.





1.2 The UnPhD as a Subset of Noe&Nutt-1973

The Noe and Nutt (1973) paper on Macro E-nets is the second paper we discuss. Macro E-nets are a conservative utilization of E-nets (evaluation nets), which are themselves an extension of Petri nets. The macro capability in N&N-73 is mainly intended for bottom-up design. The authors suggest identifying often-used components, and making them into macros.

The N&N-73 approach is to verify the graph by first expanding all the macros, and then feeding the expanded full graph to any regular E-net verifier. There, we understand that the resulting graph size was not their concern, unlike for V&D-78, which we discuss in the next section. They are focusing on the modeling issue, and the intuitive appeal of the resulting graphs (p.721).

If the graph size were a concern, then a regular E-net user, with or without macros, might just employ the strategy discussed in the Peterson case, about Petri nets. I.e., the designer may, on the fly, expand and shrink some parts of the graph, to a single place. The [macro-]place may or may not be followed by a T-transition, the inclusion of which would provide further precision, because with E-nets, the transition delay and action times are explicitly timed. As a result, once a graph chunk is well-tested/trusted, the execution time might simply be loaded into a T-transition (or into any of the other, more sophisticated, primitives or macros). A primitive, as a T-transition is, need not be expanded any further. Indeed, this is basically what happens at the very first abstraction phase. The home-to-office travel may be represented with only a T-transition, if we are not interested in representing the details of what may happen on the road.

As stated, N&N-73 does not deal with multi-step verifying of graphs, except as bottom-up design and/or as identifying macro-candidates at some stage of design, and then re-using them. It is the V&D-78 paper, which we will discuss in the next section, that focuses on multi-step verification to deal with large graphs, as well as employing macros for top-down design. The UnPhD, in its pages, includes discussion of both ideas, without citing these sources at those points of discussion. But then, when comparing-and-contrasting with the V&D-78 paper, we identify it as a plagiarism-without-understanding, and the result is fault-prone. When we omit the false claim of "algorithmically" multi-step verifiability from the UnPhD, the N&N-73 paper is surely a superset of the UnPhD.

The UnPhD's so-called "input/output transfer specification" macros resemble the E-net primitives very much. Indeed, if some decision-primitive does not exist in E-nets, it is faulty and/or fault-prone in the UnPhD. Period. The representation is also a look-alike, and this is demonstrated by a figure in the UnPhD (and UnPaper), the three parts (b, c, d) of which correspond to the three parts (again, b, c, d) of the E-net primitives figure in N&N-73.

I list the operators here. They are shown on the UnPhD graphs in the style of SARA (UCLA graphs), with symbols between input/output arrows to/from a transition.

  • and-input

    Corresponds to J(oin)-transition of E-nets, and to and-input-logic of SARA (UCLA graphs), shown with an asterisk between the incoming arcs.

  • ++-priority-input

    Faulty implementation. It would have corresponded to the X-transition of E-nets. The implementation provides only probabilistic (50%, or so) "priority precedence" for the "higher priority" input.

  • OR-input

    Faulty implementation. The result is a strict turn-taking enforcement. Could have been the same as the inclusive-OR-input-logic of SARA.

  • ++-input

    Redundant/faulty implementation. As it is, it corresponds to a "[FIFO] queue-place" (with no priority), p.722 of N&N-73. It has a redundant implementation, and allows an overloaded (unsafe) place, which is, implicitly and trivially, a queue. The redundancy appears to have resulted from thoughtless copying (as if it were a "generalized" case) from a figure of V&D-78. See the discussion in the next section.

  • xor-input

    Should better be termed a "deadlock transition," because when both inputs are enabled, it is permanently deadlocked. (Unless some other process removes tokens, and leaves tokens at only one of them. This is also the usual deadlock-breaking case.) It uses a pair of mutual inhibitor arcs for this result. The UnPhD contains no justification for this choice of featured macro. It simply states the behavior. When a special need, if ever, occurs, the user could easily cross two inhibitor arcs to achieve the deadlock result. Only a haphazard choice of a basic macro.



  • sequential-enabling output

    Faulty implementation. Unlike what an example figure suggests, it does not provide a sequential order of execution. A full reversal of the execution order is possible. (This may be because of the false assumption stated in the UnPhD that a macro would "start execution as soon as enabled." But that cannot be true, as long as those macros are built up from the regular Petri net transitions, which may wait as long as they may, and also may altogether choose not to fire.)

  • and-output

    Corresponds to the F(ork)-transition of E-nets. It is, of course, the regular Petri net transition that has two output places.

  • xor-output

    Corresponds to the X-transition of E-nets (but nondeterministic, data being discarded), and to the OR-output-logic of SARA (UCLA graphs).

  • or-output

    Is different from the X-transition in that it may also result in both of the output places being enabled. (This is also nondeterministic, data being discarded.)
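
The "deadlock transition" behavior of the xor-input can be made concrete with a minimal enabling rule for transitions with inhibitor arcs. This is my own toy encoding (the names pA/pB are illustrative, not the UnPhD's): each branch's transition is inhibited by the other branch's input place, so tokens on both sides disable everything at once.

```python
def enabled(marking, inputs, inhibitors):
    """A transition fires only if every input place holds a token
    and every inhibitor-arc place is empty."""
    return (all(marking[p] >= 1 for p in inputs) and
            all(marking[p] == 0 for p in inhibitors))

# branch A: input pA, inhibited by pB; branch B: input pB, inhibited by pA
for mA, mB in [(1, 0), (0, 1), (1, 1)]:
    m = {"pA": mA, "pB": mB}
    a = enabled(m, ["pA"], ["pB"])
    b = enabled(m, ["pB"], ["pA"])
    print(mA, mB, "->", "A" if a else "B" if b else "deadlock")
# 1 0 -> A
# 0 1 -> B
# 1 1 -> deadlock
```

The third line shows the point made above: with tokens on both inputs, neither branch can ever fire again unless some other transition drains one of the places.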

And that does suggest the origin of placing those little data-boxes everywhere around the graph, just where, in the E-nets case, the resolution location stays, pointing to the data-dependency. Indeed, there is nothing not already in N&N-73, except for the "xor-input" and "or-output" - which are not only trivial, but also unlikely to be used anywhere except in toy problems (very abstract algorithm demonstrations).

E.g., or-output nondeterministically enables many nodes at once. This may rarely happen, for example, in a server, without other management being made for the process start-ups. Repeated use of the same macro would make much more sense, instead of representing n copies of the same macro in parallel. The very trivial standardized interface-macro of "(inclusive) or-output" leads to difficulty and duplication elsewhere. And, as usual, it comes with no justification of why it was such an important primitive to include, and give that name.

In the N&N-73 case, the resolution-predicate makes the decisions, to break nondeterminism, by employing data, and reflects the decision in the resolution place. In the UnPhD case, the data is discarded at the Petri net verification phase (which leaves "xor-output" and "or-output" nondeterministic). In case data were not discarded, given that the UnPhD does not discuss any further integration of data, we can only expect that everything relevant to data-manipulation would be hidden behind a predicate attached to a macro-transition. All the same as the N&N-73 approach.

The names may also sound like Pascal, but neither the resulting program structure, nor the definition of words like "xor-input," sounds like anything you may have encountered before. In a Pascal if-statement, when the "and" condition is false, the program skips to another point of execution. Here, the "and-input" simply waits until the "and" is true. The "xor-input," once both inputs are true, gets into a gridlock/deadlock altogether. In other words, to a newbie, it may at first sound familiar, especially when the program fragments are also seen in the rectangles there, but the actual workings of things is different, and you could as easily, maybe more easily, map the structure of your Pascal code to Petri nets, or E-nets. No need to correct some faulty macros yourself to meet that sort of semantics.

Restricting, and Missing

The named but omitted inclusion of Guttag's version of abstract data types, then, is only an absurd feature. It is only overspecifying, in this context. Whatever the data-representation may be, they are only somewhere behind some predicates, or "data-transfer specifications," which are heard of, but not seen, in the UnPhD. The UnPhD does not include any discussion of how the very separate formalisms (Petri nets, and algebraic data specification) would be inter-related - "except" naive displays of rectangle-partitioning, which even have their share of faults and/or omissions, as we will discuss.

To summarize, the macro idea was with N&N-73 already. The i/o transfer macros also, both in style and (for most of the workable ones) in behavior, are like E-nets. The data is already employed by N&N-73, and even discarded by the UnPhD. Furthermore, given that the UnPhD's naive algorithm for verification is already broken in itself, losing even Petri net verifiability, the UnPhD is left only as a full subset of N&N-73.

A few examples may have very trivial but working figures (e.g., the "or-output"). Yet, there are so many unworking, faulty and/or fault-prone ones. This is especially noteworthy, because the UnPhD claims to be, in its title, on "modeling and verification."

As a final note, unrelated to the E-nets case, but for you to visualize the UnPhD figures better: The placement of those "input/output transfer specification" macros is exactly like in the UCLA graphs. I.e., putting a '+' is "or," a '*' is "and." The UnPhD also has the others, but not workable. And the UnPhD's xor-output behavior corresponds to the UCLA graphs' "or-output" logic of choosing one of the alternatives. You can keep that in mind by concatenating the X-transition of E-nets and the or-output of UCLA graphs - in case you cared. The UCLA sense is already the daily-language sense of exclusive or (as in either ... or ...). The UnPhD is apparently differentiating it as exclusive versus inclusive or, but then, I doubt whether the definition of "or-output" has much practical usage to warrant a separate definition, especially as it is quite easy to build with Petri nets, too.






1.3 The UnPhD as a Subset of Valette&Diaz-1978

The Valette and Diaz (1978) paper is titled "Top-down formal specification and verification of parallel control systems." It is, probably, a lesser-known paper, but closer to what the UnPhD is trying to do. Indeed, the UnPhD does not deal with any verification or modeling issue that V&D-78 is not dealing with. And given that the UnPhD also fails in the delivery of the suggested features, it is a clear subset of V&D-78.

V&D-78 does not carry the term "distributed systems" to its title, but it is listed, on the first page, as a motivation for the study with Petri nets. In particular, the representation of parallelism by Petri nets is found relevant. The UnPhD uses the theme of distributed systems in its title, in its abstract, and as a wallpaper in a lot of rather vague figures, probably only resolvable by reading the caption, but it does not deal any further with the issues that make distributed systems difficult to manage. In other words, the UnPhD is neither improving the representation, nor the application, any further than V&D-78 already had done, w.r.t. "distributed systems." The page-filling, and still missing and/or faulty, content of that wallpaper theme will also be discussed on this page.

Multi-Step Verifiability

The V&D-78 states its motivations for doing multi-step verification as both being able to verify larger graphs, and also to facilitate top-down design.

For this, V&D-78 suggests a macro form of a transition with a single input, and a single output place. Potentially, any and every transition that fits this structure may be expanded as a macro.

At this point, we can make observations and discover the source of some of the meaningless errors in the UnPhD: the errors-introduced-because-of-thoughtless-copying.

For example, as stated in the previous section, one of the very trivial and very basic macros in the UnPhD, the "++-input transfer spec," came with a redundant place. There was no sense to it. Observing a figure in the V&D-78 paper suggests the source of the error-introduced-in-transit: V&D-78, observing that any transition surrounded by single places may be a macro, uses a mutual-exclusion place to avoid the overlap of the transition activation times in that case. The UnPhD, without reflecting, has directly copied it. In the UnPhD case, within that macro, with ordinary Petri nets being used for that example, there can be no time-overlap whatsoever (with the instantaneous firing behavior the UnPhD also acknowledges in its Petri nets definition). And the redundancy/error is repeated in the UnPaper, too. (It was published late in 1983, a year after the UnPhD had its title granted in 1982.)

Another example is the verifiability claim itself. V&D-78, when claiming that this would work, is also setting the structural requirement stated in the previous paragraph. The UnPhD tells of no restrictions. It just takes the Peterson-style, unrestricted scalability, and hopes for the V&D-78-style infinite scalability. But that does break the very naive multi-step verification procedure, because we now may have a variety of different output configurations that the basic Petri net verification procedure is not tuned for verifying. The resulting state space is different, and larger, than if the macro/transition were firing instantaneously and/or depositing all the output tokens at once. The UnPhD, indeed, explicitly states that such gradual token placement at the output may occur, and that time overlap among component/macro run-times may also occur. But there is neither a modification in the algorithm to deal with it, nor any suggestion of a proof that the algorithm would work (which cannot be, anyway). The algorithm, by the way, is that once the highest level is verified, there is a recursion to verify each macro within itself. This would work in the V&D-78 case. It does not work in the UnPhD case.
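
The state-space point can be illustrated with the basic reachability construction that any such multi-step verification ultimately rests on. In this sketch (my own toy encoding, not the algorithm of either paper), an atomically firing macro contributes a single step to the reachability graph, while a macro that deposits its output gradually exposes an intermediate marking, enlarging the state space the upper-level verification would have to cover.

```python
from collections import deque

def reachable(m0, transitions):
    """Exhaustively enumerate all markings reachable from marking m0.
    A marking is a tuple of token counts; a transition is (inputs, outputs)
    given as lists of place indices."""
    seen, frontier = {m0}, deque([m0])
    while frontier:
        m = frontier.popleft()
        for ins, outs in transitions:
            if all(m[i] >= 1 for i in ins):
                nxt = list(m)
                for i in ins:
                    nxt[i] -= 1
                for o in outs:
                    nxt[o] += 1
                nxt = tuple(nxt)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

# a macro from place 0 to place 2, firing atomically (the V&D-78-style restriction)
atomic = [([0], [2])]
# the same macro depositing its output via a visible intermediate marking (place 1)
gradual = [([0], [1]), ([1], [2])]

print(len(reachable((1, 0, 0), atomic)))   # 2 markings
print(len(reachable((1, 0, 0), gradual)))  # 3 markings
```

Even in this three-place toy, the gradual variant adds a marking that the upper level never sees when the macro is treated as atomic; with many concurrent macros, those intermediate markings interleave and the gap grows combinatorially.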

Two of the problems: the stated expanded state space, and the unspecified start-token requirements. The latter is also a non-issue with the V&D-78 choice of a single input place. But the algorithm of the UnPhD should at least have some lines to tell where to place tokens, and where not. It does not deal even with that. No discussion; no reflection. By comparison, a Petri net may usually be verified from a few start states with hand-picked start-token configurations. As a designer, this may be chosen.

The second may be easier to solve if the state of the upper levels is cared for, but then, that could still amount to a full-graph tracing. I.e., the higher level calls macro-x, then acts with that new configuration, and calls macro-y. But this would be equivalent to a complete verification of the (fully-expanded) graph.



The changes:

The UnPhD has merged the operators and transitions of V&D, which were already one-to-one. V&D had kept them separate: data and Petri nets come together only when relevant. Likewise for Noe&Nutt. V&D has two separate graphs: One is in the sense of token-flow. The other graph is event-driven (e.g., interrupt-driven). The graphs are meaningful by themselves, and the events (Petri net transitions) in the control-graph correspond to some operator(s) in the data-graph. In the example figure, the relationships are all one-to-one, except in trivial cases where no operator is declared for an event. The UnPhD has taken this as a base, and pulled the data-graph into the control-graph by merging at those already-one-to-one event-operator links. This only makes the combined graph more crowded, and lumps together two different ideas: event-sequencing, and event-data-dependency. If five different office workers each deposit a token into a coffee-dispenser and insert their ID-cards into a machine, the correspondence may be one-to-one, but the token-counting data-item would not care at all who deposited each token, in what order. On the other hand, the token-collection may be an important operation, and the coffee-machine's owners may prefer to know who opened it when, and how many tokens there were.

In the most usual case, with real examples, when the operators and events are not one-to-one, there will be an explosion in the combined graph size, with no further meaning represented.

Keeping the event-sequencing separate from the individual transaction executions is one of the oldest abstractions of computer programming. It is usually called functions, procedures, subprograms, or the like. The UnPhD discards that. Although it echoes the N&N text w.r.t. bottom-up design, and the V&D w.r.t. top-down design, the merger makes both of them senseless.

And the net result is that even the UnPhD itself does not follow its own ideal of "a single graph showing all the relationships at once." It falls back to what the Peterson tutorial already did, and to even less. The only difference is that the UnPhD has replaced the transitions with rectangles that have program fragments written in them, if not a name-only subcomponent/macro.

When you expand the contents of a name-only subcomponent/macro, just below the uppermost layer, you see the external data items only as pointers to data-item names, with no indication of what happens to the data elsewhere. The focus is lost. How is that a single graph? Only at the most abstract level? Where is the contribution of the Ph.D.?

And it is even below the pre-existing standard of Petri nets. Peterson, in his software-modeling example, shows the semaphore operations implemented with Petri nets. The example at the end of the UnPhD only points to a data-object "sv," which the commentary in the text says is a "semaphore variable." That degenerates to nothing, especially given that the UnPhD totally discards the data-relationships at the Petri net verification phase (unlike N&N and V&D). As such, we have here a Petri-net-based Ph.D. example that totally excludes a semaphore variable from the verification considerations of an application.

The UnPhD only superimposes two distinct ideas: possible-execution-sequences, and data-dependency relationships. Is this progress? The UnPhD does not explain why it would be. And it is clear that even the UnPhD itself is not able to apply it. The picture gets especially absurd when we notice that the approach, in all these papers, is a multi-level one. When you concentrate on an internal level within a subcomponent/macro, the global/external data shared with other (sub)components does not show the big picture of the data-dependency anyway. In other words, it is an arbitrary change, with no stated justification except being "a single graph," and even that is not true. What single graph? The graph is already multi-level, and when we concentrate on any particular level within a component, any external data has already turned into pointers to data-item names. In such a case, the N&N and V&D ways of knowing-what-it-does are more informative than seeing a variety of data-item names being pointed at. A very important source of confusion in the UnPhD approach is that the data, as can easily be imagined, would rarely, if ever, get modified at the highest level. In each case, some internal component/macro will do that. Then the picture is such that, at the actual points of modification, we only see the data-item names being pointed at, but we do not even know which other components are accessing the data, or in what ways. That information is totally hidden. If the term data-dependency does not even make a race condition, or the like, clear, what is it good for? In contrast, the unmerged control-graph versus data-graph, as in the V&D, has the data-graph show the operators and data.






N.B.: This page is being edited. (Under construction; re-arrangement.)...

... The sections below this line may not read smoothly, yet.

-------------------------------------------------------------






Part 2: Refuting the UnPhD's Claims of Novelty, Item By Item

Abstract or Surreal?

Here is the list of claims from the abstract of the UnPhD:

  1. A method and a model for design and analysis of Distributed Software Systems
  2. Based on modified Petri nets; represent both structure and behavior of a DSWS
  3. A flowchart-like, single graph, representing both events and data
  4. Emphasis 1: partial order, hierarchical structure
  5. Emphasis 2: data types, data objects
  6. Emphasis 3: local control, and distributed system state
  7. A Macro Is Known to Others Through Its External-Specification
  8. Linking of macros through Petri net places, and through data objects
  9. Inside Macros: partially-ordered submacros, and local data
  10. May Expand/Contract Macros At Any Desired Amount
  11. Hierarchical: A Macro, After Its External-Specification
  12. Insensitive to granularities of concurrency and distribution
  13. Analyzable after transforming into a Petri net
  14. The application to the "design" of a small distributed software system

Those are the separate claims. We will discuss each one in its own section. The sequencing of the presentation, however, will not be in the listed order. I start with those topics that can be discussed without the others being discussed first, and so on. In other words, the presentation is a topological sort of the sections, suggested by the partial order of dependence among the sections.

This is the sequence of the discussion:

  1. Based on modified Petri nets; represent both structure and behavior of a DSWS
  2. Emphasis 1: partial order, hierarchical structure
  3. Emphasis 2: data types, data objects
  4. Emphasis 3: local control, and distributed system state
  5. Insensitive to granularities of concurrency and distribution
  6. May Expand/Contract Macros At Any Desired Amount
  7. A Macro Is Known to Others Through Its External-Specification
  8. Hierarchical: A Macro, After Its External-Specification
  9. Inside Macros: partially-ordered submacros, and local data
  10. Linking of macros through Petri net places, and through data objects
  11. A flowchart-like, single graph, representing both events and data
  12. Analyzable after transforming into a Petri net
  13. A method and a model for design and analysis of Distributed Software Systems
  14. The application to the "design" of a small distributed software system
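The reordering above is, as noted, a topological sort over section dependencies. It can be computed mechanically with Kahn's algorithm; the dependency edges in the sketch below are illustrative placeholders, not the actual dependencies among the sections.

```python
from collections import deque

# Kahn's algorithm: repeatedly emit a node none of whose prerequisites
# remain, i.e., a topic that can be told without untold topics before it.
def topo_sort(nodes, deps):  # deps: node -> set of prerequisite nodes
    indeg = {n: len(deps.get(n, ())) for n in nodes}
    ready = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in nodes:
            if n in deps.get(m, ()):
                indeg[m] -= 1
                if indeg[m] == 0:
                    ready.append(m)
    return order

# Illustrative only: C depends on B, B depends on A.
print(topo_sort(["C", "A", "B"], {"B": {"A"}, "C": {"B"}}))  # ['A', 'B', 'C']
```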

2.1 Based on modified Petri nets; to represent both the structure and behavior of a DSWS

We understand the word structure as the graph that shows all the possible sequences and branchings of events and conditions in event-modeling. The term behavior is understood as the run-time activity, or the token-flow, in these Petri-net-based event-modeling systems.
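As a minimal illustration of the distinction (my own sketch, not any of the papers' formalisms): the structure is the fixed set of places, transitions, and arcs, while the behavior is the token flow produced by firing enabled transitions.

```python
# Minimal Petri net sketch. The *structure* is the fixed transitions/arcs;
# the *behavior* is the token flow, i.e., the changing marking.
class PetriNet:
    def __init__(self, places, transitions):
        self.marking = dict(places)            # place -> token count (state)
        self.transitions = dict(transitions)   # name -> (inputs, outputs)

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking[p] > 0 for p in ins)

    def fire(self, t):
        ins, outs = self.transitions[t]
        assert self.enabled(t), f"{t} not enabled"
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] += 1

# Structure: p1 --t--> p2.  Behavior: the token moves when t fires.
net = PetriNet({"p1": 1, "p2": 0}, {"t": (["p1"], ["p2"])})
net.fire("t")
print(net.marking)  # {'p1': 0, 'p2': 1}
```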





2.2 Emphasis 1: partial order, hierarchical structure





2.3 Emphasis 2: data types, data objects

Both V&D and N&N employ predicates to deal with data-dependencies - an implicit form of abstract data-typing (the combination of data-items with the predicates acting on them, with a standardized interface for the result). The Unbelievable-PhD-text suggests, instead, using the explicit, algebraic approach of the tutorial paper by MIT's Guttag (1980). O.K. Do it and let's see. But the problem is that the content is missing. The Unbelievable-PhD-text only gives a summary of the tutorial paper, in a page or two, neither discussing the specific improvements that particular choice would bring, nor how to apply it in this context along with Petri nets, etc. Only some rectangles with type names in them, in a few sample figures.

In other words, there are no worked-out examples that apply the suggested (algebraic) approach in this context, let alone a full method that would let others do it, with detailed care for the steps and the consequences.
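For reference, the algebraic style of Guttag (1980) can be illustrated with the classic stack axioms. The sketch below is my own executable rendering of those textbook axioms, not anything taken from the UnPhD; the point is only that a worked example of this kind is what the dissertation never provides.

```python
# Guttag-style algebraic specification of a stack, rendered as executable
# axioms. The representation (a tuple) is hidden behind the operations;
# only the axioms define the type's meaning.
EMPTY = ()

def push(s, x):  return s + (x,)
def pop(s):      return s[:-1]         # axiom: pop(push(s, x)) = s
def top(s):      return s[-1]          # axiom: top(push(s, x)) = x
def is_empty(s): return s == EMPTY     # axiom: is_empty(push(s, x)) = False

# Checking the axioms on a sample value:
s = push(push(EMPTY, 1), 2)
assert pop(push(s, 9)) == s
assert top(push(s, 9)) == 9
assert is_empty(EMPTY) and not is_empty(s)
print("axioms hold")
```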

Abstract or Surreal?

A very obvious one is the claimed introduction of abstract data types. This claim is made at several points in the UnPhD, and there is even a special type of box drawn in the figures - although that only happens in the sample figures of shape-demonstration. There are not really any examples of how it would be used in a Petri net context. No methodical treatment, not even any examples. All there is, is a telling of the rectangle-and-lines, and of how one could divide a rectangle into parts, or add them up (which is the same hierarchical-division idea of Petri nets, as in Peterson and other papers; the abstract-data part of it is not explained otherwise).

At the end, if something had been "introduced," I guess it would be something like the resolution predicate of N&N, or the operators of V&D, i.e., resolving indeterminacy by referring to the data-specifics. And then, we would also carry this section to a page that discusses the plagiarism aspects of the paper. Because, if all you do is wrap the whole thing in a predicate and give a summary of a tutorial paper by MIT's Guttag (1980) in a page or two, what are you claiming to introduce? Both components were already fully implemented. Not to mention the demands of algebraic specification: unlike N&N and V&D, who took the practical approach of resolving directly with predicates, if you go on and also explicitly specify what will be behind those predicates, some discussion is needed to tell how you will manage to synchronize two altogether different specification approaches. We can only guess, by reflecting on how we convert Pascal programs to Petri nets, and vice versa; i.e., if a stack pops, you may make it a Petri net transition, but then what is the sense of specifically naming and summarizing a tutorial? The UnPhD is mute. It then falls only into the category of grossary, some page-filler that exists to get/divert attention.





2.4 Emphasis 3: local control, and distributed system state





2.5 May Expand/Contract Macros At Any Desired Amount

It differentiates macros by drawing them as thick rectangles, rather than thin ones.

And, indeed, except for the very few figures where it tries to show how the macros could be implemented with ordinary Petri nets, it does not use any ordinary (instantaneous) transitions in the examples. By comparison, the N&N has its E-net primitives, and the V&D has predicates, shown with a rhombus.





2.6 A Macro Is Known to Others Through Its External-Specification

In V&D-78, the macro-facility is standardized as a macro-transition with only one input place and one output place, and the authors cite a paper for a proof of correct operation.
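The single-input/single-output convention makes macro expansion interface-preserving; a sketch of the idea follows (my own rendering, not V&D's formalism). Replacing the macro-transition by its internal subnet leaves the outside net connected only through the one input place and the one output place.

```python
# Sketch: a macro-transition with one input place and one output place can
# be replaced by its internal subnet without touching the rest of the net,
# because the outside world connects only through those two places.
def expand_macro(transitions, macro_name, subnet):
    """Replace `macro_name` (single-in, single-out) by `subnet`'s transitions,
    wired to the macro's own input and output place."""
    (inp,), (out,) = transitions.pop(macro_name)   # enforce single in/out
    for name, (ins, outs) in subnet.items():
        # "IN"/"OUT" in the subnet are placeholders for the macro interface
        transitions[name] = (
            [inp if p == "IN" else p for p in ins],
            [out if p == "OUT" else p for p in outs],
        )
    return transitions

net = {"M": (["p1"], ["p2"])}                      # M is the macro-transition
sub = {"m1": (["IN"], ["q"]), "m2": (["q"], ["OUT"])}
print(expand_macro(net, "M", sub))
# {'m1': (['p1'], ['q']), 'm2': (['q'], ['p2'])}
```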





2.7 Hierarchical: A Macro, After Its External-Specification





2.8 Inside Macros: partially-ordered submacros, and local data





2.9 Linking of macros through Petri net places, and through data objects

Event-activation Decision-making

N&N and V&D both have general-operators, with standardized results, for conflict-resolution. The UnPhD approach is like the V&D approach in structure, and like both the N&N and the V&D in the expressed content. In other words, it has pulled in the predicates from the data-graph to stand before transitions. The V&D already declares the event-predicate correspondences as labels in the event-graph. This looks like the E-net primitives in the graph. Indeed, a figure in the UnPhD (and the UnPaper) already shows the correspondence: the middle three items in that figure correspond to the figure in N&N that shows the E-net primitives, and they are look-alikes. Furthermore, when some of those so-called "input/output transfer specifications" are not directly taken from one of the two sources, they are either faulty, or very trivial, or very unusual with no justification given either.

Both rigid and haphazard, at once!

The N&N and V&D approach is to keep the predicates-and-operators free in what they do, but standardize the resulting behavior. It fits well with the idea of tokens, or cash: you could buy fruit at the market whether you earned the cash as a taxi-driver or as a computer programmer. The UnPhD "approach" restricts the operators to five input and four output styles, which display non-standard behavior. Successive activations of the same input-macro would give different results, because of faulty implementations. The definitions of those "chosen" I/O macros are also not really ones I would like to use at all, and they come with no statements justifying the particular choices either; only what they do is listed. E.g., when both of the inputs to the xor-input macro are enabled, it gets into deadlock, and can be resolved only by some external process removing tokens from one of the (or, in general, from all but one of the) input places. How general is its applicability? Why choose it? And why name it "xor"? It could as well be named the "deadlock" or "gridlock" macro.
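The described deadlock is easy to reproduce in a sketch. The firing rule below (block whenever zero or both inputs are marked) is my reading of the UnPhD's description, not a quotation of its definition:

```python
# Sketch of an "xor"-input macro that fires only when *exactly one* of its
# input places is marked. With both inputs marked it blocks, i.e. it
# deadlocks until some external agent drains one of the inputs.
def xor_fire(marking):
    marked = [p for p in ("a", "b") if marking[p] > 0]
    if len(marked) != 1:
        return None                 # blocked: zero or both inputs marked
    p = marked[0]
    marking[p] -= 1
    marking["out"] += 1
    return p

m = {"a": 1, "b": 1, "out": 0}
print(xor_fire(m))   # None: both inputs enabled -> deadlock
m["b"] = 0           # an external process removes the token from b
print(xor_fire(m))   # a: now it fires
```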

A decision-maker from a programming language we know would transfer to an alternative, not block. The UnPhD, by putting program fragments into rectangles and using names like "xor," may at first look like Pascal, C, or the like. That could be a style, but having Petri nets imitate Pascal is not really original. (A high schooler could do that, too; hence, not a Ph.D.-level achievement.) But when the names clash, that really is a problem. It looks alike, but is not: interference between concepts.

Copycat Behavior with Its Absurdity

The thoughtless copying is quite obvious. When something looks absurd in the UnPhD, you may trace the intended meaning in one of the source papers. For example, when it attempts to implement the "++"-input-transfer macro with the basic Petri net transitions, the net effect is two transitions depositing into an unbounded place. Functionally, this is like the FIFO queue of the N&N, but with a name like "another version of inclusive or, with possible re-activation," which is already a bit confusing.

The absurdity is a redundant place that, within the macro, "mutually excludes" two transitions. (A non-macro transition takes no time.) Reading V&D suggests the source of it: fig. 4 (p. 190) in V&D has a part like that. But the difference is that, with V&D, any transition with a single input place and a single output place can be expanded as a macro, and both of the mutually-excluded V&D transitions are like that. As such, in the V&D case, it is a fine representation.

In other words, the UnPhD has not kept in mind the assumptions and the overall structure of the paper(s) it takes from. As a result, even such a small macro-definition has only been a pointer to the copying, and is not really meaningful otherwise. An unbounded place pointed at by two or more transitions does not need any macros at all.
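That no mutual-exclusion machinery is needed can be seen directly: depositing into an unbounded place never disables anything, so every interleaving of the two firings gives the same count. A minimal sketch (the toy net is my own, not the UnPhD's figure):

```python
import itertools

# Two transitions depositing into one unbounded place: depositing never
# disables any other transition, so there is no conflict to "mutually
# exclude", and any interleaving yields the same final token count.
def fire_deposit(m, src):
    m[src] -= 1           # consume the firing transition's input token
    m["shared"] += 1      # the unbounded output place just accumulates

finals = set()
for order in itertools.permutations(["in1", "in2"]):
    m = {"in1": 1, "in2": 1, "shared": 0}
    for src in order:
        fire_deposit(m, src)
    finals.add(m["shared"])
print(finals)  # {2}: the same result in every interleaving
```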

Not to mention that, even if it were an error, the UnPhD (and then the UnPaper) case would only be a duplication of that error, without showing it with another shape. I.e., for V&D, a macro-transition may take some time, and lead to some overlaps. (The particular transitions also fit the expectations of such macro-transitions in that paper, with a single input place and a single output place for each.) As such, a fine representation in the V&D case has turned into a redundant one in the UnPhD case, because the figure is exactly where the UnPhD attempts to show us how its "++"-input transfer macro is to be implemented with ordinary Petri nets, NOT with macros.





2.10 Insensitive to granularities of concurrency and distribution





2.11 A flowchart-like, single graph, representing both events and data

In the Peterson (1977) tutorial, on its first page, an analogy/insight is stated: "The Petri net graph models the static properties of a system, much as a flowchart represents the static properties of a computer program."





2.12 Analyzable after transforming into a Petri net





2.13 A method and a model for design and analysis of Distributed Software Systems

The details of the operations will be discussed later, in the relevant entries. For this overall "a method, a model" entry, let it suffice that the prior art can pull its strings, and does not crash.

Prior art from Valette&Diaz (1978) on top-down design with Petri nets:
On its first page, it lists among its motivations "describing in an easy way the communications and synchronizations in distributed control systems," i.e., the representation of parallelism.

In its later sections, the paper also includes data, and an expansion/compaction ability for the Petri net graph, for both gradual design and verification purposes.

It is workable. The later sections will show that the UnPhD is not making anything work beyond V&D-78.

Prior art from the Peterson (1977) tutorial on Petri nets:
On pp. 233-234, there is a subsection, part of modeling with Petri nets, titled "Modeling of Software." The discussion includes an example of transforming two processes, which use semaphores for mutual exclusion, into a Petri net graph. See its references, too.
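The semaphore-as-Petri-net construction Peterson presents can be sketched as follows. The place names are my own, and this is the standard textbook construction rather than Peterson's exact figure: the semaphore is a place holding one token, and mutual exclusion becomes a token-count invariant.

```python
# Semaphore as a Petri net place: "sem" holds one token; each process's
# "enter" transition consumes it and "exit" returns it, so at most one
# process can be in its critical section at a time.
def enabled(m, ins):
    return all(m[p] > 0 for p in ins)

def fire(m, ins, outs):
    for p in ins:  m[p] -= 1
    for p in outs: m[p] += 1

m = {"sem": 1, "idle1": 1, "crit1": 0, "idle2": 1, "crit2": 0}
enter1 = (["idle1", "sem"], ["crit1"])
enter2 = (["idle2", "sem"], ["crit2"])
exit1  = (["crit1"], ["idle1", "sem"])

fire(m, *enter1)              # process 1 enters its critical section
print(enabled(m, enter2[0]))  # False: process 2 is blocked (no sem token)
fire(m, *exit1)               # process 1 leaves, returning the token
print(enabled(m, enter2[0]))  # True: process 2 may now enter
```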

On p. 234, it lists resource allocation, operating systems, and distributed computer systems among the many subjects that have been mentioned as possible targets of Petri net modeling. At least, the suggestion is already there.

It does not list "distributed software systems," in those words. But ...

It is workable. The later sections will show that the UnPhD is not making anything work beyond N&N-73.

And even worse! The less-feature of losing even Petri-net verification, because of the very naive method suggested for hierarchical, multi-level verification, leaves the system as only a wrapper, which even the Ph.D. recipient does not follow faithfully in his figures. (As would be expected, because that crowds the figures to the maximum, and the whole thing falls back to the label/pointer suggestions of the previous paper(s).) The reason for the confusion, in that case, appears to be that he believed it could be done without the requirements in the other references. But since that is false, the method crashes down in several ways.

The title and the abstract of the Unbelievable-PhD-text stress "being for distributed software systems." But that claim only refers to the well-known Petri net ability to model parallelism and asynchronous operation (independently acting subparts). All three papers are relevant. V&D, especially, tells of this as a motivation on its first page, but clearly states that it is w.r.t. parallelism, and does not carry the word "distributed" into the title or the abstract.





2.14 The application to the "design" of a small distributed software system

Where is it? There is only a mutual-exclusion algorithm, from a CACM article, converted to the representation suggested by the UnPhD. Listing its errors takes quite a bit of space (which I do on another page). And it is deadlocked. It cannot be the failure of the algorithm, most probably; the deadlocks are obvious, glaringly there. Not to mention that this is the example which was supposed to demonstrate the operations of all those unexplained abstract data types, etc. How do they really interact with Petri nets? It is "The End," but the reader leaves the pages with this question in mind - among others.
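Deadlocks of this kind are mechanically checkable: enumerate the reachable markings and look for one in which no transition is enabled. A minimal sketch on a toy net of my own (bounded, so the search terminates; this is not the UnPhD's example):

```python
from collections import deque

# Reachability-based deadlock check: breadth-first search over markings;
# a reachable marking with no enabled transition is a deadlock.
def enabled(m, ins):
    return all(m[p] > 0 for p in ins)

def fire(m, ins, outs):
    m = dict(m)
    for p in ins:  m[p] -= 1
    for p in outs: m[p] += 1
    return m

def find_deadlock(m0, transitions):
    seen, queue = set(), deque([m0])
    while queue:
        m = queue.popleft()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        succ = [fire(m, i, o) for i, o in transitions if enabled(m, i)]
        if not succ:
            return m                 # no transition enabled: deadlock
        queue.extend(succ)
    return None                      # no reachable deadlock

# Toy net: t1 moves the token from p1 to p2; t2 then needs BOTH p1 and p2,
# so after t1 fires, nothing is enabled.
net = [(["p1"], ["p2"]), (["p1", "p2"], ["p1"])]
print(find_deadlock({"p1": 1, "p2": 0}, net))  # {'p1': 0, 'p2': 1}
```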

List other features, too?






Discussion

In summary, the Unbelievable-PhD-text is a merger of the Petri net extensions of two preceding research papers (N&N and V&D), with very little make-up. And then, when even that merger and make-up leads to obvious faults and unhandled consequences, even the basic advantages of Petri nets get lost.

All in all, what the Unbelievable-PhD-text contains can be listed as: a merger of two preceding research papers (N&N and V&D), and some make-up. The "data-relevant" aspect of the make-up itself is a merger of the V&D paper's two graphs into one crowded whole (also very similar to N&N, especially with respect to input-data), yet without providing any method to deal with the resulting combined graph - unlike what V&D (and N&N) do, even though it is the V&D preference to keep the two graphs separate. The input/output transfer specifications turn out to be only a faulty, undiscussed/unjustified translation (with little modification, if any) from N&N, being faulty and vague exactly at the points of difference; and the ideas of using macros for design ease and for verification ease are from N&N and V&D, respectively, anyway.

Unoriginal. So much so that, at times, we can even trace the copying step-by-step by comparing the figures; and/or something that does not make sense with the haphazard renaming in the Unbelievable-PhD-text starts to make sense when you notice the same component in the original paper and read the original explanation for it.

We query what was previously published in the three original papers, and concentrate on the intersection of the result with the UnPhD. (Like a database query.) We will have more, not less, than by reading the UnPhD. The second part will show that nothing else is well-formed. And such differences are few, trivial, and hesitantly applied even in the UnPhD, anyway - which makes discussing the second part, as well as the first part, easy for those who do not have the text of the UnPhD available.

The main structure of the re-merging of the three papers will be on the structure provided by the Petri nets tutorial by Peterson (1977). The second paper, which will add some further insights and some examples, is the paper on Macro E-nets by Noe and Nutt (1973). A third, maybe optional, paper is less heard of, but still has some expressed insights: the paper by Valette and Diaz (1978) on top-down formal specification and verification using Petri nets. It is optional in the sense that the features it suggests are already dysfunctional in the UnPhD, because of faults and omissions, but it lets us guess what the UnPhD probably attempts to do - if we need guessing at all, at a point where the Ph.D. recipient himself has not produced it.

In the UnPhD, the style of plagiarism is not one of totally failing to mention the references. Two of the references were quite well-known papers, anyway. The UnPhD refers to them as part of its literature overview, at some point. But then, it never again tells us, at any relevant point, that some idea is actually re-published from somewhere else. You need to sort and cross-reference, and the end result is that nothing else is left - except some false claims, and a lot of errors in the examples, very much unsuggestive of Ph.D. work.

And how about the false claims? And how about the "non-material errors" that come about because of some such false claims? These have happened because, while merging the other (research/tutorial) papers, or claiming to integrate some tutorial, some crucial part or another has been left out, and/or the assumptions of the merged papers clash, and the result can undo even the established tools of the field. In other words, they are Missing-In-InAction (MIIA). You can read further on the page The UnPhD as Only False Claims and Grossary.

And even less! Some of what is already implemented in the previous literature, and even explained there as the reason for including those details, has not been put to use, not integrated, and the end result is an act of cut-and-paste for the sake of nothing. Hence, some of the discussion in it already undoes itself, even without comparing with the references.

Such less-features even include the very data-inclusion itself, because the data, at the end, is divorced from the behavior anyway, and totally sent away to a short reference to a tutorial paper which is not relevant to Petri nets at all.











Corresponding sections in the first edition of the (monolithic) discussion

0. Unoriginal

0.0 The questions are not original.

0.1 The answers are not original.

That version of the document may be reached at the old page discussing the UnPhD



Any Questions?: . . (Request Content . . . . . Correct Errors . . . . . Submit Case Study . . . . . Report Content Similarity.)

Written by: Ahmed Ferzen/Ferzan R Midyat-Zilan (or, Earth)
Copyright (c) [2002,] 2003 Ferzan Midyat. All rights reserved.