By the middle of 1980, several well-known papers had already published both the questions and the solutions, well before copycat82, which adds only a large number of errors and certainly plagiarizes. The ideas in those prior-art research papers are rather close to each other at several points, though, and it appears rather trivial to merge (or "morph") two or more of them.
The word "trivial" needs to be qualified, then. Let us refine our vocabulary with the terms "easy" versus "ignorant-proof." For example, with these net formalisms, composing ideas from different papers is very easy, as long as you know what you are doing; but if you are ignorant in the field, you should not attempt to cut-and-paste, because concurrent-systems modeling is not "ignorant-proof." Furthermore, pursuing a Ph.D. degree in a field implies a claim of competence, and even an ambition to probe the limits and improve upon them; copycat82 is the exact opposite. Most people would probably never imagine that anybody would commit such an immense number and variety of errors in a dissertation, and without any new ideas at all.
The only utility of such a text, then, is in finding out what to warn against when teaching the field. Even that may sound absurd, because it would mean a lot of fuss about errors that are rarely committed. If you list all such "impossible errors" along with the more probable ones, you should categorize them so that your readers/students do not get lost among trivia; for example, most simply, sort them as "frequent," "possible," and "impossible." (And please do stop the people who keep committing the "impossible" errors from obtaining a title such as "Ph.D." or "engineer" or "surgeon," etc., to the extent you can stop it.) Whatever other error-lists and links (cf. a search-engine directory) you may publish, this aspect should always be included, in each, whenever relevant.
For example, we can easily map a few elements between the E-net and VD78 formalisms, although they mostly study different strategies. This means they could contribute to each other, if a minimum of care is observed. But at exactly such points, copycat82 fails: it steadily makes the wrong choices. You could always correct them yourself, but then the result is trivially either E-nets or VD78.
The partial mappability of E-nets and VD78 is not such a strange thing, because all those prior-art papers (NN73, VD78, etc.) share similar starting points (the concurrency literature, Petri nets), while each contributes its own ideas, too. E-nets mainly study the modeling issues, and they simulate deterministically. VD78 mostly swims on the other side of the river, with a three-level strategy.
Next, some of these contributions are very easily portable to other net formalisms, whereas some improvements are very specific to the prior-art paper in question. For example, the net-macros of NN73, defined for E-nets, are easily portable to many other net formalisms (e.g., to FSA, Petri nets, UCLA graphs, etc.).
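To see why macros port so easily, note that a net macro is essentially a subnet template plus a renaming step, which any net formalism with named places and transitions can accommodate. The following is a minimal sketch, not taken from NN73 or any of the cited papers; all names and the dictionary-based net representation are illustrative assumptions.

```python
# Illustrative sketch: a "net macro" as a reusable subnet template.
# The net representation (dicts of places and transitions) is assumed
# for illustration only, not drawn from NN73 or the other papers.

def expand_macro(net, macro, prefix):
    """Copy the macro's places and transitions into `net`, renaming
    each element with `prefix` so repeated instantiations do not clash."""
    for place in macro["places"]:
        net["places"].add(prefix + place)
    for name, (inputs, outputs) in macro["transitions"].items():
        net["transitions"][prefix + name] = (
            [prefix + p for p in inputs],
            [prefix + p for p in outputs],
        )
    return net

# A two-place hand-off macro, instantiated twice into one net.
handoff = {
    "places": {"in", "out"},
    "transitions": {"move": (["in"], ["out"])},
}
net = {"places": set(), "transitions": {}}
expand_macro(net, handoff, "a_")
expand_macro(net, handoff, "b_")
```

Since only copying and renaming are involved, nothing here depends on E-net semantics; the same mechanism would serve Petri nets, FSA, or UCLA graphs.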
Similarly, both strategies for reducing subnets are easy to port. That is, both the VD78 strategy (beforehand, with its criteria) and the SARA/UCLA strategy (afterwards, with its listed reduction heuristics) are usable with E-nets, and probably elsewhere, too. Either way, a subnet is reduced to a(n opaque) T-transition. This is a valuable improvement, especially if a nondeterministic reachability test is intended, because reducing subnets reduces the complexity/dimensionality of the marking space.
With a deterministic test/simulation, it is again valuable, even if not necessary, as it helps reduce the load on rote memory and/or the documentation burden: the subnet is virtually reduced to a simple element, instead of "a macro with such-and-such semantics that must be remembered."
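The dimensionality argument above can be made concrete: internal places of a reduced subnet drop out of the marking vector entirely. The sketch below is a toy illustration under assumed names and an assumed dict representation; it is not the VD78 or SARA/UCLA algorithm, only the net effect both strategies share.

```python
# Toy sketch of subnet reduction: replace a chain of transitions by one
# opaque transition "T"; the chain's internal places then vanish from
# the marking vector. Names and representation are illustrative only.

def reduce_subnet(transitions, subnet_names, opaque_name):
    """Replace the `subnet_names` transitions by a single transition
    whose inputs/outputs are the subnet's external places; places that
    are both produced and consumed inside the subnet are internal."""
    produced = {p for t in subnet_names for p in transitions[t][1]}
    consumed = {p for t in subnet_names for p in transitions[t][0]}
    internal = produced & consumed
    ext_in, ext_out = [], []
    for t in subnet_names:
        ins, outs = transitions.pop(t)
        ext_in += [p for p in ins if p not in internal]
        ext_out += [p for p in outs if p not in internal]
    transitions[opaque_name] = (ext_in, ext_out)
    return transitions

# A three-stage pipeline: p0 -t1-> p1 -t2-> p2 -t3-> p3
net = {"t1": (["p0"], ["p1"]), "t2": (["p1"], ["p2"]), "t3": (["p2"], ["p3"])}
reduce_subnet(net, ["t1", "t2", "t3"], "T")
# Places p1 and p2 no longer appear anywhere: the marking space has
# lost two dimensions, which is exactly what helps a reachability test.
```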
NN73 (p.724) refers to a "transition procedure involving statistical terms" as an alternative to macros, but does not discuss it further. Such a statistical-modeling idea fits the E-net preference for deterministic simulation better than the VD78 atomization. In a simulation study with statistically modeled/summarized elements, each run could be an experimental statistical sampling of the probabilistic behavior of the net. Then you may run, for example, 100 rounds of the experiment, and process the resulting data with ANOVA, linear regression, etc.
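The repeated-runs idea can be sketched as follows. This is an assumption-laden toy, not anything from NN73: the exponential delay standing in for a statistically summarized subnet, the fixed deterministic part, and all names are invented for illustration.

```python
# Toy sketch: a transition whose internal subnet is summarized by a
# probability distribution; each run is one statistical sample of the
# net's behavior. The exponential delay and the fixed 2.0-unit path
# are illustrative assumptions, not from NN73 or the other papers.
import random
import statistics

def run_once(rng):
    """One run: a deterministic path plus one statistically
    summarized (reduced/opaque) element."""
    delay = rng.expovariate(1.0)   # statistically summarized subnet
    fixed = 2.0                    # deterministic part of the path
    return fixed + delay

rng = random.Random(42)            # seeded for reproducibility
samples = [run_once(rng) for _ in range(100)]   # 100 rounds
mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
# `samples` is now a data set ready for ANOVA, regression, etc.
```

Each run plays the role of one observation; the 100-round data set is what the ANOVA or regression step would then consume.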
Research is for some positive contribution, not the other way around. Copycat82 did NOT invent Petri nets, and did NOT develop the Petri-net analysis or the modeling techniques that are associated with Petri nets. What was the basis of its claim to a Ph.D. title, if nothing is new? Is it the un-citing of the references at exactly those points where citing is due? Or else, may a mere redrawing of the figures, dumping everything in the formal listings (of E-nets) into rectangles within that "single graph," be taken as a novelty?
Think of the graph clutter (with everything dumped into a single graph), the visual non-discrimination (with rectangles standing for everything), and the torn-apart arcs (when that graph is partitioned into modules). Then reflect on the existence of so many design errors in copycat82 itself; i.e., even copycat82 has fallen prey to the chaos that its own cut-and-pastes have brought about. Although the prior-art tools were usable, copycat82 is not, as its own examples demonstrate. Who else was supposed to use it, if even its own author could not?
After reading [about] the prior art, it is obvious that copycat82 has contributed nothing: no original questions, and no original answers. It only republishes. This suggests the non-contribution of copycat82 by contraposition: given that literature, what does copycat82 claim beyond it? The answer is: nothing. This nullifies any claim of worth for copycat82.
A point-by-point listing of the sources, a questioning of the arbitrary changes, and a noticing of the faults lead to only one conclusion: copycat82 has plagiarized, and cannot even provide a few working examples with what had already been published.