G Stewart von Itzstein

Bachelor of Computer and Information Science (Honours)

A thesis submitted for the Degree of
Doctor of Philosophy


31st December 2004


South Australia




Reconfigurable Computing Laboratory

School of Computing and Information Science

University of South Australia

1      Introduction. 1

1.1       Introduction. 2

1.2       Motivation. 3

1.3       Structure of Thesis. 7

1.4       Research Contribution. 8

2      Literature Review. 9

2.1       Introduction. 10

2.2       Abstractions of Concurrency. 11

2.2.1     Abstraction. 11

2.2.2     The Hierarchy of Abstractions of Concurrency. 12

2.3       Process Calculi and the Join Calculus. 17

2.3.1     Introduction. 17

2.3.2     Join Calculus. 18

2.3.3     Join Syntax. 18

2.3.4     Join Semantics. 19

2.3.5     Synchronous Names. 20

2.3.6     Join Calculus vs. Other Calculi. 21

2.3.7     Summary. 21

2.4       The Object Oriented Paradigm. 22

2.4.1     Advantages of the Object Oriented Paradigms. 24

2.4.2     History of Object Oriented Languages. 25

2.4.3     Summary. 26

2.5       Concurrent Object Oriented Languages. 27

2.5.1     Actor versus Non-Actor Based Concurrent Languages. 29

2.5.2     Integrating Concurrency into Object Oriented Languages. 30

2.5.3     Defining Mainstream Languages. 31

2.5.4     Implementing Concurrent Object Oriented Languages and Extensions. 32

2.5.5     Categorization of Concurrent Object Oriented Languages. 37

2.5.6     Intra-Process Communications. 41

2.6       Related Work. 43

2.6.1     Similarities between Join Java and Polyphonic C#. 43

2.6.2     Differences between Join Java and Polyphonic C#. 43

2.6.3     Summary. 45

2.7       Conclusion. 46

3      Join Java Syntax and Semantics. 47

3.1       Introduction. 48

3.2       Java Language Semantics and its Deficiencies. 49

3.2.1     Java Concurrency. 49

3.2.2     Synchronization. 51

3.2.3     Wait/Notify. 53

3.3       Principles for Improving Java Concurrency Semantics. 55

3.3.1     High-Level Requirements. 55

3.3.2     Extension Decisions in Intra-Process Communications. 57

3.3.3     Concurrency Semantic Choice. 57

3.4       Join Java Language Semantics. 59

3.4.1     Changes to the Language. 59

Join Methods. 60

Join Method Pattern Matching. 60

Dynamic Channel Formation. 61

Matching Behaviour. 62

Synchronization. 62

Asynchronous Methods. 62

3.4.2     Type System. 62

3.4.3     Relation between Join Java and Java. 63

Inheritance. 63

Overloading. 63

Interfaces. 63

Polymorphism. 65

Inheritance Anomaly. 65

Deadlocks. 66

3.4.4     Summary. 66

3.5       Conclusion. 67

4      Implementation. 68

4.1       Introduction. 69

4.2       Compiler Choice. 70

4.3       Translator. 71

4.3.1     The Extensible Compiler Architecture. 71

Extensible Abstract Syntax Trees. 72

Syntactic Analysis. 74

Extension Semantic Analysis. 74

Translation. 75

Backend. 77

4.3.2     Changes to the Extensible Compiler for Join Java. 78

Extended Abstract Syntax Tree. 80

Syntactic Analyser. 83

Semantic Analyser. 84

Translation. 85

Silent Semantic Analyses. 98

4.4       Pattern Matcher. 99

4.4.1     Application Programmer Interface for the Pattern Matcher. 99

joinMatcher class. 101

JoinInterface Interface. 102

returnStruct class. 103

JoinTranslatedException class. 103

4.4.2     Approaches to Pattern Matching. 103

Tree Structured Implementation. 104

Pre-calculated Table Lookup Implementation. 106

Optimizations. 107

4.4.3     Summary. 109

4.5       Issues Identified in the Prototype. 110

4.5.1     Multi-Directional Channels. 110

4.5.2     Lock Parameter Association. 111

4.6       Conclusion. 112

5      Design Patterns. 114

5.1       Introduction. 115

5.2       Patterns for Synchronization. 116

5.2.1     Scoped Locking. 116

5.2.2     Strategized Locking. 117

5.2.3     Thread Safe Interfaces. 122

5.2.4     Double Check Locking Optimization. 124

5.3       Patterns for Concurrency. 126

5.3.1     Active Object. 126

5.3.2     Futures. 127

5.3.3     Monitor Object. 129

5.3.4     Half-Sync/Half-Async. 130

5.3.5     Leader/Follower. 135

5.4       Simple Concurrency Mechanisms. 137

5.4.1     Semaphores. 137

5.4.2     Timeouts. 139

5.4.3     Channels. 141

5.4.4     Producer Consumer. 142

5.4.5     Bounded Buffer. 147

5.4.6     Readers Writers. 148

5.4.7     Thread Pool. 151

5.4.8     Other Patterns. 153

5.5       Conclusion. 156

6      Concurrency Semantics. 158

6.1       Introduction. 159

6.2       State Charts and Diagrams. 160

6.2.1     State Diagrams. 160

6.2.2     State Charts. 163

6.3       Petri-Nets. 166

6.3.1     Structures. 166

6.3.2     Unbounded Petri nets. 167

An Example in Join Java. 167

6.3.3     Bounded Petri nets. 168

An Example in Join Java. 169

6.3.4     Partial Three Way Handshake Petri net. 170

6.4       Conclusion. 174

7      Evaluation. 175

7.1       Introduction. 176

7.2       Performance. 177

7.2.1     Pattern Benchmarks. 177

7.2.2     Low Level Benchmarks. 179

7.2.3     Compilation Speed. 179

7.2.4     Performance Factors. 180

7.3       Extension Evaluation. 183

7.3.1     Modularity. 183

7.3.2     Expressive Power. 184

Priority Constraints and Exclusion. 185

Concurrent Pattern Evaluation. 185

7.3.3     Ease of Use. 189

7.4       Conclusion. 190

8      Conclusion. 191

8.1       Introduction. 192

8.2       Contributions. 195

8.2.1     Support for Higher-Level Abstractions at Language Level. 195

8.2.2     Introduction of Dynamic Channel Creation Semantics into Java. 196

8.2.3     Improved Thread Integration in Java. 196

8.2.4     Implementation of Process Calculus Semantics into a Production Language. 197

8.2.5     Implementation of a Set of Patterns in a New Concurrent Language Join Java. 197

8.2.6     Close Integration into the Syntax and Semantics of the Base Language. 197

8.2.7     Reduction of Dependency on Low-Level Concurrency Primitives. 198

8.2.8     Reduction of Reliance on the Low-Level Synchronization Keywords in Java. 198

8.2.9     Parameterized Monitors/Parameterized Threads. 199

8.2.10        Investigation of Pattern Matchers and Potential Optimizations. 199

8.3       Future Work. 200

8.3.1     Back-Outs and Lock Checking. 200

8.3.2     Multi-Directional Channels. 200

8.3.3     Inheritance. 200

8.3.4     Expanding Pattern Matching. 201

8.3.5     Hardware Join Java. 201

8.4       Concluding Remark. 202

9      References. 203

Figure 1.  Erroneous Use of the Synchronized Keyword. 5

Figure 2.  Foundation of Concurrency Abstraction. 14

Figure 3.  Extended Concurrency Abstraction Hierarchy. 15

Figure 4.  Complete Concurrency Abstraction Hierarchy. 16

Figure 5.  Funnel Variant of Join Calculus Syntax. 18

Figure 6.  Example Join Calculus Expression. 19

Figure 7.  Addition of B to CHAM. 20

Figure 8.  Operational Semantics of the Join Calculus. 20

Figure 9.  A partial family tree of object-oriented languages. 25

Figure 10.  Evolution of Actor Based Languages. 30

Figure 11.  Using subclassing in standard Java to achieve multithreading. 50

Figure 12.  Using interfaces in standard Java to achieve multithreading. 50

Figure 13.  Accessor Mutator Implementation. 51

Figure 14.  Omission of Synchronized Keyword. 52

Figure 15.  Dangerous Modification of Monitor Object. 53

Figure 16.  A Join Java Method. 59

Figure 17.  Join Java Language Extension Syntax. 60

Figure 18.  A Join Java Class Declaration. 60

Figure 19.  Shared Partial Patterns. 61

Figure 20.  Channel Example Code. 62

Figure 21.  Thread Example. 63

Figure 22.  Polymorphic Join Java Fragments. 64

Figure 23.  Interfaces in Join Java. 64

Figure 24.  Using Interfaces in Join Java. 65

Figure 25.  Polymorphism in Join Java. 65

Figure 26.  Deadlock in Join Java. 66

Figure 27.  Stages of Extensible Compiler without Translation. 73

Figure 28.  Abstract Syntax Tree before Syntactic Analysis. 74

Figure 29.  Abstract Syntax Tree after Semantic Analysis. 76

Figure 30.  Abstract Syntax Tree after Translation before Silent Semantic Analysis. 77

Figure 31.  Abstract Syntax Tree after Silent Semantic Analysis. 78

Figure 32.  Structure of the Join Java Compiler. 79

Figure 33.  Simple Join Java Hello World Program. 80

Figure 34.  Standard Java Abstract Syntax Tree Example. 81

Figure 35.  Join Java Abstract Syntax Tree Example. 82

Figure 36.  Join Java Additions to Java Grammar. 83

Figure 37.  Hello World Initializer Method in Translated Code. 86

Figure 38.  Hello World Dispatch Method in Translated Code. 87

Figure 39.  Hello World Notify Translated Code. 88

Figure 40.  Hello World Join Method Translated Code. 89

Figure 41.  Asynchronous Method Source Code. 90

Figure 42.  Asynchronous Method Translated Code. 91

Figure 43.  Asynchronous Join Java Pattern Source Code. 92

Figure 44.  Asynchronous Join Java Pattern Translated Code. 92

Figure 45.  Synchronous Join Java Source Code. 93

Figure 46.  Synchronous Join Java Translated Code. 94

Figure 47.  Base Type Join Java Source Code. 95

Figure 48.  Base Type Join Java Translated Code. 96

Figure 49.  Repeated Join Fragment Usage Source Code. 96

Figure 50.  Repeated Join Fragment Usage Translated Code. 97

Figure 51.  Internal Representation of Tree Pattern Matcher. 104

Figure 52.  Pre-calculated Pattern Matcher. 105

Figure 53.  Symmetry before Example. 107

Figure 54.  Symmetry after Example. 107

Figure 55.  Example Multi-Directional Program. 109

Figure 56.  Lock Parameter Association Example Code. 110

Figure 57.  Join Java Scoped Locking Implementation. 116

Figure 58.  Java Scoped Locking Implementation. 116

Figure 59.  Join Java Strategized Locking Library Code. 118

Figure 60.  Join Java Strategized Locking User Code. 119

Figure 61.  Java Strategized Locking Implementation Code. 119

Figure 62.  Java Strategized Locking Implementation Code. 120

Figure 63.  Java Strategized Locking Use Code. 121

Figure 64.  Join Java Thread Safe Interface Code. 122

Figure 65.  Java Thread Safe Interface Code. 123

Figure 66.  Join Java Double Check Locking Optimization Code. 123

Figure 67.  Java Double Check Locking Optimization Code. 124

Figure 68.  Join Java Active Object Code. 125

Figure 69.  Java Active Object Code. 126

Figure 70.  Join Java Futures Code. 127

Figure 71.  Java Futures Code. 127

Figure 72.  Join Java Monitor Object Code. 128

Figure 73.  Java Monitor Object Code. 129

Figure 74.  Join Java Half-Sync/Half-Async Test Code. 130

Figure 75.  Join Java Half-Sync/Half-Async Services Code. 130

Figure 76.  Join Java Half-Sync/Half-Async Queue and External Source Code. 131

Figure 77.  Java Half-Sync/Half-Async Test Code. 132

Figure 78.  Java Half-Sync/Half-Async Service Code. 132

Figure 79.  Java Half-Sync/Half-Async Queue Source Code. 133

Figure 80.  Java Half-Sync/Half-Async External Source Code. 133

Figure 81.  Join Java Leader/Follower Code. 134

Figure 82.  Java Leader/Follower Code. 135

Figure 83.  Join Java Code Emulating Semaphores. 137

Figure 84.  Java Code Emulating Semaphores. 137

Figure 85.  Join Java Timeout. 139

Figure 86.  Java Timeout. 140

Figure 87.  Join Java Uni-Directional Channel. 141

Figure 88.  Java Uni-Directional Channel. 141

Figure 89.  Join Java Producer/Consumer Code. 142

Figure 90.  Join Java Producer/Consumer Support Code. 143

Figure 91.  Java Producer/Consumer Code. 144

Figure 92.  Java Producer/Consumer Support Code. 145

Figure 93.  Join Java Bounded Buffer. 145

Figure 94.  Java Bounded Buffer. 147

Figure 95.  Join Java Reader Writers Source part 1. 148

Figure 96.  Join Java Reader Writers Source part 2. 148

Figure 97.  Java Reader Writers Source part 1. 149

Figure 98.  Java Reader Writers Source part 2. 150

Figure 99.  Join Java Thread Pool Source. 151

Figure 100.  Java Thread Pool Source. 152

Figure 101.  Event Loop Concurrency. 154

Figure 102.  Simple State Transition Diagram. 160

Figure 103.  State Transition. 160

Figure 104.  State Diagram. 161

Figure 105.  State Diagram Join Java Code. 162

Figure 106.  State Chart. 163

Figure 107.  State Transition Join Java Code. 164

Figure 108.  Simple Petri Net. 165

Figure 109.  Non-Bounded Petri Net before Transition. 166

Figure 110.  Non-Bounded Petri Net after Transition. 167

Figure 111.  Non-Bounded Petri Net Join Java Code. 167

Figure 112.  Bounded Petri Net Example. 168

Figure 113.  Bounded Petri Net Join Java Code. 169

Figure 114.  Partial Three Way Handshake. 170

Figure 115.  Petri Net Join Java Method. 171

Figure 116.  TCP Handshake Class (part a). 171

Figure 117.  TCP Handshake Class (part b). 172

Figure 118.  Join Java vs. Java Benchmark Speed (Java = 100%). 177

Figure 119.  Patterns for Synchronization Java vs. Join Java. 185

Figure 120. Patterns for Concurrency Java vs. Join Java. 186

Figure 121. Simple Concurrency Java vs. Join Java. 187



Table 1.  Summary of Concurrency Integration of Languages. 38

Table 2.  Summary of Mainstream vs. Non-Mainstream Languages. 39

Table 3.  Summary of Actor vs. Non Actor Based Languages. 40

Table 4.  Summary of Communication Mechanisms for Concurrent Languages. 41

Table 5.  Pattern Matcher API Summary. 101

Table 6.  Join Interface API Summary. 101

Table 7.  Return Structure API Summary. 102

Table 8.  Join Java vs. Java Benchmarking Results. 177

Table 9.  Join Java Cafe Style Results. 178

Table 10.  Java Cafe Style Results. 178

Table 11.  Compilation Speed Join Java vs. Java. 179

Table 12.  Patterns for Synchronization Lines of Code. 185

Table 13.  Patterns for Concurrency Lines of Code. 186

Table 14.  Simple Concurrency Mechanisms Lines of Code. 187





There are a number of people and organizations that have contributed both to the research and to the author’s capability to complete this thesis.

A number of organizations contributed to my research.  Firstly, Sun Microsystems, for the initial equipment used in the development of the compiler prototype.  The Ecole Polytechnique Federale de Lausanne (EPFL), which partly funded my visits to Switzerland.  The Sir Ross & Sir Keith Smith Trust, for additional travel assistance.  The School of Computer and Information Science at the University of South Australia, for time and material support, especially during the final part of my PhD.  I would also like to thank the Australian Government and the Australian people for funding me whilst undertaking both my undergraduate and postgraduate education.

To my family, Dallas Hohl, and my brother-in-law Mark Hohl: thanks for being there for me.  To Dad, thanks for all those days I should have been out building fences.  I would also like to thank Uncle Mark for being a role model whilst I was studying.

To Ben Avery and Ron Graml, thank you for being such great friends.  To my friends and colleagues at the university: Grant Wigley for our political discussions, Dr Wayne Piekarski for being pushy, Phillipa Osborne for the chats, Justin Taylor for the moral support, Greg Warner and Malcolm Bowes for technical support, and Frank Fursenko for those coffees in the bar.  Thanks also to Jim Warren for his supervision of my honours thesis.  To my old flat mates Wayne Jewell and James Abraham, thanks for putting up with me.

I would also like to thank my old boss, Professor Brenton Dansie, and my new boss, Professor Andy Koronios.  I would also like to thank the general staff within the department; without you, the school would not function.  Thank you for all the assistance with my teaching and research.  Jo Zucco and Kirsten Wahlstrom, thank you for giving me the time to finish my write-up.

I would also like to thank Dr Christoph Zenger and Professor Konstantin Laufer, who marked my thesis; thank you for the extensive feedback that contributed to the quality of the final product.  Thanks also to Sue Tyerman for the last-minute editing.  I would also like to thank Professor Martin Odersky for the initial direction of this thesis and Dr Matthias Zenger for his significant technical assistance.  Finally, the author wishes especially to thank Dr David Kearney for his supervision.



But like all systems, it has a weakness.  The system is based on the rules of a building. One system built on another.  If one fails, so must the other.  (Wachowski 2003)

Concurrency expression in most popular programming languages is still quite low-level, based on constructs such as semaphores and monitors that have not changed in twenty years.  Emerging libraries (such as the upcoming Java concurrency library JSR-166) show that programmers demand higher-level concurrency semantics in mainstream languages.  Communicating Sequential Processes (CSP), the Calculus of Communicating Systems (CCS) and the Pi calculus have higher-level synchronization behaviours defined implicitly through the composition of events at the interfaces of concurrent processes.  The Join calculus, on the other hand, has explicit synchronization based on a localized conjunction of events defined as reduction rules.  The Join semantics appear to be more appropriate to mainstream programmers, who want explicit expressions of synchronization that do not breach the object-oriented idea of modularization.  Join readily expresses the dynamic creation and destruction of processes and channels, which is sympathetic to dynamic languages like Java.


The research described here investigates whether the object-oriented programming language Java can be modified so that all expressions of concurrency and synchronization can use higher-level syntax and semantics inspired by the Join calculus.  Of particular interest is whether a true integration of Join into Java can be achieved: this work seeks to develop a true language extension, not just a class library.  This research also investigates the impact of the Join constructs on the programming of well-known concurrency software patterns, including the size and complexity of the programs.  Finally, the impact on the performance of programs written in the language extension is also studied.


The major contribution of the thesis is the design of a new superset of Java called Join Java and the construction and evaluation of the first prototype compiler for the language.  The Join Java language can express virtually all published concurrency patterns without explicit recourse to low-level monitor calls.  In general, Join Java programs are more concise than their Java equivalents, and the overhead introduced in Join Java by the higher-level expressions derived from the Join calculus is manageable.  The synchronization expressions associated with monitors (wait and notify), which are normally located in the body of methods, can be replaced by Join Java expressions (the Join methods), which form part of the method signature.  This provides future opportunities to further enhance the inheritance possibilities of Join Java, possibly minimizing the impact of inheritance anomalies.


I declare that this thesis does not incorporate without acknowledgment any material previously submitted for a degree or diploma in any university.  I also declare that, to the best of my knowledge, it does not contain any material previously published unless noted below, or written by another person, except where due reference is made in the text.

Some of the material in this thesis has been published in the following papers. 

A brief introduction to the ideas covered in this thesis appeared in:

Itzstein, G. S. and Kearney, D (2001). Join Java: An Alternative Concurrency Semantic for Java, University of South Australia Report ACRC-01-001.

Parts of the concurrency applications chapter appeared in:

Itzstein, G. S. and Kearney, D (2002). Applications of Join Java. Proceedings of the Seventh Asia Pacific Computer Systems Architecture Conference ACSAC'2002. Melbourne, Australia, Australian Computer Society: 1-20.

Some of the implementation chapter appeared in:

Itzstein, G. S. and Jasiunas, M (2003). On Implementing High Level Concurrency in Java. Advances in Computer Systems Architecture 2003, Aizu, Japan, Springer-Verlag.

Parts of the design patterns chapter appeared in:

Itzstein, G. S. and Kearney, D (2004). The Expression of Common Concurrency Patterns in Join Java. 2004 International Conference on Parallel and Distributed Processing Techniques and Applications, Nevada, United States of America.

G Stewart Itzstein



Small opportunities are often the beginning of great enterprises.



1.1      Introduction

This chapter introduces the research reported in this thesis.  The chapter is divided into three sections.  The first section gives the motivation for the work, presents an overview of the research, and indicates deficiencies in current expressions of concurrency.  The second section describes the structure of the thesis.  Finally, the last section describes the research contributions of the thesis.

1.2      Motivation

Given the long history of object-oriented and concurrent programming (Dahl and Dijkstra 1972; Hoare 1985; Milner 1989), it might be expected that all the important issues have been thoroughly investigated.  In fact, the concurrency implementations in modern mainstream languages are still quite primitive, even as concurrency has become increasingly critical to modern software and operating systems.  This is reflected at all levels of computer systems architecture, from hardware multi-threading (Intel 2004) to multi-threaded programming languages where concurrency is more closely integrated into the language syntax.  With this increased provision of concurrency, it would be expected that modern languages such as C++ and Java would have incorporated a high-level set of language elements to better express concurrency.  However, this is not the case.  For example, C++ uses operating system level threads and a semaphore library that is not integrated into the language.  In Java, even though concurrency appears to be more closely integrated, the expression of synchronization is still quite low-level, being loosely based on monitors (Hoare 1974; Buhr, Fortier et al. 1995).  Hansen states that the monitor implementation of Java (its claimed “high-level” expression of concurrency) is not even a true reflection of his original vision of monitors (Hansen 1999).  He argued that Java synchronization should be applied by default rather than left up to the programmer.  Another author, Holub, made the following observation regarding Java.

“The Java programming language's threading model is possibly the weakest part of the language. It's entirely inadequate for programs of realistic complexity and isn't in the least bit Object Oriented.” (Holub 2000)

Process calculi, on the other hand, are designed to model concurrency rather than implement it (Milner 1989).  These calculi have a strong research heritage in the expression and formal meaning of concurrency.  The deficiencies identified by Holub and Hansen in mainstream object-oriented languages might be overcome by using ideas from process calculi research.

Concurrency has been implemented in many languages.  The first purpose-built concurrent programming language was Simula (Dahl and Nygaard 1966), which was also the first language to introduce some of the ideas of object-oriented programming.  Simula used a crude mechanism of co-routines to simulate concurrency.  Later languages such as Ada (Mapping and Team 1994) improved upon this simple mechanism.  Strangely, however, C++ supplied less support for concurrency, instead leaving implementations to the outside environment[1].

The authors of Simula incorporated a primitive expression of concurrency via co-routines because they recognized that the world to be simulated was itself concurrent.  Since then there has been continuing interest in supporting parallelism in a number of different programming paradigms, including the object-oriented paradigm.  Yet “modern languages” like Java support concurrency and synchronization using technology that is nearly 30 years old.  These low-level primitives, such as semaphores (Dijkstra 1968) and monitors (Hoare 1974), even though very flexible, are not easy to scale up to large applications, nor are they sympathetic to the object-oriented paradigm.  This has been recognized in the research community, with a number of higher-level concurrency implementations being proposed, such as JCSP (Welch 1999), JavaTrevini (Colby, Jagadeesan et al. 1998) and JSR-166 (Lea 2002).  Yet libraries do not entirely solve the problem, as they do not integrate closely with the existing language semantics, and so encourage the poor programming practices the semantics were supposed to prevent.

Whilst the concurrency requirements of imperative, non-object-oriented languages could be satisfied by low-level concurrency structures, object-oriented languages extensively use the structure of the program to represent information about the modelled domain.  Programmers using object-oriented languages model the problem they are solving with the objects in their programs.  Consequently, if a language does not support concurrency within the object-oriented framework, it presents a serious impediment to the design of quality programs.  This is more critical now that object-oriented programming has become one of the most popular programming paradigms for system development (Andrews 1998).  Languages such as C++ (Stroustrup 1983) and later Java (Gosling and McGilton 1996) currently have the attention of a large proportion of mainstream programmers (JobNet 2004).  For instance, in 1998, just two years after Java was first released, the language was being used in 40% of IT companies (Andrews 1998).  Java has also been used as a teaching language in 44% of universities (Hardgrave and Douglas 1998), and has found wide-scale acceptance in a number of real-world application domains.  For example, Java’s ability to act as an embedded language for small devices has been leveraged by Nokia in an API for mobile phones (Nokia 2003).

To see how easy it is to get into trouble using the low-level concurrency constructs of an object-oriented language like Java, a simple example relating to the use of a synchronized block is now presented.  Consider the code in Figure 1 below.

public class Breaksync {

  Object lockingObject = new Object();

  public void broken() {
    synchronized (lockingObject) {

      // various code

      lockingObject = new Object();

      // from this point the code is no longer protected:
      // code here that modifies supposedly
      // protected data could corrupt the data structure
    }
  }
}

Figure 1.  Erroneous Use of the Synchronized Keyword

The monitor lock reference in Java can be changed halfway through a synchronized section of code.  This problem is fundamentally caused by the fact that all non-base types in Java are represented by references rather than the object identity itself.  The lock is used via a variable reference that can be arbitrarily changed, which means the user may change what the reference points to and thus replace the monitor at will.  Java code can therefore look safe and yet, when executing, be completely unprotected from access by multiple threads.  Any successful language extension should solve this problem by hiding the lock from the user, so that the lock cannot be accidentally replaced at runtime.  This problem is only one of the more direct examples; in other situations, problems can be extremely hard to locate and cause infrequent failures at runtime.
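Within plain Java, the narrow lock-replacement hazard of Figure 1 can be avoided by declaring the monitor reference final, so that reassignment becomes a compile-time error.  The following sketch illustrates this (the class SafeSync and its methods are illustrative names, not part of the language extension proposed here); note that it removes only this one hazard, while the other low-level pitfalls described in this section remain.

```java
// Sketch only: conventional Java mitigation for the Figure 1 problem.
// Declaring the monitor reference `final` makes replacing the lock a
// compile-time error, so the synchronized block stays protected.
public class SafeSync {
    private final Object lock = new Object(); // cannot be reassigned

    private int counter = 0;

    public void increment() {
        synchronized (lock) {
            counter++; // protected for the whole block
            // lock = new Object(); // would not compile: lock is final
        }
    }

    public int value() {
        synchronized (lock) {
            return counter;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SafeSync s = new SafeSync();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) s.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) s.increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(s.value()); // prints 2000
    }
}
```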

Object-oriented languages like Java (described in more detail from page 22) contain methods that change or report the state of an object or class of objects.  These methods act as the message passing mechanism between the objects in the program space.  This design lends itself to encapsulation of data and restricts data access between objects.  According to the object-oriented paradigm, no data should be shared between objects other than through the message passing methods.  In Java’s concurrency model, however, there exists no explicit message passing mechanism for thread communication.  Threads communicate via shared variables, with the synchronized keyword providing protection of the data.
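The shared-variable style described above can be sketched as follows in plain Java (the class SharedSlot is an illustrative name): two threads communicate only through a shared field, with the monitor primitives synchronized, wait and notifyAll providing the protection.

```java
// Sketch of shared-variable thread communication in standard Java:
// there is no message passing construct, only a shared field guarded
// by the low-level monitor primitives.
public class SharedSlot {
    private Integer slot = null; // the shared variable

    public synchronized void put(int v) {
        slot = v;
        notifyAll(); // wake any thread blocked in take()
    }

    public synchronized int take() throws InterruptedException {
        while (slot == null) {
            wait(); // release the monitor until put() signals
        }
        int v = slot;
        slot = null;
        return v;
    }

    public static void main(String[] args) throws InterruptedException {
        SharedSlot s = new SharedSlot();
        new Thread(() -> s.put(42)).start(); // producer thread
        System.out.println(s.take());        // blocks until put() has run
    }
}
```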

In this thesis, message passing between Java threads via a novel mechanism of compound method signatures is introduced.  That is, when all methods of a compound method signature have been called, the associated body of the compound method is started and parameters can be passed between the fragments.  The mechanism is introduced at the language level rather than as a library.  Intra-process communication is achieved via multiple method signatures sharing a method body, with the actual communication carried by parameters and return types.  If multiple compound methods share fragments, dynamic message passing channels are created.  By implementing synchronization in this way, a thread message passing mechanism with very low cognitive overhead for the programmer is achieved, working in almost exactly the same way as message passing between objects.  This innovation also demonstrates how a formal method can co-exist with the object-oriented paradigm.
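As a flavour of the mechanism (the class and fragment names below are illustrative; the full syntax and semantics are defined in Chapter 3), a compound method signature joining two fragments forms a simple message channel between threads:

```
// Illustrative Join Java fragment (not standard Java; accepted by the
// prototype compiler described later).  The two fragments share one
// body: a call to get() blocks until some thread calls put(), at which
// point the body runs and put()'s argument is returned to the caller
// of get(), passing a message between the two threads.
class SimpleChannel {
    int get() & put(int value) {
        return value;
    }
}
```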

1.3      Structure of Thesis

The rest of the thesis is organized as follows.  In Chapter 2, the literature is reviewed.  In Chapter 3, the existing semantics of Java are reviewed, problems in those semantics are covered, and the syntax and semantics of Join Java are introduced.  In Chapter 4, the implementation of the syntax and semantics of Join Java is described.  The focus of this chapter is the technical challenges that were overcome in the implementation; in particular, Chapter 4 describes the pattern matcher that is at the heart of the runtime component of Join Java.  Chapter 5 examines how design patterns for concurrency and synchronization can be expressed in Join Java.  Chapter 6 applies Join Java to classical concurrency applications, showing how Join Java can express most of these problems in a simple, compact form.  Chapter 7 benchmarks the performance of Join Java and shows that further optimizations are possible to overcome the remaining performance bottlenecks; the impact of boxing and unboxing in Java on the performance of the language extension is also examined.  Chapter 8 provides conclusions and suggests further research.

1.4      Research Contribution

The Join Java language introduces several features that do not have clear analogues in the original Join calculus formulation or in other Join calculus-based systems.  Imperative object-oriented versions of the Join calculus such as Join Java have much in common with the “joint action” approach taken in Disco (Kurki-Suonio and Jarvinen 1989; Jarvinen and Kurki-Suonio 1991).  Join Java, however, is fully message-based, while most other implementations tend to be predominantly state-based.  This thesis makes three main contributions to the state of the art in concurrent object-oriented languages.  These contributions are:

1.      Provision of high-level synchronization constructs into a mainstream language.  These high-level constructs allow programmers to better model parallel behaviour in their programs without low-level synchronization mechanisms.

2.      Provision of a safer intra-thread message passing mechanism via a guard-style method signature implementation.  This high-level mechanism allows the dynamic creation of channels at runtime.

3.      Improvement of thread integration in Java through a new return type that effectively adds asynchronous methods to Java.  A side benefit of this is to allow parameters to be passed to threads at the time of their creation.
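The third contribution can be sketched as follows (illustrative names; the asynchronous return type, written here as signal, is the new return type introduced in Chapter 3):

```
// Illustrative Join Java sketch of an asynchronous method.  The signal
// return type means a call to log() returns immediately while the body
// runs in its own thread; the parameter is handed to that thread at
// creation time, unlike the Thread/Runnable idiom of standard Java.
class Logger {
    signal log(String message) {
        // executes concurrently with the caller
        System.out.println(message);
    }
}
```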

This dissertation explores how an expression of concurrency borrowed from process algebra can be incorporated into a mainstream object-oriented programming language.  The extension has the aim of increasing the level of abstraction in which concurrent programs can be written.




Men who love wisdom should acquaint themselves with a great many particulars.



2.1      Introduction

In this chapter, the literature relevant to the expression of concurrency is reviewed.  It is intended to mention all the key researchers, define the key concepts, and identify open questions in the literature that the work of this thesis aims to address.

The survey is organized into five sections.  In section one, concurrency as a general concept is covered.  How the expression of concurrency has progressively become more abstract is reviewed.  Some of the more popular abstractions associated with concurrency are presented.  Process calculi are examined in section two for inspiration in deriving newer higher-level concurrency constructs for mainstream programming languages.  This section also gives a detailed introduction to the Join calculus.  In section three, an overview of the object-oriented paradigm with its motivations and history is presented.  In section four, the options for implementing concurrency in object-oriented languages are examined.  A number of categorizing attributes of concurrent object-oriented languages are also identified.  Finally, a number of examples of real object-oriented languages that claim to support concurrency are then categorized using the identified attributes. 

2.2     Abstractions of Concurrency

In this section, the fundamentals of the representation of the concepts of concurrency are examined.  In the first part of this section, abstraction is defined.  A definition from (Kafura 1998) is adopted and applied to the domain of concurrency.  In the second part, the basic principles that form the foundation for any concurrent system are examined.  The third part builds on these foundations by looking at higher-level concurrency abstractions such as monitors, semaphores and shared variables.  It presents a common hierarchy that has emerged from the literature that illustrates the levels of abstraction.  It also notes how results from design patterns literature present an even higher level of concurrency abstraction.

2.2.1     Abstraction

This thesis has adopted Kafura’s definition of abstraction:

Abstraction is a design technique that focuses on the essential aspects of an entity and ignores or conceals less important or non-essential aspects. (Kafura 1998)

The key concept in abstraction is information hiding.  Information hiding implies a layer at which some specific information is not visible and a layer below which that information is visible.  This implies a strong relationship between abstraction and layering. 

Kafura also sets out four general properties that make a good abstraction.  Kafura firstly says that abstraction is concerned with attributes and behaviours.  The attributes are the characteristics of the abstracted entity.  Behaviours are the operations that the abstracted entity normally performs.  The four properties of an abstraction are:

1.      The first property of a good abstraction is that it is well named.  That is, if the abstraction has a good name, users of the abstraction will have an implicit understanding of it.  An example of this is a variable, where the name, for example counter, implies its function as an abstraction.

2.      The second property is that an abstraction be coherent.  An abstraction is coherent when the attributes and behaviours that comprise the abstraction are related and expected given the domain in which they operate.  For example, the attributes of a counter are to store an integer and the behaviours of a counter are to increment and decrement the integer.

3.      The third property of an abstraction is that it be minimal.  The abstraction should provide the fewest behaviours and attributes needed.  For example, the behaviour increment by two is not minimal for a counter.  A counter which has a colour attribute is an example of an abstraction that has more attributes than it needs.

4.      The fourth property is that it is complete.  This means that the abstraction should have enough behaviours and attributes to model the entity being abstracted.  The counter behaviours presented previously are not complete, because one also needs to be able to reinitialize the counter to some reference value, typically zero.
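The counter used throughout the four properties above can be sketched as a small Java class.  This is an illustrative sketch only (the class and method names are invented here, not drawn from Kafura):

```java
// A minimal Counter abstraction illustrating Kafura's four properties:
// it is well named, its behaviours (increment, decrement) are coherent
// with the name, it is minimal (no "increment by two", no colour), and
// it is complete (it can be reset to a reference value, here zero).
class Counter {
    private int value = 0;   // the single essential attribute

    void increment() { value = value + 1; }
    void decrement() { value = value - 1; }
    void reset()     { value = 0; }        // required for completeness
    int  value()     { return value; }
}
```

Removing `reset` would leave the abstraction incomplete; adding an unrelated attribute would make it non-minimal.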

Kafura’s general properties can also be used to evaluate the various abstractions of concurrency.  To supply a good abstraction of concurrency, the concurrency mechanism needs to be well named.  For example, mutex could be considered a poorly named abstraction, whereas the full expansion, mutual exclusion, would be more appropriate and considered a better abstraction by the first property.  Co-routines are well named, as the name implies a similarity to subroutines but parallel in nature.  Coherence implies that the attributes and behaviours of, say, semaphores (Dijkstra 1968), which have the two behaviours up and down, are related and expected based upon the name.  The third property, minimality, can be seen in monitors, where the abstraction only describes an entity that has the behaviours wait and notify.  The final property, completeness, implies that monitors are complete enough to represent synchronization.

2.2.2    The Hierarchy of Abstractions of Concurrency 

It has been noted that concurrency relies on three primitive abstractions (Schneider and Andrews 1986): atomicity, basic communication, and synchronization.  These primitive abstractions are the lowest-level abstractions of concurrency.  For a concurrent system to be usable, it must implement these primitive abstractions in some form.  Consequently, these abstractions are considered the foundation abstractions for concurrency.

All concurrent systems are composed of multiple actions that are active at the same time, communicating and synchronizing with each other.  The most fundamental abstraction of concurrency that is of interest to computer scientists is the atomic action (Lomet 1977).  Every concurrent system needs some form of atomicity: the point at which an action is indivisible.  Historically, semaphores (Dijkstra 1968) were proposed before atomic actions; however, it is well established that semaphores are built on the idea of atomic actions.  These abstractions of concurrency fit neatly into the Kafura properties presented earlier: they are well named, coherent, minimal and complete.  The abstraction of atomicity is well named, as it implies indivisible action.  The abstraction is coherent, as its behaviour is what would be expected.  The abstraction is certainly minimal, and it is complete.

For concurrency to be useful there needs to be some form of communication between the various parallel actions in the system.  If communication were not supported in a concurrent system, the only result from concurrent processes would be side effects such as printing information to the screen.  The most basic form of communication is a variable in a shared address space (Schneider and Andrews 1983).  Communication using such a shared variable can only occur correctly if the read and write operations on the variable are shown to be atomic.  The abstraction is well named.  It is coherent, with its behaviours being related to reading and writing a shared variable.  It is minimal, as its behaviours and attributes could not be reduced any further.  The abstraction is complete, as all the behaviours necessary for the abstraction are present.

Synchronization is about the ordering of atomic events.  Synchronization can thus also be defined as a mechanism for controlling access to a shared resource (Bloom 1979).  For basic communication to be reliable, the abstract concept of atomicity is also necessary.  Synchronization is how mutual exclusion of access to the shared variable is guaranteed in the previous example.  Mutual exclusion means that only one parallel process can access a shared variable at a time, thus eliminating the problem of processes reading data structures that are currently being altered.  Synchronization requires atomicity to work, and assists communication to work reliably.  Again, this abstraction obeys Kafura’s basic properties: it is well named, and coherent in that the behaviours of synchronization guarantee that, given a specific order of calls, the result will always be correct.  It is also minimal, as the abstraction restricts itself to representing the ordering of operations (the behaviour).  The abstraction is complete, as it deals completely with the ordering of atomic actions.
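The three foundation abstractions can be seen working together in a small Java sketch.  This is an illustrative example (the class names SharedCell and FoundationDemo are invented here): the shared variable provides communication, the synchronized methods provide atomicity and mutual exclusion, and Thread.join provides a simple ordering guarantee.

```java
// Communication: a variable in a shared address space.
class SharedCell {
    private int value = 0;
    // synchronized makes each read-modify-write atomic with respect
    // to other synchronized calls on the same object (mutual exclusion)
    synchronized void add(int n) { value += n; }
    synchronized int get() { return value; }
}

class FoundationDemo {
    static int run() {
        SharedCell cell = new SharedCell();
        Runnable work = () -> {
            for (int i = 0; i < 1000; i++) cell.add(1);
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        try {
            // Synchronization as ordering: wait until both writers finish
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return cell.get();   // always 2000, because add() is atomic
    }
}
```

Without the synchronized modifier the increments could interleave and lose updates, which is precisely the failure the atomicity abstraction rules out.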

Figure 2.  Foundation of Concurrency Abstraction

This layer of abstraction is illustrated in Figure 2, where it is represented as the foundation for a hierarchy of abstraction.  This hierarchy will be built on, showing how each level of abstraction requires the lower levels of abstraction to function.  Each higher-level abstraction gives users mechanisms for achieving concurrency and synchronization that are easier to understand and use, whilst hiding the lower-level details.

A number of concurrency abstractions have become popular in programming languages.  These abstractions are somewhat higher in abstraction than atomicity, synchronization, and basic communication.  However, if examined closely, it can be seen that they make use of the lower-level abstractions to function.  For example, semaphores make use of atomic calls to undertake check-and-set operations.  Without atomicity of calls, semaphores would not work correctly.  Similarly, monitors express a higher-level abstraction of concurrency.  Monitors make use of synchronization, atomicity, and communication to function.  Consequently, monitors and semaphores are considered a higher level of abstraction than atomicity, synchronization, and basic communication.  One can view monitors as an abstraction that hides the locks and queues associated with entry to and exit from protected sections of code.  The locks act as synchronization mechanisms and the queues control the ordering of threads accessing protected segments of code.
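The layering described above can be illustrated by building a counting semaphore out of Java's monitor primitives (synchronized, wait, notify).  This is a sketch for illustration only, not a production semaphore (java.util.concurrent.Semaphore would be used in practice):

```java
// A counting semaphore built on monitor primitives, showing that the
// higher-level abstraction rests on the lower ones.
class CountingSemaphore {
    private int permits;

    CountingSemaphore(int permits) { this.permits = permits; }

    // down (P): block until a permit is available, then take it
    synchronized void down() throws InterruptedException {
        while (permits == 0) wait();   // monitor queue hides the waiting
        permits--;
    }

    // up (V): release a permit and wake one waiting thread
    synchronized void up() {
        permits++;
        notify();
    }

    synchronized int available() { return permits; }
}
```

The monitor's lock supplies mutual exclusion and its wait queue supplies the ordering, exactly the lower-level machinery the semaphore abstraction hides from its users.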

Figure 3.  Extended Concurrency Abstraction Hierarchy

It is clear that semaphores and monitors can be expressed in terms of each other’s implementations; consequently, they appear side-by-side in the hierarchy.  These mechanisms can be considered primitive abstractions in the abstraction hierarchy.  Figure 3 shows yet another layer being added, called intermediate-level abstractions.  This level adds further abstractions such as buffered channels.  These layers in turn reflect an increase in expressiveness via the hiding of lower-level abstractions.
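A buffered channel of the kind placed at the intermediate level can be sketched with Java's BlockingQueue.  This is an illustrative example (the class name ChannelDemo is invented here); the point is that neither the producer nor the consumer touches a lock directly, because the queue hides the monitor-level machinery beneath it:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// An intermediate-level abstraction: a bounded buffered channel.
class ChannelDemo {
    static int sum() {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(4);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) channel.put(i); // blocks when full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int total = 0;
        try {
            for (int i = 0; i < 5; i++) total += channel.take(); // blocks when empty
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return total;   // 1 + 2 + 3 + 4 + 5
    }
}
```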

From the applications development point of view, concurrency problems can also be partitioned into design patterns.  That is, most concurrency problems can be implemented using one or more generic design patterns proposed by a number of authors (Lea 1998; Schmidt 2000).  Programmers make use of these design patterns to solve their specific application requirements.  However, these patterns tend to be expressed in a high-level way, which leads to difficulties if the language being used only provides low-level abstractions.  This is the case with most mainstream concurrent languages.  Consequently, one of the most difficult aspects of programming with patterns is the translation from the general implementation supplied by the pattern designer to the specifics of the language and the problem.  This difficulty could be reduced if there were a higher-level abstraction that better matched the generalized way design patterns are expressed.  This missing layer between the intermediate-level abstractions and design patterns can be seen in Figure 4.  In this thesis, it is suggested that an extension to the Java language based on the Join calculus is one possible higher-level layer that more closely matches a number of design patterns.



Figure 4.  Complete Concurrency Abstraction Hierarchy


2.3     Process Calculi and the Join Calculus

In this section, some popular process algebras are first introduced, with some background on their design motivations.  The Join calculus (Fournet, Gonthier et al. 1996) is then introduced in more detail, along with its major terminology.  Finally, Join is contrasted with popular process calculi, showing that it has features that are more suitable for implementation in the object-oriented paradigm.

2.3.1     Introduction

The term “process calculi” denotes a group of formal description languages that are designed to describe concurrent communicating systems (Fidge 1994).  These description languages (also known as process algebras) have been defined as

An algebraic approach to the study of concurrent processes.  Its tools are algebraical languages for the specification of processes and the formulation of statements about them, together with calculi for the verification of these statements. (Glabbeek 1986)

In this thesis, three process calculi are mentioned: the calculus of communicating systems, CCS (Milner 1980); communicating sequential processes, CSP (Brookes, Hoare et al. 1984; Hoare 1985); and the algebra of communicating processes, ACP (Bergstra and Klop 1985).  The CSP process algebra was initially inspired by Dijkstra’s guarded command language (Dijkstra and Scholten 1990).  CCS is designed in a similar handshake communication style to that of CSP.  Whilst CSP is directed more at a theoretical basis and tends to be a more specification-based language, CCS is more operationally oriented (Baeten and Verhoef 1995).  ACP, a more recent work, differs from CSP and CCS in that it is more algebraic in its approach to modelling concurrency.  These process algebras embody the intermediate to higher-level abstractions of the previous section’s definition.  They are frameworks for modelling the behaviour of interacting processes rather than programming them.  For a full comparative introduction to the differences between CSP and CCS, consult Fidge (1994).  Virtually all process algebras have the aim of modelling the interaction of concurrent systems, with syntax and semantics varying between them.  This research is more interested in the models of concurrency below the level of the process calculi; that is, the syntax and semantics of concurrency and synchronization that are being implemented, rather than the modelling of behaviour.  Consequently, an in-depth discussion of process algebras is beyond the scope of this work.  However, the point that should be made is that CCS, CSP and ACP have implied synchronization rather than explicit synchronization.  This contrasts with the Join calculus, presented in the next section.

2.3.2    Join Calculus

In this section, a detailed outline of the Join calculus is given.  This section starts by giving a brief overview of the calculus and some of the terms that will be used for the remainder of the discussion. 

The Join calculus can be thought of as both a name-passing polyadic calculus[2] and a core language for concurrent and distributed programming (Maranget, Fessant et al. 1998).  The calculus’s operational semantics can be specified as a reflexive chemical abstract machine (CHAM) (Berry and Boudol 1992; Maranget, Fessant et al. 1998).  Under these semantics, the state of the system is represented as a "chemical soup" consisting of active definitions and running processes (Maranget, Fessant et al. 1998).  Potential reactions are defined by a set of reduction rules; when a reaction occurs, the reactants are removed from the soup and replaced with the resultant definitions.  The syntax and semantics of the Join calculus are defined in the next section.

2.3.3    Join Syntax

The Join calculus (Fournet and Gonthier 1996) contains both processes and expressions called Join patterns.  Processes are asynchronous constructs that produce no result.  Expressions are synchronous and produce a result.  Processes communicate by sending messages through communication channels.  These communication channels can be described using Join patterns.  A Join pattern definition has one or more left hand terms and a right hand side.  The definition will not reduce until there are calls to all left hand terms.  The right hand side (the body) may itself contain another expression, which may include a parallel composition of two or more expressions.

Expression         E => def D; E | x(y1…yN) | E & E

Definition         D => L = E | D, D

Left Hand Side     L => X(Y1…YN) | L & L


Figure 5.  Funnel Variant of Join Calculus Syntax

The syntax of the Funnel (Odersky 2000) language is used in preference to the syntax originally defined by (Fournet and Gonthier 1996).  This syntax is closer to that of the target domain object-oriented languages, in our case Java, as its syntactic components are consistent with other parts of the Java language specification.  The Funnel variant (Odersky 2000) syntax of the Join calculus is presented in Figure 5.   

def  X1(Y1) & X2(Y1) =

     xa(y1) & xb(y1) ;


Figure 6.  Example Join Calculus Expression

For example, given the Join calculus expression presented in Figure 6, it can be seen that there are two expressions on the left hand side, X1(Y1) and X2(Y1), and a right hand side with a parallel composition of xa(y1) and xb(y1).  This means that when the expressions X1(Y1) and X2(Y1) are both available, the body of the definition is evaluated; in this case, the parallel composition of the xa(y1) and xb(y1) expressions.
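The firing discipline just described, where a body runs only once every left hand term has been called, can be sketched in Java.  This is a hypothetical illustration only (the names TwoNameJoin, callA and callB are invented here; this is not the thesis’s Join Java implementation): each call is queued per name, and a reduction consumes one pending call of each name.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.BiConsumer;

// A sketch of a two-name Join pattern: def A(x) & B(y) = body.
class TwoNameJoin {
    private final Deque<Integer> aQueue = new ArrayDeque<>();
    private final Deque<Integer> bQueue = new ArrayDeque<>();
    private final BiConsumer<Integer, Integer> body;

    TwoNameJoin(BiConsumer<Integer, Integer> body) { this.body = body; }

    synchronized void callA(int x) { aQueue.add(x); tryReduce(); }
    synchronized void callB(int y) { bQueue.add(y); tryReduce(); }

    // The "reduction": fire only when both names have a pending call.
    private void tryReduce() {
        if (!aQueue.isEmpty() && !bQueue.isEmpty())
            body.accept(aQueue.remove(), bQueue.remove());
    }
}
```

A call to callA alone leaves its argument queued; only the later call to callB completes the pattern and evaluates the body.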

2.3.4    Join Semantics

In the previous section, it was shown that the body of a definition is only evaluated when all expressions on the left hand side are available.  This behaviour can be modelled using the CHAM (chemical abstract machine) semantics proposed by (Berry and Boudol 1992).  The chemical abstract machine semantics are modelled on the idea of a soup of waiting chemicals that can react to generate new chemicals while removing old chemicals from the soup.  As an example of the semantics of the chemical abstract machine, a brief example is presented.  Given a machine with one possible reduction, def A&B = C&D, it is assumed that the soup already contains the terms A, R, F, S and C.  Consequently, the state of the CHAM is as shown in Figure 7.  If the expression B is then added to the soup, the soup will contain all the terms on the left hand side of the reduction rule.  The terms will then react, generating all the terms on the right hand side of the reduction rule and removing the terms on its left hand side from the soup.  This is shown in Figure 8, where the addition of the term B creates the new terms D and C in the soup (via the reduction rule), and the terms A and B are removed.

Figure 7.  Addition of B to CHAM

Join’s dynamic nature emerges when there is the possibility of multiple reductions occurring, depending on the combination of Join expressions appearing.  If the definition above were augmented with an additional definition def A&Z = X, then on a call to A the choice of which definition to evaluate would depend on whether Z or B had previously been called.  If Z is waiting (called previously) in the CHAM, then the second definition would be evaluated: A and Z would be removed from the CHAM and replaced by X.  Alternatively, if B were waiting in the CHAM, then the first definition would be evaluated, removing A and B from the pool and replacing them with C and D.  One remaining situation is what happens if Z and B are both waiting in the CHAM when A is called.  In this situation, a non-deterministic reduction is possible, in that either the first definition or the second definition could equally be reduced, but not both.  The calculus does not specify what would occur in this case.
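The worked CHAM example above can be sketched as a toy simulation.  This is an illustrative sketch only (the class name Cham and its methods are invented here): the soup is a multiset of names, and the single rule A & B -> C & D fires whenever both reactants are present.

```java
import java.util.HashMap;
import java.util.Map;

// A toy CHAM with one reduction rule: A & B -> C & D.
class Cham {
    final Map<String, Integer> soup = new HashMap<>();   // multiset of terms

    void add(String name) {
        soup.merge(name, 1, Integer::sum);
        reduce();
    }

    private void take(String name) {
        soup.merge(name, -1, Integer::sum);
        if (soup.get(name) == 0) soup.remove(name);
    }

    private void reduce() {
        // fire the rule while both reactants are in the soup
        while (soup.getOrDefault("A", 0) > 0 && soup.getOrDefault("B", 0) > 0) {
            take("A");
            take("B");
            soup.merge("C", 1, Integer::sum);
            soup.merge("D", 1, Integer::sum);
        }
    }
}
```

Starting from the soup {A, R, F, S, C} of the example, nothing reduces; adding B completes the reactants, so A and B vanish and C and D appear.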

2.3.5    Synchronous Names

Combining the Join semantics from the previous section with blocking semantics, a mechanism of flow control can be achieved.  These blocking fragments of a definition are called synchronous names.  With this addition to the semantics of the Join calculus, message passing can be supported in a language-level implementation.  This idea is sympathetic to the object-oriented paradigm.
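The blocking behaviour of a synchronous name can be sketched with a rendezvous in Java.  This is a hypothetical illustration (the name SyncNameDemo is invented here): the caller blocks until a matching asynchronous call supplies a value, which is the flow control described above.

```java
import java.util.concurrent.SynchronousQueue;

// A sketch of a synchronous name: take() blocks until a matching
// put() arrives, so the caller's flow is controlled by the pattern.
class SyncNameDemo {
    static int rendezvous() {
        SynchronousQueue<Integer> reply = new SynchronousQueue<>();
        Thread partner = new Thread(() -> {
            try {
                reply.put(42);   // the asynchronous half of the pattern
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        partner.start();
        try {
            return reply.take(); // the synchronous name: blocks for the reply
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```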

Figure 8.  Operational Semantics of the Join Calculus

2.3.6    Join Calculus vs. Other Calculi

The Join calculus offers an ideal alternative to the currently popular concurrency semantics in programming languages via its locality and reflection.  Locality allows reductions to be limited to only one site in the calculus rather than all sites.  The CHAM semantics in the previous section omit locality, as the reductions are entirely completed within a global pool.  If this were to be implemented in the object-oriented paradigm, a CHAM would need to be created in each object in order to give locality.  Reflection, on the other hand, allows reductions to be defined dynamically when the evaluation occurs.  In this thesis, locality is implemented via the localization of CHAMs to the object level.  Reflection is not implemented, other than that implicitly offered by the reflection libraries of the base language.

Fessant also points out an advantage of the Join calculus over other calculi.

Processes communicate by sending messages on channels or port names. Messages carried by channels are made of zero or more values and channels are values themselves.  By contrast, with other process calculi (such as the pi-calculus and its derived programming language Pict), channels, and the processes that listen on them are defined by a single language construct.  This feature allows considering (and implementing) channels as functions, when channels are used as functions.  (Fessant and Conchon 1998)

Odersky (Odersky, Zenger et al. 1999) argues that the Join calculus better expresses the idea that most modern applications are reactive in their interfaces and concurrent in their implementation.  The pi-calculus (Milner, Parrow et al. 1992) and the fusion calculus (Parrow and Victor 1998), whilst expressing interaction between entities well, do not represent sequential expressions sufficiently.  It has been pointed out that complicated type systems are needed to address these problems.

2.3.7    Summary

In this section, the Join calculus was identified as a promising candidate for an expression of concurrency suitable for incorporation into a mainstream object-oriented language.  It was shown that Join’s high-level abstraction is based on the conjunction of atomic actions, and that it expresses synchronization explicitly in a way that is not likely to breach the modularity required by object-oriented paradigms.  This strongly motivates the choice of Join as the basis for a high-level concurrency extension.

2.4     The Object Oriented Paradigm

In this section, what defines the object-oriented paradigm is examined.  The review particularly emphasizes aspects that interact with concurrency.  Some of the basic concepts of the object-oriented paradigm are described, and it is shown that the object-oriented paradigm has information hiding as one of its central concepts.  The requirements for a programming language to be object-oriented are examined, along with the advantages this brings.  It is shown how object-oriented languages provide a framework in which the only information an object shares with another object is the information passed via a message passing mechanism such as method calls.  The idea of message passing in the language, and how encapsulation requires access to member variables in classes to be limited, is scrutinized.  Finally, the family tree of object-oriented languages is examined.  For a comprehensive survey of object-oriented languages, consult (Sutherland 1999).

Object-oriented languages were first proposed by (Dahl and Nygaard 1966) as a way of representing real-world concepts.  The primary object-oriented language structure is the class: an augmented record that also contains the operations that work on the data contained within that record.  Instances of classes, known as objects, can be created.  Each object’s code operates on the data within that instance.  In this way, the code is associated with specific data, rather than, as is the case with non-object-oriented languages, the code being generic and the data being passed to it.

Before object-oriented languages, most software analysis and design focused on process.  Process designs were either data centric[3] or control centric[4].  This approach tended to become problematic and was limited to small to medium sized programming projects.  These limitations and non-scalability issues became known as the software crisis (Dijkstra 1972; Booch 1987).  At this point software engineering was more an art, with the differences between good projects and bad projects being mainly due to the skill of the team.  In an attempt to improve this situation, a methodology of design that would avoid the scalability issues of the process-model techniques was pursued.  Developers needed to bring reuse and a closer representation of the real world to software engineering.  This closely matched the idea of the object-oriented paradigm proposed by the Simula language (Dahl and Nygaard 1966).  The object-oriented paradigm offered abstraction via “class hierarchies” that allowed control of cohesion between components in the system.  This was a necessary requirement for the increased complexity of the projects being developed.

An important concept of software engineering is the idea of high cohesion and low coupling (Stevens, Myers et al. 1982).  A highly cohesive module will rely on outside modules as little as possible.  A loosely coupled module will communicate with other modules in the system as little as possible.  The two principles of high cohesion and low coupling work together to decrease the complexity of software, hence reducing potentially hard-to-localise errors.  In the object-oriented paradigm, encapsulation is the method by which these two principles are achieved.  Encapsulation hides attributes from other objects (modules) via modifiers.  Modifiers allow the programmer to set the visibility of methods and attributes.  In object-oriented languages, it is customary to restrict the visibility of internal methods and attributes so that they cannot be seen outside the object in question.  In this way, issues of high coupling, such as having various objects access any attribute or method they wish, are avoided.  If arbitrary access to an object’s state were allowed, the application would be highly coupled and hence unnecessarily complex.  This reduction in accessibility leads to the requirement for a more formal mechanism of object communication, known as message passing.  This starkly contrasts with the majority of concurrency abstraction implementations, which are highly coupled and have low cohesion.  Concurrency abstractions tend to be non-localized, in that concurrency calls can be made from anywhere in the code to anywhere else in the code.  An example of this is semaphores, where up and down calls can be made anywhere.  Monitors address some of these deficiencies by associating locks and queues with scopes.  However, monitors have been shown to be a problem (Ramnath and Dathan 2003) in that they increase the coupling of more complicated patterns.
Within an object-oriented program, no object should be able to directly manipulate another object’s attributes, and hence its state.  The way one object seeks to change the state of another object is via mutator methods, whose explicit purpose is to allow another object to request a change of an object’s internal state.  If an object needs the contents of another object, this is done via an accessor method.  In this way, an object can protect itself against being placed in states that it should not be in.  This reduction in coupling reduces the complexity of the interaction between different modules of the system, hence increasing the scalability of software development.  Any concurrency mechanism implemented in an object-oriented language should support this implicitly.
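The accessor/mutator discipline described above can be sketched in a few lines of Java.  This is an illustrative sketch (the class name BankAccount and its invariant are invented here): the state is private, the accessor exposes it read-only, and the mutator rejects requests that would place the object in an invalid state.

```java
// Encapsulation: state changes only through a mutator that
// enforces the object's invariant (balance never negative).
class BankAccount {
    private int balance = 0;            // hidden state

    int balance() { return balance; }   // accessor

    // mutator: rejects withdrawals that would break the invariant
    boolean withdraw(int amount) {
        if (amount < 0 || amount > balance) return false;
        balance -= amount;
        return true;
    }

    void deposit(int amount) {
        if (amount > 0) balance += amount;
    }
}
```

Because no caller can assign to balance directly, the object can never be placed in a state it should not be in.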

2.4.1     Advantages of the Object Oriented Paradigms

The advantages of the object-oriented paradigm can thus be summarized as:

1.      Data Abstraction:  This concept allows complicated representations to be modelled in the language whilst still maintaining readability.  The object-oriented paradigm achieves this via inheritance of classes.  In this way, generic concepts are represented high in the inheritance structure whilst concepts that are more specific are added progressively as one descends through the inheritance tree.

2.      Encapsulation:  Bundles related data and methods into neat packages that then provide a form of modularity[5].  Encapsulation also provides support of information hiding via interfaces (definition of the methods within a class).  In this way, the user of the class does not need to know the implementation details of the class they are using.  In fact, the implementation can be arbitrarily changed at any time with the user of the class not even knowing.

3.      Polymorphism: Quite often it is a good idea to group similar objects of different classes so they can be treated in the same way.  For example, in a drawing program there may be a class for each type of drawing widget, such as circles or spheres.  It would be useful if one could just refer to all these widgets as shapes and write methods that use shapes.  In this way, one can add new types of widget without having to write new methods for each.  This helps support code reuse, leading to more scalable engineering of software systems.

4.      Reusability:  The encapsulation facilities of object-oriented languages provide a degree of reusability.  The encapsulation of data structures, methods and attributes behind well-defined interfaces allows the implementation details to be hidden from the user.  This, together with high cohesion and low coupling, leads to code that can be reused in different projects.  Furthermore, code reuse is attained by making use of polymorphism: rather than writing a different method for each type of class, a carefully designed inheritance structure provides a generic class with generic methods.  The methods are written against the generic class signatures, and any subclassed object can then be passed as the generic type.
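The drawing-program example from the polymorphism point above can be sketched in Java.  This is an illustrative sketch (the names Shape, Circle, Square and ShapeDemo are invented here): code written against the generic Shape type works unchanged when a new widget class is added.

```java
// Polymorphism: all widgets are treated uniformly as Shape.
interface Shape {
    double area();
}

class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class ShapeDemo {
    // Written once against the generic type; any future subclass
    // of Shape can be passed without changing this method.
    static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) total += s.area();   // dynamic dispatch
        return total;
    }
}
```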

2.4.2    History of Object Oriented Languages