Archive for the 'Concurrency' category

Performance of ReentrantLock and Synchronized

19 Apr 2011 by

ReentrantLock and Synchronized

The synchronized keyword is the classic way to lock a shared variable so that only one thread can modify it at a time. However, it suffers performance problems when multiple threads contend for access to the same shared variables. JDK 5.0 introduced a redesigned memory model that makes multithreaded behavior truly consistent across platforms, and it added the java.util.concurrent package, which provides a variety of optimized concurrency tools.

In this article, I present the performance of ReentrantLock and Synchronized. The performance comparison was actually already revealed in Java theory and practice: More flexible, scalable locking in JDK 5.0. However, that article did not show the source code for how to do it.
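
To make the comparison concrete, here is a minimal micro-benchmark sketch of the two locking styles. The class and method names are my own, not from the article, and a single-threaded loop like this only hints at the contended behavior the article measures.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockBenchmark {
    private static final ReentrantLock lock = new ReentrantLock();
    private static long counter = 0;

    // Increment guarded by the synchronized keyword
    static synchronized void incSynchronized() {
        counter++;
    }

    // Increment guarded by ReentrantLock (always unlock in finally)
    static void incReentrant() {
        lock.lock();
        try {
            counter++;
        } finally {
            lock.unlock();
        }
    }

    // Time 'iterations' runs of the given task, in nanoseconds
    static long time(Runnable task, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            task.run();
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long tSync = time(LockBenchmark::incSynchronized, n);
        long tLock = time(LockBenchmark::incReentrant, n);
        System.out.printf("synchronized: %d ns, ReentrantLock: %d ns%n", tSync, tLock);
    }
}
```

A fair benchmark would also vary the number of contending threads, which is where the two mechanisms really diverge.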

Prerequisite

Continue Reading »

5 responses so far

Token Ring: Mutual Exclusion in Distributed Systems

08 Apr 2011 by

Token Ring Mutex

Token ring is a mutual exclusion algorithm for distributed systems that does not involve a server or coordinator. A token is passed from one node to the next in a logical ring. Only the node that currently holds the token has the right to enter the critical section. If the node holding the token does not need to enter the critical section or access the shared resource, it simply passes the token on to the next node.
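
The ring can be simulated in a single process as a sketch of the idea (class and method names are my own, not from the article):

```java
// Simulated token ring: nodes pass a token in logical order, and only
// the current token holder may enter the critical section.
public class TokenRing {
    private final int nodes;      // number of nodes in the ring
    private int tokenHolder = 0;  // id of the node currently holding the token

    public TokenRing(int nodes) {
        this.nodes = nodes;
    }

    // A node may enter the critical section only while it holds the token
    public boolean canEnter(int nodeId) {
        return nodeId == tokenHolder;
    }

    // Pass the token to the next node in the logical ring (wraps around)
    public void passToken() {
        tokenHolder = (tokenHolder + 1) % nodes;
    }
}
```

In a real distributed deployment the token would be a network message, and the algorithm must also handle a lost token and a crashed node, which this sketch ignores.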

Continue Reading »

No responses yet

Distributed Algorithm: Mutual Exclusion in Distributed Systems

06 Apr 2011 by

Distributed Algorithm in Mutex

The distributed algorithm for mutual exclusion takes advantage of a multicast protocol to broadcast a request message for entering the critical section. Once a node has sent the request message to every node on the network, each node must send back a message stating that it allows that node to enter the critical section. The requesting node must receive such a grant from every node on the network before it can actually access the shared resource. If even one node fails to send a grant, the requester cannot access the shared resource.

Because there is no centralized coordinator, it is likely that two or more nodes send request messages simultaneously. Logically, however, the timestamps of the request messages can always be totally ordered, so they are never treated as identical. We can use the Lamport clock algorithm for this scenario.
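
A sketch of the tie-breaking rule, assuming Lamport timestamps extended with the node id (the class name is my own): the request with the smaller (timestamp, nodeId) pair is ordered first, so two "simultaneous" requests never truly tie.

```java
// A request to enter the critical section, ordered by Lamport timestamp
// and, on equal timestamps, by the unique node id.
public class LamportRequest implements Comparable<LamportRequest> {
    final long timestamp; // Lamport clock value when the request was sent
    final int nodeId;     // requester's id, used to break timestamp ties

    LamportRequest(long timestamp, int nodeId) {
        this.timestamp = timestamp;
        this.nodeId = nodeId;
    }

    @Override
    public int compareTo(LamportRequest other) {
        if (timestamp != other.timestamp) {
            return Long.compare(timestamp, other.timestamp);
        }
        // Node ids are unique, so this comparison resolves every tie
        return Integer.compare(nodeId, other.nodeId);
    }
}
```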

Continue Reading »

No responses yet

Centralized Algorithm: Mutual Exclusion in Distributed Systems

04 Apr 2011 by

Centralized Algorithm in Mutex

Semaphores and monitors, which protect critical sections within a single system, are no longer appropriate in distributed systems. In distributed systems, we must instead use special algorithms to achieve mutual exclusion. There are 3 basic algorithms to consider, and they serve as a baseline for developing other mutual exclusion algorithms in distributed systems. The 3 are:

  • Centralized algorithm
  • Distributed algorithm
  • Token ring algorithm
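
As a sketch of the first of these, the centralized algorithm, a single coordinator can grant access and queue the rest (class and method names are my own, not from the article):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Centralized algorithm sketch: one coordinator grants entry to the
// critical section; other requesters queue until the holder releases.
public class Coordinator {
    private final Queue<Integer> waiting = new ArrayDeque<>();
    private Integer holder = null; // node currently granted access, if any

    // A node asks to enter; returns true if granted immediately
    public synchronized boolean request(int nodeId) {
        if (holder == null) {
            holder = nodeId;
            return true;
        }
        waiting.add(nodeId); // must wait for the current holder
        return false;
    }

    // The holder releases; the next waiting node (if any) is granted
    public synchronized Integer release() {
        holder = waiting.poll(); // null when nobody is waiting
        return holder;
    }
}
```

The obvious weakness, which the full article presumably discusses, is that the coordinator is a single point of failure and a potential bottleneck.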

Prerequisite

Please refer to these articles for a better understanding of mutual exclusion:

Continue Reading »

No responses yet

Monitor: Mutual Exclusion in Single System

01 Apr 2011 by

Monitor

Monitors and semaphores are much the same in spirit, but a monitor adds a layer of abstraction on top of the semaphore. A semaphore works at the level of individual operations, while a monitor works at the class level, combining many operations and variables.

Monitor

By definition, a monitor is a collection of procedures and data that are all grouped together in a class. Just like a semaphore, a monitor ensures that only one thread at a time can access a shared resource. A monitor may combine mutual exclusion with condition variables around the critical section to provide a single, orderly flow of modifications to the resource.
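
As an illustration (my own example, not the article's), a Java class whose synchronized methods and wait()/notifyAll() calls together form a simple monitor:

```java
// A minimal monitor in Java: synchronized methods give exclusive access
// to the data, and wait()/notifyAll() act as the condition variable.
public class CounterMonitor {
    private int value = 0;

    // Exclusive update of the shared variable
    public synchronized void increment() {
        value++;
        notifyAll(); // wake any threads waiting for a threshold
    }

    // Block until the counter reaches at least the given threshold
    public synchronized void awaitAtLeast(int threshold) throws InterruptedException {
        while (value < threshold) { // re-check after every wakeup
            wait();
        }
    }

    public synchronized int get() {
        return value;
    }
}
```

The while loop around wait() is essential: a woken thread must re-test its condition before proceeding.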

Continue Reading »

One response so far

Semaphore: Mutual Exclusion in Single System

29 Mar 2011 by

Semaphore

Mutual exclusion is required in both single systems and distributed systems to protect shared resources from being modified concurrently. In this article, we will discuss mutual exclusion in a single system using a semaphore, and finish with a Java example. Before reading this article, it is highly recommended that you read the Principle of Mutual Exclusion.

Semaphore

A semaphore was originally a flag that railroad engineers would use when trains entered a shared track. In computing, a semaphore refers to a mechanism that ensures only one thread or process accesses a shared resource at a time. To get the effect of a binary semaphore in Java, a synchronized(this) block can be used to surround the critical section. The code from the previous article will then look like this.

Global variables

int x = 6;
int y = 0;

FooThread

void foo(){
  synchronized(this){
    x++;
    y = x;
  }
}

BarThread

void bar(){
  synchronized(this){
    y++;
    x += 3;
  }
}
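
For comparison, JDK 5.0's java.util.concurrent.Semaphore can guard the same critical sections directly. This sketch (the class name is my own) uses a single permit, which is exactly mutual exclusion:

```java
import java.util.concurrent.Semaphore;

// The same foo/bar critical sections, guarded by a one-permit Semaphore
// from java.util.concurrent instead of synchronized blocks.
public class SemaphoreExample {
    static int x = 6;
    static int y = 0;
    static final Semaphore mutex = new Semaphore(1); // one permit = mutual exclusion

    static void foo() throws InterruptedException {
        mutex.acquire();          // blocks until the permit is free
        try {
            x++;
            y = x;
        } finally {
            mutex.release();      // always return the permit
        }
    }

    static void bar() throws InterruptedException {
        mutex.acquire();
        try {
            y++;
            x += 3;
        } finally {
            mutex.release();
        }
    }
}
```

A Semaphore constructed with more than one permit generalizes this to "at most N threads at once", which synchronized cannot express.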

Continue Reading »

2 responses so far

Principle of Mutual Exclusion

28 Mar 2011 by

Mutual exclusion is a mechanism that prevents multiple threads or processes from accessing a shared resource at the same time. The problem comes from the fact that, in both distributed systems and single systems, several threads or processes share a resource but cannot safely use it concurrently. The terms mutual exclusion, synchronization, and data coordination are commonly used interchangeably, often shortened to mutex.

Rationale

This section will guide you to the root of the problem and give you the solution.

Let's look at two blocks of code, foo() and bar(), that run in parallel.

The initial state: x=6 and y=0.

FooThread

void foo()
{
   x++;
   y = x;
}

BarThread

void bar()
{
   y++;
   x += 3;
}
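
Even the two possible serialized orders of these blocks disagree on the final value of y, which hints at the trouble that unsynchronized interleaving causes. A small sketch (names are my own) makes the two outcomes explicit:

```java
// Running foo() and bar() in the two possible serial orders from the
// initial state x=6, y=0 already yields different final values of y;
// unsynchronized interleavings of their statements can yield still others.
public class RaceDemo {
    static int x, y;

    static void foo() { x++; y = x; }
    static void bar() { y++; x += 3; }

    // Reset the state and run the two blocks in the chosen order
    static int[] run(boolean fooFirst) {
        x = 6;
        y = 0;
        if (fooFirst) { foo(); bar(); } else { bar(); foo(); }
        return new int[] { x, y };
    }

    public static void main(String[] args) {
        int[] a = run(true);  // foo then bar: x=10, y=8
        int[] b = run(false); // bar then foo: x=10, y=10
        System.out.println("y = " + a[1] + " vs y = " + b[1]);
    }
}
```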

Continue Reading »

2 responses so far

Thread Pool

18 Mar 2011 by

Thread Pool

From the previous posts you may already understand the single-thread and thread-per-request architectures in distributed systems. Here we will see another architecture, called the thread pool. A thread pool pre-defines the number of threads running in the pool. Let's say we have 2 threads in the pool; the server is then capable of handling only 2 requests at the same time, and other requests must wait in a queue until one of the 2 threads in the pool finishes its work. In real-world applications, many servers fall into this category, for example the Apache web server and IIS. The advantages and disadvantages are discussed below:
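
The JDK 5.0 ExecutorService makes this architecture a one-liner to set up. A sketch with the 2-thread pool from the example above (class and method names are my own):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Thread-pool sketch: a fixed pool of 2 worker threads handles all
// submitted requests; excess requests wait in the executor's queue.
public class FixedPoolServer {
    // Submit 'requests' tasks to a 2-thread pool; return how many completed
    static int handle(int requests) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2); // at most 2 run at once
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < requests; i++) {
            pool.submit(completed::incrementAndGet); // stand-in for real request handling
        }
        pool.shutdown();                             // no new tasks; queued ones still run
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(handle(5) + " requests handled");
    }
}
```

Five tasks are submitted but only two ever run concurrently; the other three sit in the queue until a worker frees up.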

Continue Reading »

No responses yet

Thread per Request

17 Mar 2011 by

Thread per Request/Connection

Thread per request (or per connection) was introduced to make the most of system resources and provide a responsiveness to users that a single thread cannot. In this architecture, the server creates a new thread for each request from a client. In other words, the server can create as many threads as there are requests; the only limitation is software and hardware capacity. Thread per request is one of the threading architectures used in distributed systems, and it comes with both advantages and disadvantages.
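
A sketch of the pattern without the networking details (names are my own): every incoming request gets a freshly created thread, with no upper bound imposed by the program itself.

```java
// Thread-per-request sketch: spawn one new thread per request and wait
// for them all; the only limit on thread count is what the JVM and OS allow.
public class ThreadPerRequest {
    // Handle each of 'requests' on its own thread; return how many were handled
    static int serve(int requests) throws InterruptedException {
        final int[] handled = { 0 };
        Thread[] workers = new Thread[requests];
        for (int i = 0; i < requests; i++) {
            workers[i] = new Thread(() -> {
                synchronized (handled) {
                    handled[0]++; // stand-in for real per-request work
                }
            });
            workers[i].start();   // a brand-new thread per request
        }
        for (Thread t : workers) {
            t.join();             // wait for every request to finish
        }
        return handled[0];
    }
}
```

In a real server the loop body would be a ServerSocket.accept() call, with the returned socket handed to the new thread.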

Continue Reading »

No responses yet

Single Thread Server

16 Mar 2011 by

Single Thread

The single-thread server is one of the concurrency architectures in distributed systems, dating from the early stages of distributed systems development. Its footprint is very small and it is easy to develop. A single thread can handle only one user at a time; the other users must wait in a queue until the current user's task is finished.

If we consider CPU and I/O usage, a single thread does not fully utilize either: while the CPU is busy, the I/O is usually idle, and vice versa.

Developing a single-thread server works the same way as my examples in UDP Programming with Java and TCP Programming with Java.
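
The core of such a server is a single accept-and-serve loop. A sketch (the port number and reply text are my own choices, not from the linked articles):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Single-thread TCP server sketch: one loop accepts and serves clients
// strictly one at a time; the next client waits until the current one
// has been fully handled.
public class SingleThreadServer {
    // Accept and serve exactly 'n' clients, one after another
    static void serve(ServerSocket server, int n) throws IOException {
        for (int i = 0; i < n; i++) {
            try (Socket client = server.accept()) {        // blocks until a client connects
                client.getOutputStream()
                      .write("hello\n".getBytes(StandardCharsets.UTF_8));
            }                                              // client closed; next one is served
        }
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9090)) {
            serve(server, Integer.MAX_VALUE);              // one user at a time, forever
        }
    }
}
```

While one client is inside the try block, every other connection sits in the OS backlog; that queueing is exactly the waiting described above.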

Advantage

  • Straightforward
  • Easy to develop

Disadvantage

  • Wastes time: CPU and I/O are not fully utilized
  • Unsatisfying for users, who may have to wait a long time for service
  • Rarely exists in real-world distributed systems applications


No responses yet