Introduction to Concurrency

As part of a series of assignments, today’s topic is concurrency in web development. We’ll explore this topic by defining what concurrency is and surveying the main methods for achieving it. Then, to break it down a bit further, we’ll look at how concurrency is implemented in Node.js and how Oracle and MongoDB support database concurrency.

What is Concurrency?

Concurrency, in the simplest terms, is when multiple events happen within overlapping time frames. When working with data, controls are put in place to prevent corruption caused by timing issues when several people work with the same elements in a database. Another important risk to consider and control is events arriving in an unpredictable order, which must be handled with as little manual intervention as possible. The main methods through which concurrency can be implemented are multiprocessing, multi-threading, and the event-driven approach.


In multiprocessing, each task is assigned to a separate process. This is an effective method because tasks are isolated and memory is not shared. To avoid the burden of creating a new process for every connection, servers can use a strategy called preforking: the main server process forks several child processes ahead of time to handle future requests. Each preforked child waits for new connections; once a connection is established, the child handles it exclusively, blocking until it is done before accepting another.


In multi-threading, lightweight threads with a smaller memory footprint can share cache, address space, global variables, and state, which makes them a good option for mapping connections to activities. To control the inflow of requests, a single dispatcher feeds incoming connections to a thread pool: accepted connections wait in a queue, from which the threads in the pool take them. The advantages of this method are that the risks brought by concurrency are limited, latencies are more predictable, and there is no overload, because additional requests that meet a full queue are simply rejected.
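The dispatcher-plus-bounded-queue idea can be simulated in a single process (a toy sketch; the names `dispatch`, `takeNext`, and `QUEUE_LIMIT` are made up for illustration, and a real pool would hand queued connections to worker threads):

```javascript
// Bounded queue in front of a (simulated) thread pool: accepted
// connections wait in the queue; when it is full, additional requests
// are rejected instead of overloading the server.
const QUEUE_LIMIT = 2;
const queue = [];

function dispatch(connection) {
  if (queue.length >= QUEUE_LIMIT) {
    return false; // queue full: reject rather than overload
  }
  queue.push(connection); // connection waits for a free thread
  return true;
}

function takeNext() {
  return queue.shift(); // a pool thread would pick up the next connection
}

console.log(dispatch('conn-1')); // true
console.log(dispatch('conn-2')); // true
console.log(dispatch('conn-3')); // false: rejected, queue is full
takeNext();                      // a "thread" frees a slot
console.log(dispatch('conn-3')); // true
```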


The event-driven strategy, instead of synchronous blocking, uses an asynchronous approach so that a single thread is mapped to many connections. The thread handles the various events, and if it is saturated, new events are placed in a queue. The dynamism lies in a loop: events are removed from the queue to be processed, while new ones are added to it. But it’s not all good news; it’s worth mentioning that debugging becomes more difficult, because an event-driven program relies on asynchronous calls and callbacks that are triggered by events handled elsewhere in the code.
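A toy version of that loop, assuming a simple array stands in for the event queue (handlers can enqueue new events while the loop drains it):

```javascript
// Minimal event loop sketch: events wait in a queue; one loop dequeues
// and runs handlers, and handlers may enqueue further events.
const events = [];
const handled = [];

function emit(name) {
  events.push(name); // new events join the back of the queue
}

function runLoop() {
  while (events.length > 0) {
    const event = events.shift(); // take the next event off the queue
    handled.push(event);
    if (event === 'request') emit('response'); // handler enqueues more work
  }
}

emit('request');
emit('request');
runLoop();
console.log(handled); // ['request', 'request', 'response', 'response']
```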

Handling Concurrency with Node.js

Concurrency is implemented in Node.js using a single-threaded event loop, which is why it’s a good fit for handling multiple I/O requests such as HTTP requests and database operations. We saw earlier, in the event-driven strategy (which is the one Node.js uses), what it means for the process to use a single thread, but what about the event loop?

In the event loop, functions are dequeued onto a call stack and executed by the interpreter. Since JavaScript is single-threaded, this would normally mean a slow environment where the interpreter has to wait until each function is finished before it can move on to the next one. So how is this solved? Through the use of callbacks: non-blocking functions that are passed to other functions as arguments so that later…you guessed it, they can be called back. Callbacks are sent off to APIs; the interpreter sees this as an absolute win and removes them from the call stack, so that other functions on the call stack can be executed. When the API finishes its work, it pushes the callback to the task queue. Once the event loop sees that the call stack is empty, it removes a function from the task queue and moves it to the call stack to be processed. In other words, callbacks from the task queue only run once the call stack has emptied.
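A minimal demonstration of this ordering in plain Node.js: the setTimeout callback is handed to a timer API and queued, so it only runs after the synchronous code has emptied the call stack, even with a 0 ms delay.

```javascript
// Event loop ordering: synchronous code runs to completion first;
// the queued callback runs only once the call stack is empty.
const order = [];

order.push('start');

setTimeout(() => {
  // Runs later, when the call stack has emptied.
  order.push('timeout callback');
  console.log(order.join(' -> ')); // start -> end -> timeout callback
}, 0);

order.push('end');
// At this point the callback has NOT run yet: order is ['start', 'end'].
```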


Concurrency and Oracle

As indicated by Oracle, when dealing with concurrency in databases there are three concerns:

Dirty reads: A transaction reads data that has been written by another transaction that has not been committed yet.

Non-repeatable (fuzzy) reads: A transaction rereads data it has previously read and finds that another committed transaction has modified or deleted the data.

Phantom reads (or phantoms): A transaction re-runs a query returning a set of rows that satisfies a search condition and finds that another committed transaction has inserted additional rows that satisfy the condition.

Regarding data, Oracle provides concurrency through locking mechanisms: application designers define the transactions, and Oracle takes care of the locking using either a shared lock mode or an exclusive lock mode. In shared lock mode, multiple users can access the resource, but to write they need the exclusive lock mode, under which the transaction holding the lock is the only one that can modify the resource until the lock is released.
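The shared-versus-exclusive distinction can be illustrated with a toy lock in JavaScript (a simulation of the semantics only, not how Oracle implements locking; `RowLock` and its methods are hypothetical names):

```javascript
// Toy readers-writer lock: many concurrent readers (shared mode) are
// allowed, but a writer (exclusive mode) must be alone on the resource.
class RowLock {
  constructor() {
    this.readers = 0;     // count of shared holders
    this.writer = false;  // is an exclusive lock held?
  }
  acquireShared() {
    if (this.writer) return false; // blocked by an exclusive lock
    this.readers++;
    return true;
  }
  acquireExclusive() {
    if (this.writer || this.readers > 0) return false; // must be alone
    this.writer = true;
    return true;
  }
  releaseShared() { this.readers = Math.max(0, this.readers - 1); }
  releaseExclusive() { this.writer = false; }
}

const lock = new RowLock();
console.log(lock.acquireShared());    // true: readers can share
console.log(lock.acquireShared());    // true: a second reader joins
console.log(lock.acquireExclusive()); // false: writer must wait for readers
lock.releaseShared();
lock.releaseShared();
console.log(lock.acquireExclusive()); // true: exclusive once readers are gone
```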

Concurrency and MongoDB

MongoDB provides concurrency through optimistic concurrency control, which means transactions can make changes to the data without first checking for possible conflicts with other transactions running at the same time. When MongoDB detects a conflict at commit time, it retries the operation that caused it. The type of locking MongoDB uses is multi-granular: operations can lock at the global, database, or collection level, and each individual storage engine can implement its own controls. Because there are many levels, some of the locks can be yielded to keep performance strong in the face of long-running transactions. Finally, like Oracle, it has locking modes based on read and write operations, but it enhances them with intent-based locks (intent shared and intent exclusive). To make this concrete: a write operation that takes an exclusive lock on a collection also takes intent exclusive locks at the global and database levels.
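The version-check-and-retry idea behind optimistic concurrency control can be sketched in plain JavaScript (an in-process simulation, not the MongoDB driver API; the `version` field and the `commit`/`withRetry` helpers are hypothetical):

```javascript
// Optimistic concurrency sketch: read without locking, then commit only
// if nothing changed the document since the read; retry on conflict.
let doc = { _id: 1, balance: 100, version: 1 };

function commit(read, update) {
  // The commit succeeds only if the document is unchanged since we read it.
  if (doc.version !== read.version) return false; // conflict detected
  doc = { ...doc, ...update, version: doc.version + 1 };
  return true;
}

function withRetry(op, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const snapshot = { ...doc };                    // read without a lock
    if (commit(snapshot, op(snapshot))) return true; // re-read and retry
  }
  return false;
}

// Two "transactions" read the same snapshot; the second one conflicts.
const snapA = { ...doc };
const snapB = { ...doc };
commit(snapA, { balance: snapA.balance - 30 });              // ok, version -> 2
console.log(commit(snapB, { balance: snapB.balance + 10 })); // false: conflict
console.log(withRetry(d => ({ balance: d.balance + 10 })));  // true after re-read
```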

Miguel Morales
