https://en.wikipedia.org/wiki/Staged_event-driven_architecture
The staged event-driven architecture (SEDA) refers to an approach to software architecture that decomposes a complex, event-driven application into a set of stages connected by queues. It avoids the high overhead associated with thread-based concurrency models, and decouples event and thread scheduling from application logic. By performing admission control on each event queue, the service can be well-conditioned to load, preventing resources from being overcommitted when demand exceeds service capacity.
SEDA employs dynamic control to automatically tune runtime parameters (such as the scheduling parameters of each stage) as well as to manage load (like performing adaptive load shedding). Decomposing services into a set of stages also enables modularity and code reuse, as well as the development of debugging tools for complex event-driven applications.
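The queue-per-stage idea with admission control can be sketched in a few lines. This is a minimal illustration only; the stage names, handlers, and capacities are invented for the example and are not taken from any real SEDA implementation:

```python
# Minimal SEDA-style sketch: each stage owns a bounded event queue and a
# worker thread; put_nowait() on a full queue acts as admission control,
# rejecting new work instead of overcommitting resources.
import queue
import threading

class Stage:
    def __init__(self, name, handler, capacity, downstream=None):
        self.name = name
        self.handler = handler                       # per-event application logic
        self.inbox = queue.Queue(maxsize=capacity)   # bounded event queue
        self.downstream = downstream                 # next stage in the pipeline
        self.rejected = 0
        threading.Thread(target=self._run, daemon=True).start()

    def enqueue(self, event):
        """Admission control: reject the event when the queue is full."""
        try:
            self.inbox.put_nowait(event)
            return True
        except queue.Full:
            self.rejected += 1
            return False

    def _run(self):
        while True:
            event = self.inbox.get()
            result = self.handler(event)
            if self.downstream is not None:
                self.downstream.enqueue(result)
            self.inbox.task_done()

# Two illustrative stages: "parse" feeds "render" through a queue.
results = []
render = Stage("render", results.append, capacity=100)
parse = Stage("parse", lambda req: req.upper(), capacity=100, downstream=render)

for req in ["get /a", "get /b"]:
    parse.enqueue(req)

parse.inbox.join()    # wait until parse has handed everything downstream
render.inbox.join()   # wait until render has drained its queue
print(results)
```

Note that event and thread scheduling live entirely in the `Stage` plumbing; the application logic is just the `handler` callable, which is exactly the decoupling the description above refers to.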
http://www.infoq.com/articles/SEDA-Mule
http://berb.github.io/diploma-thesis/original/042_serverarch.html#seda
As a basic concept, it divides the server logic into a series of well-defined stages that are connected by queues. Requests are passed from stage to stage during processing. Each stage is backed by a thread or a thread pool that may be configured dynamically.
The separation favors modularity, as the pipeline of stages can be changed and extended easily. Another very important feature of the SEDA design is resource awareness and explicit control of load. The number of enqueued items per stage and the workload of the thread pool per stage give explicit insight into the overall load factor. In an overload situation, a server can adjust scheduling parameters or thread pool sizes. Other adaptive strategies include dynamic reconfiguration of the pipeline or deliberate request termination. When resource management, load introspection, and adaptivity are decoupled from the application logic of a stage, it is simple to develop well-conditioned services.

From a concurrency perspective, SEDA represents a hybrid approach between thread-per-connection multithreading and event-based concurrency. Having a thread (or a thread pool) dequeue and process elements resembles an event-driven approach, while the use of multiple stages with independent threads effectively utilizes multiple CPUs or cores and tends toward a multi-threaded environment. From a developer's perspective, the implementation of handler code for a certain stage also resembles more traditional thread programming.
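The load introspection and deliberate request termination described above can be sketched as follows; the capacity, threshold, and shedding policy are illustrative assumptions, not values from the SEDA papers:

```python
# Sketch of per-stage load introspection: the queue depth is an explicit
# load signal, and an overloaded stage sheds requests deliberately rather
# than letting the backlog grow without bound.
import queue

class StageQueue:
    def __init__(self, capacity, shed_threshold):
        self.inbox = queue.Queue(maxsize=capacity)
        self.shed_threshold = shed_threshold  # start shedding above this depth
        self.shed = 0

    def load_factor(self):
        """Current queue depth relative to capacity, in [0.0, 1.0]."""
        return self.inbox.qsize() / self.inbox.maxsize

    def offer(self, event):
        if self.inbox.qsize() >= self.shed_threshold:
            self.shed += 1                    # deliberate request termination
            return False
        self.inbox.put_nowait(event)
        return True

sq = StageQueue(capacity=10, shed_threshold=4)
accepted = sum(sq.offer(i) for i in range(8))
print(accepted, sq.shed, sq.load_factor())  # → 4 4 0.4
```

A real controller would also resize the stage's thread pool based on the same signal; the queue depth is the key observable either way.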
The drawback of SEDA is the increased latency caused by queue and stage traversal, even under minimal load. In a later retrospective [Wel10], Welsh also criticized the missing differentiation between module boundaries (stages) and concurrency boundaries (queues and threads). This coupling triggers too many context switches when a request passes through multiple stages and queues. A better solution groups multiple stages together under a common thread pool, which decreases context switches and improves response times. Stages with I/O operations and comparatively long execution times can still be isolated.
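Welsh's suggested grouping can be sketched like this. The stage functions, pool sizes, and routing rule below are invented for illustration; the point is only the structure: fast stages chained by direct calls inside one shared pool, with a queue boundary kept only at the slow I/O stage:

```python
# Fast, CPU-bound stages ("decode", "route") share one pool and are composed
# by plain function calls -- no queue hop, no extra context switch between
# them. Only the slow, blocking stage is isolated behind its own pool.
from concurrent.futures import ThreadPoolExecutor

shared_pool = ThreadPoolExecutor(max_workers=4)   # common pool for fast stages
io_pool = ThreadPoolExecutor(max_workers=2)       # isolated pool for slow I/O

def decode(raw):
    return raw.strip().lower()

def route(msg):            # same-pool stages: plain function composition
    return ("users", msg) if msg.startswith("u:") else ("other", msg)

def slow_io(routed):       # stands in for a blocking disk or database call
    table, msg = routed
    return f"{table}<-{msg}"

def handle(raw):
    routed = route(decode(raw))             # one thread runs both fast stages
    return io_pool.submit(slow_io, routed)  # queue boundary only at the I/O stage

futures = [shared_pool.submit(handle, r) for r in ["  U:alice ", "ping"]]
results = sorted(f.result().result() for f in futures)
print(results)  # → ['other<-ping', 'users<-u:alice']
```

Compared with a queue between every stage, a request here crosses only one concurrency boundary instead of three.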
The SEDA model has inspired several implementations, including the generic server framework Apache MINA and enterprise service buses such as Mule ESB.
http://stackoverflow.com/questions/3570610/what-is-seda-staged-event-driven-architecture
A stage is analogous to an "event"; to simplify the idea, think of SEDA as a series of events sending messages between them.
One reason to use this kind of architecture, I think, is that it fragments the logic so you can connect and decouple each event; it fits well mainly for high-performance services with low-latency requirements.
If you use a Java ThreadPoolExecutor (TPE) per stage, you can monitor the health, throughput, errors, and latency of each stage, and quickly find where the performance bottleneck is. And as a nice side effect, with smaller pieces of code, you can easily test them and increase your code coverage (that was my case).
For the record, this is the internal architecture of Cassandra (NoSQL) and of Mule ESB (AFAIK).
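The per-stage monitoring idea from the answer above can be sketched with a simple wrapper. The metric names and the handler are illustrative; this is not Cassandra's or Mule's actual instrumentation:

```python
# Wrapping each stage handler records throughput, errors, and cumulative
# latency, so the slowest stage -- the bottleneck -- is visible at a glance.
import time

class StageMetrics:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler
        self.processed = 0
        self.errors = 0
        self.total_seconds = 0.0

    def __call__(self, event):
        start = time.perf_counter()
        try:
            return self.handler(event)
        except Exception:
            self.errors += 1
            raise
        finally:
            self.processed += 1
            self.total_seconds += time.perf_counter() - start

    def mean_latency(self):
        return self.total_seconds / self.processed if self.processed else 0.0

parse = StageMetrics("parse", lambda e: e.split())
for line in ["a b", "c d e"]:
    parse(line)
print(parse.name, parse.processed, parse.errors)  # → parse 2 0
```

Because every stage exposes the same counters, comparing `mean_latency()` across stages points directly at the bottleneck.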
https://www.quora.com/Design-Patterns/What-is-Staged-Event-Driven-Architecture-SEDA
SEDA seems to be dead.
A modern take on this, avoiding the infamous "one thread per socket" architecture (which produces a lot of throttling), is found in the node.js and vert.x architectures, which are based on callbacks and internal OS mechanisms that happen to be queues too.
http://matt-welsh.blogspot.com/2010/07/retrospective-on-seda.html
http://www.slideshare.net/planetcassandra/cassandra-summit-2014-monitor-everything