Sequential Cross Platform Integration Framework

Sequential's cross-platform business framework is an extensible base on which to build business applications. From low-level technical services such as logging and configuration/environment management, to mid-level business services such as security, data and reporting management, to high-level business process management, our framework will fit your company's needs.

The main components are discussed below:

Role-Based Access Control (RBAC) Security Server

Role-based access control plays a major part in good business process management and IT strategy. Our Security Server is a one-stop shop for all firm-wide resource and permission management. Using our Security Server, all Resources, Resource Types, Roles, and the Operations that those Roles are allowed to perform on those Resources and Resource Types can be captured in one place. This aids the transparency and visualisation of business processes, and it integrates seamlessly with our Business Process Server, or with any existing frameworks and applications across many disparate technologies.

Your user administrators then only need to move users in and out of roles, making the administration of permissions in your organisation a much simpler and more cost-effective task.
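
The model above — roles granting operations on resource types, with users assigned only to roles — can be sketched in a few lines. This is an illustrative example, not the Security Server's actual API; all class and method names are assumptions.

```java
import java.util.*;

// Minimal role-based permission check: roles map to the operations they
// may perform on each resource type; users hold roles, never direct grants.
public class RbacSketch {
    // role -> (resource type -> allowed operations)
    private final Map<String, Map<String, Set<String>>> grants = new HashMap<>();
    private final Map<String, Set<String>> userRoles = new HashMap<>();

    public void grant(String role, String resourceType, String operation) {
        grants.computeIfAbsent(role, r -> new HashMap<>())
              .computeIfAbsent(resourceType, t -> new HashSet<>())
              .add(operation);
    }

    public void assignRole(String user, String role) {
        userRoles.computeIfAbsent(user, u -> new HashSet<>()).add(role);
    }

    // A user is allowed if any of their roles grants the operation.
    public boolean isAllowed(String user, String resourceType, String operation) {
        for (String role : userRoles.getOrDefault(user, Set.of())) {
            if (grants.getOrDefault(role, Map.of())
                      .getOrDefault(resourceType, Set.of())
                      .contains(operation)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        RbacSketch rbac = new RbacSketch();
        rbac.grant("ReportViewer", "Report", "Read");
        rbac.assignRole("alice", "ReportViewer");
        System.out.println(rbac.isAllowed("alice", "Report", "Read"));  // true
        System.out.println(rbac.isAllowed("alice", "Report", "Write")); // false
    }
}
```

Note how administration stays cheap: revoking access for a user is a single `userRoles` change, with no per-resource edits.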

Business Process Server

The Business Process Server allows for the capture of high-level process flows, mid-level workflows and low-level data flows in one place, enabling a company to visualise, at a business level, what is happening at any point in time.

With this framework, languages such as C# and Java (or both) can be used to write the custom logic your business requires, while the framework itself orchestrates the flows. This gives the organisation both flexibility and control.
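
The split of responsibilities described above — custom logic supplied as steps, sequencing owned by the framework — can be sketched as follows. This is a hypothetical illustration, not the Business Process Server's real API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// The business writes the steps; the orchestrator owns the flow, so the
// sequence stays visible and controlled in one place.
public class FlowSketch {
    private final List<String> names = new ArrayList<>();
    private final List<UnaryOperator<String>> steps = new ArrayList<>();

    public FlowSketch step(String name, UnaryOperator<String> logic) {
        names.add(name);
        steps.add(logic);
        return this;
    }

    public String run(String input) {
        String value = input;
        for (int i = 0; i < steps.size(); i++) {
            value = steps.get(i).apply(value); // framework drives each step
        }
        return value;
    }

    public static void main(String[] args) {
        FlowSketch flow = new FlowSketch()
            .step("trim", String::trim)
            .step("upper", String::toUpperCase);
        System.out.println(flow.run("  new trade ")); // NEW TRADE
    }
}
```

Because the orchestrator holds the step list, the flow can be rendered on a dashboard or audited without reading the custom logic itself.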

Tabular and Hierarchical Data Providers

The Tabular (and Hierarchical) Data Server allows you to easily and rapidly create Tasks that interact with (and provide data to) clients, whether they be C#, Java, COM, VBA, Excel, Access, RESTful APIs or Web Services. Tasks can be written in Java or C#, meaning that with just a few lines of code you can connect and bridge these technologies across your organisation. For example, you could have a Java server natively providing real-time data straight into Excel or a .NET client, or vice versa.
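
As a sketch of what "a few lines of code" might look like, the example below shows a tabular Task as a named, typed table that the framework could then marshal to any of the clients listed above. The `Task` interface and `priceTask` are illustrative assumptions, not the product's API.

```java
import java.util.List;

// A tabular Task: column names plus rows, independent of the client that
// will eventually consume it (Excel, C#, REST, ...).
public class TabularTaskSketch {
    interface Task {
        List<String> columns();
        List<List<Object>> rows();
    }

    // Example task serving a tiny price table.
    static Task priceTask() {
        return new Task() {
            public List<String> columns() {
                return List.of("Ticker", "Price");
            }
            public List<List<Object>> rows() {
                return List.of(
                    List.<Object>of("ABC", 101.5),
                    List.<Object>of("XYZ", 99.25));
            }
        };
    }

    public static void main(String[] args) {
        Task t = priceTask();
        System.out.println(t.columns());          // [Ticker, Price]
        System.out.println(t.rows().size());      // 2
    }
}
```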

Data Scrubbing

Every large organisation will have a need to cleanse, or scrub, data coming from one or more systems. The framework allows for the creation of efficient data workflows that can compare, validate or flag data items for human verification. A full audit trail of each data item's state (value, updated on, updated by, etc.) is maintained, and the workflows can run in full or incremental mode. Examples of data validation rules include detecting new data, large differences in data values or differences between sets of data, but any complex rule can also be written. These workflows can also be used in conjunction with the workflows that gather, store and process the raw data in the first place. Examples of data to scrub include Market Data, Trade Data and Reference Data.
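
One such rule — flag an item for human verification when its new value jumps too far from the previous one, while appending every update to the audit trail — could be sketched like this. The class, field names and the 10% tolerance used below are assumptions for illustration only.

```java
import java.util.*;

// Scrubbing rule sketch: large relative change -> flag for verification.
// Every update is recorded, giving the full audit trail of each item's
// state (value, updated on, updated by).
public class ScrubSketch {
    record AuditEntry(double value, String updatedBy, long updatedOn) {}

    private final Map<String, List<AuditEntry>> audit = new HashMap<>();

    // Returns true when the item needs human verification.
    public boolean update(String item, double newValue, String user, double tolerance) {
        List<AuditEntry> trail = audit.computeIfAbsent(item, k -> new ArrayList<>());
        boolean flagged = false;
        if (!trail.isEmpty()) {
            double prev = trail.get(trail.size() - 1).value();
            // Relative jump beyond the tolerance -> flag.
            flagged = prev != 0 && Math.abs(newValue - prev) / Math.abs(prev) > tolerance;
        }
        trail.add(new AuditEntry(newValue, user, System.currentTimeMillis()));
        return flagged;
    }

    public List<AuditEntry> trail(String item) {
        return audit.getOrDefault(item, List.of());
    }

    public static void main(String[] args) {
        ScrubSketch s = new ScrubSketch();
        System.out.println(s.update("ABC.close", 100.0, "feed", 0.10)); // false: first value
        System.out.println(s.update("ABC.close", 125.0, "feed", 0.10)); // true: 25% jump
        System.out.println(s.trail("ABC.close").size());               // 2
    }
}
```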

Data Caching

Having large sets (terabytes) of Market, Trade or Reference Data brings its own set of issues. If server nodes are to use this data efficiently, they will need to cache it in memory, and the cache itself must be efficient, storing only recently used data at any point in time. It will also need in-memory indexes to serve up data as quickly as possible, and efficient keys that can, firstly, link back to an index in as small a timeframe as possible and, secondly, be easily overridden (for perturbation reasons, for example). The Data Cache can store data in any format, or you can use one of our efficient containers.
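
The "only recently used data" behaviour described above is classic LRU eviction. A minimal sketch, using `LinkedHashMap` in access order with a size bound (the Data Cache's real containers and keying are not shown here):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Bounded LRU cache: the least recently accessed entry is evicted once
// capacity is exceeded, so only recently used data stays in memory.
public class LruCacheSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCacheSketch(int capacity) {
        super(16, 0.75f, true); // true = iterate in access order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict least recently used
    }

    public static void main(String[] args) {
        LruCacheSketch<String, double[]> cache = new LruCacheSketch<>(2);
        cache.put("curve:USD", new double[]{0.01, 0.02});
        cache.put("curve:EUR", new double[]{0.015});
        cache.get("curve:USD");                     // touch USD, so EUR is now eldest
        cache.put("curve:GBP", new double[]{0.03}); // evicts curve:EUR
        System.out.println(cache.keySet());         // [curve:USD, curve:GBP]
    }
}
```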

Node Management

When an organisation has a set of server nodes, it is crucial that they can be managed from a single dashboard, and that the dashboard can see everything happening on each server node. Items like CPU, GPU or memory usage, the number of tasks, data flows or workflows, connected sessions, queues, database connections, threads, etc. need to be monitored, either in an automated fashion or by a support team, to ensure the smooth running of the system.
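
An automated check over such metrics might look like the sketch below: a node publishes a snapshot, and a rule raises an alert for each metric over its threshold. The metric names and limits are assumptions, not the dashboard's actual schema.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Threshold-based monitoring sketch: compare a node's metric snapshot
// against configured limits and collect alerts for anything exceeded.
public class NodeMonitorSketch {
    static List<String> alerts(Map<String, Double> snapshot, Map<String, Double> limits) {
        List<String> alerts = new ArrayList<>();
        for (Map.Entry<String, Double> limit : limits.entrySet()) {
            Double value = snapshot.get(limit.getKey());
            if (value != null && value > limit.getValue()) {
                alerts.add(limit.getKey() + "=" + value + " exceeds " + limit.getValue());
            }
        }
        return alerts;
    }

    public static void main(String[] args) {
        Map<String, Double> snapshot = Map.of("cpuPct", 92.0, "queueDepth", 40.0);
        Map<String, Double> limits = Map.of("cpuPct", 85.0, "queueDepth", 100.0);
        System.out.println(alerts(snapshot, limits)); // [cpuPct=92.0 exceeds 85.0]
    }
}
```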

These nodes also need to be upgraded (or downgraded) in a live fashion, and this upgrade might need to be done in a partial manner (for example upgrading half the nodes with a new version, testing, and then upgrading the other half).

Each physical box runs a DMC Daemon Node, which controls all of this via the dashboard.

Broker Nodes

Broker nodes allow the efficient distribution of tasks across the server nodes. Intelligence can be built in to allow node affinity and failover, and pre-creation can aid scalability. (For example, instead of creating random numbers on the fly, you can create a set of normally distributed random numbers, give them an ID, and pre-cache these using the Data Cache. This even benefits testing: your test can pre-specify the ID, and so becomes a deterministic test again.) Broker nodes can also be split into Domains and can be used from servers as well as clients (Excel, GUI, etc.).
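
The pre-creation example in the paragraph above can be sketched as follows: generate a batch of normally distributed draws up front, file them under an ID, and let callers — including tests — fetch the same batch by that ID. The in-memory map here stands in for the Data Cache; all names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Pre-created random numbers, keyed by ID: a consumer (or a test) that
// pins the ID sees identical draws every run, so the computation that
// uses them becomes deterministic again.
public class PrecreatedRandomsSketch {
    private final Map<String, double[]> store = new HashMap<>();

    // Pre-create and cache a batch of Gaussian draws under an ID.
    public String precreate(String id, int count, long seed) {
        Random rng = new Random(seed);
        double[] draws = new double[count];
        for (int i = 0; i < count; i++) {
            draws[i] = rng.nextGaussian();
        }
        store.put(id, draws);
        return id;
    }

    public double[] fetch(String id) {
        return store.get(id);
    }

    public static void main(String[] args) {
        PrecreatedRandomsSketch cache = new PrecreatedRandomsSketch();
        cache.precreate("batch-42", 1000, 42L);
        System.out.println(cache.fetch("batch-42").length); // 1000
    }
}
```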

Worker Nodes

Worker nodes are the leaf nodes that do the work (that said, a worker node can itself distribute work via its broker node). A hybrid approach can also be taken, where a broker node distributes to worker nodes, and those worker nodes invoke their GPGPUs to further parallelise the work.
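
The two-level fan-out above can be sketched as follows, with a "broker" partitioning a job across "workers" and each worker further parallelising its chunk locally. A parallel stream stands in for the GPGPU fan-out, which this sketch does not model.

```java
import java.util.Arrays;
import java.util.stream.IntStream;
import java.util.stream.LongStream;

// Hybrid fan-out sketch: broker splits the job into chunks, each worker
// processes its chunk with local parallelism, and the broker sums results.
public class HybridFanOutSketch {
    // Worker: sum of squares over its chunk, computed in parallel locally.
    static long workerSumOfSquares(long[] chunk) {
        return Arrays.stream(chunk).parallel().map(x -> x * x).sum();
    }

    // Broker: partition the job and dispatch chunks to workers.
    static long broker(long[] job, int workers) {
        int chunkSize = (job.length + workers - 1) / workers;
        return IntStream.range(0, workers).parallel()
            .mapToLong(w -> {
                int from = w * chunkSize;
                int to = Math.min(from + chunkSize, job.length);
                return from >= to ? 0
                    : workerSumOfSquares(Arrays.copyOfRange(job, from, to));
            })
            .sum();
    }

    public static void main(String[] args) {
        long[] job = LongStream.rangeClosed(1, 10).toArray();
        System.out.println(broker(job, 3)); // 385 = 1^2 + 2^2 + ... + 10^2
    }
}
```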