
YGET initial architecture explanation

I am currently a Principal Architect at Corefount Inc., where we are putting together a mobile platform for preventive healthcare. Accomplishing this goal involves a few requirements:

  • Mobile Application UI
  • Web Application UI
  • Data Driven Backend

There are countless ways to approach building these items. I will lay out the choices we have begun implementing and why we chose them. One of the primary driving factors behind the decisions is the existing staff’s skill set: my boss, our QA lead, and I are all experienced with Microsoft technologies. We also have no infrastructure of our own, and we consider the overhead of maintaining servers an unnecessary burden at this stage. Another leading factor is the initial team’s Xamarin experience. Now that I have explained our background, I will expand on the initial requirements.

Mobile Application


The mobile application allows users to track their daily nutritional information while also prompting them about what they are lacking. It will also help them avoid items they are allergic to or simply have no interest in consuming. The application needs to allow users to log in with corporate credentials and may need to be hosted in a corporate app store. Users should be able to use the application offline, and it should support Android, iOS, and Windows Phone. This led to the following decisions:

  • Xamarin.Forms
    • Xamarin.Forms lets us build a cross-platform UI in XAML and easily override behavior per platform.
  • SQLite
    • SQLite makes caching data locally easy.
    • Xamarin has an easy-to-use cross-platform plugin for SQLite.
  • Web API 2 client
    • The Web API 2 client works on all the mobile platforms as a PCL.
    • The backend is powered by Web API (discussed later).
    • The identity provider is also ASP.NET-based and can be reached through the Web API client.
  • Syncfusion
    • Provides Xamarin.Forms controls that let us quickly put together dashboards for users to track their progress.
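
As a rough sketch of how the SQLite plugin supports offline use (the `FoodEntry` model and database path here are hypothetical; the API calls are those of the sqlite-net library the plugin wraps):

```csharp
using System.Collections.Generic;
using SQLite;

// Hypothetical model for a day's nutrition entry.
public class FoodEntry
{
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }
    public string Name { get; set; }
    public int Calories { get; set; }
}

public class LocalCache
{
    private readonly SQLiteConnection _db;

    public LocalCache(string databasePath)
    {
        // sqlite-net creates the file and table if they do not exist.
        _db = new SQLiteConnection(databasePath);
        _db.CreateTable<FoodEntry>();
    }

    // Save an entry locally so it survives loss of connectivity.
    public void Save(FoodEntry entry) => _db.InsertOrReplace(entry);

    // Read everything back for the daily dashboard.
    public List<FoodEntry> GetAll() => _db.Table<FoodEntry>().ToList();
}
```

Entries cached this way can later be synchronized to the backend through the Web API client once the device is back online.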

 

Web Application


The web application has many of the same requirements as the mobile application. The only true differences are that the web application doesn’t need internal hosting or offline usage, and it needs to be responsive. This led to the following decisions:

  • AngularJS
    • allows for creating a SPA and seems to be the easiest framework to hire maintainers for
    • allows reuse of existing jQuery widgets as directives
    • allows code reuse between the data-management portals and the consumer-facing application
  • TypeScript
    • gives us type checking and IntelliSense when writing large amounts of JavaScript
    • seems to be an easier pickup for non-JavaScript developers (sometimes you have to throw bodies at a problem)
    • used with DefinitelyTyped, it brings IntelliSense and compile-time checking to existing libraries
  • Bootstrap
    • lets us quickly create a responsive website
    • customizable via the LESS or Sass compiler
  • LESS
    • helps to create complex CSS
    • extends Bootstrap 3
  • Kendo UI
    • facilitates creating dashboards quickly
    • works with AngularJS
  • npm + gulp
    • manages JavaScript packages
    • allows for build automation and integrates easily with VS 2015

 

Data Driven Backend


The backend is going to be broken up into multiple sections. As with most platforms, the data is the real key to success. It is also the source of much of the complexity. For the YGET platform, the data drives decisions that affect patients’ lives and has to be good enough not to cause them harm. Because of this, the data has to be properly vetted and organized for use, and there should be multiple redundancies to ensure patients are not harmed.

There are some overarching technology choices that can be discussed in this section. One of the major choices was to use Microsoft Azure. Azure lets us host our solution more easily than managing our own data centers or juggling multiple data-center vendors. Azure was also chosen over other cloud providers because of the team’s knowledge of the product and our relationship with the local Microsoft team (I can’t state enough how important having local contacts is). SQL Server was chosen as our primary relational database engine; Corefount’s CTO (my boss) has more experience with SQL Server than I do, so it was an easy decision.

Now that the overarching decisions have been explained, it’s time to look at the systems required to power the backend of the platform. I will try to organize the following systems from least dependent to most dependent:

 

Data Generation and Acquisition


YGET’s need for data will require both heavy manual input and a number of automated processes. First, let us focus on the automated processes. The major one currently is a set of targeted web crawlers parsing aggregate data feeds. There are also smaller processes that grab published data from the web for import (PDFs and the like). These processes run as Azure WebJobs because of the simplicity of scheduling and the ability to expand resources on demand.

The web crawler needed to be a low-overhead, fast worker, so we decided to write it in Visual C++. This led to the following decisions:

  • Boost
    • full of helper functions and classes (especially for string searching)
  • Casablanca (the C++ REST SDK)
    • utilizes PPL
    • supports JSON and makes the HTTP life cycle easier
  • sql api
    • the simplest SQL Server client library I could find

After the web crawler has parsed and saved the data successfully, an SSIS package takes the data and moves it into the management database.

 

Data Management and Configuration


After the data is acquired, it needs to be organized and sanitized. The data may also be incomplete, in which case manual input is needed. To accomplish this, an internal website was put together using the following:

  • ASP.NET Web API
    • easily exposes OData endpoints for use with the Kendo UI tools
    • exposes data quickly and easily
  • AngularJS
    • integrates with Kendo UI easily
    • allows for a SPA
    • internationalization support for manipulating international market data
  • Kendo UI
    • controls allow for easy data management
    • controls support paging and filtering with OData out of the box
    • internationalization ease for manipulating international market data
  • Entity Framework
    • makes data access a non-issue
    • works in the simplest manner possible with ASP.NET Web API OData
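
As a minimal sketch of one such endpoint (the `FoodItem` entity and `ManagementContext` DbContext are hypothetical, and the namespaces and attribute names assume the Web API OData packages of the time):

```csharp
using System.Linq;
using System.Web.OData;

public class FoodItemsController : ODataController
{
    private readonly ManagementContext _db = new ManagementContext();

    // [EnableQuery] lets Kendo UI send $filter, $orderby, $top and $skip,
    // which Entity Framework translates into SQL on the server.
    [EnableQuery]
    public IQueryable<FoodItem> Get()
    {
        return _db.FoodItems;
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing) _db.Dispose();
        base.Dispose(disposing);
    }
}
```

Because the controller returns an `IQueryable`, the grid’s paging and filtering never pull more rows than a page at a time.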

Data Endpoints


Finally, the last piece of the puzzle is exposing the data to the end user. The data needs to be available to users globally, and it needs to be secured so that users can access only their own data. Users may be part of a corporate domain, use a social login, or use a YGET local account. These requirements led to the following decisions:

  • ASP.NET Web API
    • easily exposes HTTP endpoints
  • ASP.NET Identity
    • allows integration with third-party OAuth services
    • allows integration with on-premises and Azure Active Directory
    • supports local accounts
  • AutoMapper
    • makes exposing DTOs easy
  • Azure Service Bus
    • allows offloading of computationally heavy code
  • Entity Framework
    • makes data access a non-issue
  • SQL Azure
    • a hosted relational database instance that covers our data needs in the cloud
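
As a sketch of how AutoMapper keeps the EF entities off the wire (the `Patient` entity and `PatientDto` are hypothetical; the static `Mapper` API shown is from the AutoMapper versions current at the time):

```csharp
using AutoMapper;

// Hypothetical EF entity.
public class Patient
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int CaloriesToday { get; set; }
}

// Hypothetical DTO: only these properties ever leave the API.
public class PatientDto
{
    public string DisplayName { get; set; }
    public int CaloriesToday { get; set; }
}

public static class MappingConfig
{
    public static void Register()
    {
        // Configure the entity-to-DTO map once at startup.
        Mapper.CreateMap<Patient, PatientDto>()
              .ForMember(d => d.DisplayName,
                         o => o.MapFrom(s => s.FirstName + " " + s.LastName));
    }
}

// In a controller action:
// var dto = Mapper.Map<PatientDto>(patientEntity);
```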

Azure Service Bus as a scalable request response architecture

I have recently started working for a startup to build their go-to-market apps. The requirements are a mobile app and a web app, with the back end hosted in Microsoft Azure.

One of the initial concerns in the architecture is scalability. When it goes live we expect over 200,000 users, and I expect them to hit almost all at once (driven by a notification, an email, or however we inform them it’s live). Since it is launch day, there can be no delay for the user: it’s the first, and sometimes only, impression a user will have. This means the application must be easily and automatically scalable. Being hosted in Microsoft Azure allows for a plethora of options to scale out. For the startup, I decided the service layer could be implemented using Azure Service Bus.

The service layer needs to return responses to the calling client. This means messages dropped onto the bus need to be tagged in a way that allows them to be identified later. This is easily accomplished; in fact, the brokered message provides a mechanism just for this. The MessageId or CorrelationId can be used to map a message dropped onto the bus back to the client that sent it. Using one of these mechanisms, a message can be dropped onto a Queue and the response pushed back to the clients via a Topic. This diagram shows the workflow:

Service Bus Basics

To accomplish this, the first thing I made was an IObservable implementation for listening on the bus. I tried to create a class that implemented both IObservable<T> and IObservable<BrokeredMessage>, but learned this is not allowed in C#, and for good reason (1). Instead, I created an interface that exposes both observables as properties. This is that definition:


public interface IMessageBusObservable<out T> : IDisposable
{
    IObservable<T> MessageObservable { get; }

    IObservable<BrokeredMessage> BrokeredMessageObservable { get; } 
}

Now I need to implement this interface. For ease, I used Reactive Extensions:


public class QueueObservable<T> : IMessageBusObservable<T>
{
    private readonly QueueClient _queueClient;

    private readonly Subject<BrokeredMessage> _brokeredSubject;

    public QueueObservable(string connectionString,string queueName)
    {
        _brokeredSubject = new Subject<BrokeredMessage>();
        _queueClient = QueueClient.CreateFromConnectionString(connectionString, queueName);
        _queueClient.OnMessage(OnMessage,new OnMessageOptions
        {
            AutoComplete = true,
            AutoRenewTimeout = TimeSpan.FromMinutes(1),
            MaxConcurrentCalls = 1000
        });
    }

    private void OnMessage(BrokeredMessage brokeredMessage)
    {
        _brokeredSubject.OnNext(brokeredMessage);
    }

    #region Implementation of IDisposable

    public void Dispose()
    {
        //A placeholder until I flesh out the QueueClient life cycle    
    }

    #endregion

    #region Implementation of IMessageBusObservable<out T>

    public IObservable<T> MessageObservable {
        get { return _brokeredSubject.Select<BrokeredMessage, T>(x => x.GetBody<T>()); }
    }
    public IObservable<BrokeredMessage> BrokeredMessageObservable => _brokeredSubject;

    #endregion
}
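
To make the intent concrete, here is how a consumer might use it (the `OrderPlaced` message type, connection string, and queue name are all hypothetical):

```csharp
// Inside some host method; BrokeredMessage bodies are
// DataContract-serialized, so a plain POCO message type works.
using (var listener = new QueueObservable<OrderPlaced>(connectionString, "orders"))
{
    // The typed observable hands us deserialized bodies...
    listener.MessageObservable.Subscribe(order => Console.WriteLine(order.Id));

    // ...while the raw observable still exposes metadata such as CorrelationId.
    listener.BrokeredMessageObservable.Subscribe(msg => Console.WriteLine(msg.CorrelationId));

    Console.ReadLine(); // keep the message pump alive while messages arrive
}
```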

With a listener in place, I can now create clients for topics and queues that share a common interface:


public interface IServiceBusClient<in T>
{
    void SendMessage(T message);

    Task SendMessageAsync(T message, CancellationToken cancellationToken);

    void SendMessage(string identifier, T message);

    Task SendMessageAsync(string identifier, T message, CancellationToken cancellationToken);
}

Now here’s the Topic implementation of the interface:


public class TopicClient<T> : IServiceBusClient<T>
{
    private readonly TopicClient _topicClient;

    public TopicClient(string connectionString,string topicName)
    {
        _topicClient = TopicClient.CreateFromConnectionString(connectionString, topicName);
    }

    #region Implementation of IServiceBusClient<T>

    public void SendMessage(T message)
    {
        _topicClient.Send(new BrokeredMessage(message));
    }

    public Task SendMessageAsync(T message, CancellationToken cancellationToken)
    {
        // Note: the Service Bus SDK's SendAsync does not accept a
        // CancellationToken, so the token is currently unused.
        return _topicClient.SendAsync(new BrokeredMessage(message));
    }

    public void SendMessage(string identifier, T message)
    {
        _topicClient.Send(new BrokeredMessage(message)
        {
            CorrelationId = identifier
        });
    }

    public Task SendMessageAsync(string identifier, T message, CancellationToken cancellationToken)
    {
        return _topicClient.SendAsync(new BrokeredMessage(message)
        {
            CorrelationId = identifier
        });
    }

    #endregion
}

Now that I have both of those in place, I can set up a class to synchronize the messages. The actual implementation of the synchronizer is split into two classes for ease of use. The first is the message sender and listener:


public class BusClientSynchronizer<T,TResult> : IDisposable
{
    private readonly List<ObservableCompletionListener<TResult>> _topicCompletionListenerCollection;

    private readonly IServiceBusClient<T> _sender;

    private readonly IMessageBusObservable<TResult> _messageBusObservable;

    public BusClientSynchronizer(IServiceBusClient<T> sender, IMessageBusObservable<TResult> messageBusObservable)
    {
        _sender = sender;
        _messageBusObservable = messageBusObservable;
        _topicCompletionListenerCollection = new List<ObservableCompletionListener<TResult>>();
    }

    public virtual async Task<TResult> SendMessage(T message, CancellationToken cancellationToken)
    {
        var identifier = Guid.NewGuid();
        await _sender.SendMessageAsync(identifier.ToString(),message, cancellationToken);
        var topicCompletionListener = new ObservableCompletionListener<TResult>(_messageBusObservable.BrokeredMessageObservable);
        _topicCompletionListenerCollection.Add(topicCompletionListener);
        RemoveDisposedListeners();
        return await topicCompletionListener.GetResultAsync(identifier, cancellationToken);
    }

    private void RemoveDisposedListeners()
    {
        // Iterate backwards so RemoveAt does not skip the element
        // that shifts into the removed slot.
        for (int i = _topicCompletionListenerCollection.Count - 1; i >= 0; i--)
        {
            if (_topicCompletionListenerCollection[i].Disposed)
            {
                _topicCompletionListenerCollection.RemoveAt(i);
            }
        }
    }

    #region Implementation of IDisposable

    public void Dispose()
    {
        foreach (ObservableCompletionListener<TResult> topicCompletionListener in _topicCompletionListenerCollection)
        {
            if (!topicCompletionListener.Disposed)
            {
                topicCompletionListener.Dispose();
            }
        }
        RemoveDisposedListeners();
    }

    #endregion
}

The second class listens to messages on an IObservable and checks whether the CorrelationId matches the Id that was passed in:


public class ObservableCompletionListener<T> : IDisposable
{
    private readonly IDisposable _topicObservableSubscription;

    internal bool Disposed;

    private readonly TaskCompletionSource<T> _taskCompletionSource;

    private Guid _identifier;

    private CancellationToken _cancellationToken;

    public ObservableCompletionListener(IObservable<BrokeredMessage> topicObservable)
    {
        _taskCompletionSource = new TaskCompletionSource<T>();
        _topicObservableSubscription = topicObservable.Subscribe(OnNext);
    }

    private void OnNext(BrokeredMessage brokeredMessage)
    {
        if (brokeredMessage.CorrelationId == _identifier.ToString())
        {
            // TrySetResult: a duplicate delivery must not throw on a
            // task that has already completed.
            _taskCompletionSource.TrySetResult(brokeredMessage.GetBody<T>());
        }
    }

    public Task<T> GetResultAsync(Guid identifier,CancellationToken cancellationToken)
    {
        _cancellationToken = cancellationToken;
        _identifier = identifier;
        return _taskCompletionSource.Task;
    }

    #region Implementation of IDisposable

    public void Dispose()
    {
        if (!Disposed)
        {
            Disposed = true;
            _topicObservableSubscription?.Dispose();
            // TrySetCanceled avoids racing with a result that arrives
            // between the check and the cancellation.
            if (!_taskCompletionSource.Task.IsCompleted)
            {
                _taskCompletionSource.TrySetCanceled();
            }
        }
    }

    #endregion
}
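
Putting the pieces together, a round trip might look like this (the `ScoreRequest`/`ScoreResult` types and the queue and topic names are hypothetical, and the worker on the other end is assumed to copy the request’s CorrelationId onto its reply; the queue and topic roles can be swapped by pairing different implementations of the same two interfaces):

```csharp
// Inside an async method on the web tier: requests go out through the
// IServiceBusClient, replies come back through the IMessageBusObservable.
var sender = new TopicClient<ScoreRequest>(connectionString, "score-requests");
var replies = new QueueObservable<ScoreResult>(connectionString, "score-replies");

using (var synchronizer = new BusClientSynchronizer<ScoreRequest, ScoreResult>(sender, replies))
{
    // SendMessage stamps the request with a fresh Guid as CorrelationId and
    // completes only when a reply carrying the same CorrelationId arrives.
    ScoreResult result = await synchronizer.SendMessage(
        new ScoreRequest { PatientId = 42 },
        CancellationToken.None);
}
```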

If you see any optimizations, or if this reproduces something that already exists, let me know. I would love to use something simpler if it exists.

 

 

1. If the class were instantiated with a T that inherits from BrokeredMessage, the compiler would not know which implementation of IObservable<T> to call.
