New Client API using GraphQL


In 2018, we introduced our Server API (REST/JSON), which lets you integrate Kinow services into "server" applications, such as websites.

Today, we are proud to announce the launch of our new Client API, built with GraphQL.

1. Server API usage

With the Server API, you can integrate all the resources of a Kinow video platform into an existing website, such as video lists from the CMS or user account information.

This API also lets you edit data on the fly or create new records: it runs in an administrator context, with write access to all platform data.

It is also well suited to mass actions, such as synchronizing user accounts with an external SSO.
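As a minimal sketch of such a mass action (the data shapes below are hypothetical, not Kinow's actual schema), synchronizing with an external SSO boils down to diffing the SSO directory against the platform's accounts, then applying the resulting create/update/delete operations through the Server API:

```python
# Illustrative sketch: compute which accounts to create, update, or delete
# when synchronizing an external SSO directory with the platform.
# The user fields are hypothetical, not Kinow's actual schema.

def diff_accounts(sso_users, platform_users):
    """Return (to_create, to_update, to_delete), matching accounts by email."""
    sso = {u["email"]: u for u in sso_users}
    platform = {u["email"]: u for u in platform_users}

    to_create = [u for email, u in sso.items() if email not in platform]
    to_update = [u for email, u in sso.items()
                 if email in platform and platform[email] != u]
    to_delete = [u for email, u in platform.items() if email not in sso]
    return to_create, to_update, to_delete


sso_side = [{"email": "a@example.com", "name": "Alice"},
            {"email": "b@example.com", "name": "Bob"}]
platform_side = [{"email": "b@example.com", "name": "Bobby"},
                 {"email": "c@example.com", "name": "Carol"}]

create, update, delete = diff_accounts(sso_side, platform_side)
```

Each resulting list would then be sent to the corresponding Server API write endpoints.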

Server API

Server API documentation:

2. Client API usage

The Client API introduces a user context inside a client application. Needs differ between a website hosted on a server, where the client cannot access the code, and a mobile or set-top-box (STB) application, where the client has full access.

The Client API therefore acts as a proxy, bridging the client application and the Server API: it authenticates the user in their own context and limits data access according to the customer's permissions. Using the Client API avoids having to develop a proxy server between a client application and the Server API.

Take the catalog view as an example: the Client API returns to the user only the content they can access, filtering by access restrictions (TVOD/SVOD) and geolocation.
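This server-side filtering can be sketched as follows; the field names (`access`, `countries`, `purchases`, and so on) are hypothetical, not Kinow's actual schema:

```python
# Illustrative sketch of the filtering the Client API performs for each user:
# only videos matching the user's entitlements (TVOD/SVOD) and country are
# returned. Field names are hypothetical, not Kinow's actual schema.

def visible_videos(videos, user):
    def allowed(video):
        has_access = (
            (video["access"] == "SVOD" and user["svod_subscriber"])
            or (video["access"] == "TVOD" and video["id"] in user["purchases"])
        )
        in_region = user["country"] in video["countries"]
        return has_access and in_region
    return [v for v in videos if allowed(v)]


catalog = [
    {"id": 1, "access": "SVOD", "countries": {"FR", "BE"}},
    {"id": 2, "access": "TVOD", "countries": {"FR"}},
    {"id": 3, "access": "SVOD", "countries": {"US"}},
]
user = {"country": "FR", "svod_subscriber": True, "purchases": {2}}
result = visible_videos(catalog, user)
```

Here the user sees videos 1 and 2 but not 3, which is geo-restricted to another region.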

Client API

Client API documentation:

3. Very high scalability

Kinow provides a technical infrastructure designed to absorb major activity peaks. The Client API for mobile applications and set-top boxes can handle heavy load while ensuring very low latency for the user.

As a result, the Client API can handle millions of user requests simultaneously.


4. GraphQL requests

All queries to the Kinow Client API use GraphQL, which gives front-end application integrators great flexibility. A single query can fetch related data across resources, which greatly reduces the number of API calls and therefore the integration time.
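As a sketch of such a cross-resource query, one GraphQL request can fetch a video list together with each video's categories and the viewer's own account in a single round trip. The query fields below are hypothetical, not Kinow's actual schema; only the JSON envelope (`{"query": ..., "variables": ...}`) is the standard GraphQL-over-HTTP shape:

```python
import json

# One GraphQL request fetching several related resources at once.
# Field names are hypothetical, not Kinow's actual schema.
QUERY = """
query Catalog($first: Int!) {
  videos(first: $first) {
    id
    title
    categories { name }
  }
  me { email }
}
"""

# The request body POSTed to the API endpoint.
payload = json.dumps({"query": QUERY, "variables": {"first": 10}})
decoded = json.loads(payload)
```

With REST, the same data would typically require one call per resource (videos, then categories, then the account).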

5. Serverless architecture

The Client API is hosted on a serverless architecture based on AWS Lambda.

Serverless computing lets you build and run applications and services without thinking about servers: Kinow does not need to provision, scale, or manage any servers. Nearly any type of application or backend service can be built this way, with built-in availability and fault tolerance.

Resource scaling is handled automatically by running code in response to each trigger. The code runs in parallel and processes each trigger individually, scaling precisely with the size of the workload.
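As a minimal sketch, a Lambda function is just a handler invoked once per event. The `(event, context)` signature is Lambda's standard Python interface; the request body format below is a hypothetical example modeled on an API Gateway proxy event:

```python
import json

def handler(event, context):
    """Minimal sketch of a Python AWS Lambda handler.

    Each incoming request triggers one invocation, and invocations run in
    parallel, so scaling follows the workload with no server to manage.
    """
    body = json.loads(event.get("body") or "{}")
    query = body.get("query", "")
    # A real deployment would execute the GraphQL query here.
    return {"statusCode": 200,
            "body": json.dumps({"received": bool(query)})}


# Local invocation with a fake event (no AWS needed for the sketch).
response = handler({"body": json.dumps({"query": "{ videos { id } }"})}, None)
```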

6. Proxy gateway

A proxy gateway, built on AWS API Gateway, caches API call outputs to ensure optimal response times and avoid consuming unnecessary resources. A dashboard lets Kinow track calls in real time and view performance metrics, data latency, and error rates.
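The gateway-side caching can be sketched as a response cache keyed by the request, with a time-to-live (TTL); this is an illustrative in-memory model of the pattern, not API Gateway's actual implementation:

```python
import time

class ResponseCache:
    """Sketch of gateway-style response caching: identical requests within
    the TTL are served from the cache instead of hitting the backend."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}          # key -> (expiry_timestamp, response)
        self.backend_calls = 0   # for illustration only

    def get(self, key, compute):
        entry = self.store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]                      # cache hit
        self.backend_calls += 1
        response = compute()                     # cache miss: call backend
        self.store[key] = (time.monotonic() + self.ttl, response)
        return response


cache = ResponseCache(ttl_seconds=60)
first = cache.get("GET /videos", lambda: {"videos": [1, 2, 3]})
second = cache.get("GET /videos", lambda: {"videos": [1, 2, 3]})
```

The second identical request is served from the cache, so the backend is called only once.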

7. SQL engine  

Storage and write operations are handled by AWS Aurora, a relational database engine that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of classic databases such as MySQL/MariaDB.

Amazon Aurora offers greater than 99.99% availability. It has fault-tolerant and self-healing storage built for the cloud that replicates six copies of your data across three Availability Zones.

8. Read: cache system

All READ requests are served from a cache backed by a NoSQL engine, AWS Elasticsearch. The SQL server is therefore almost never queried for reads, which limits resource usage.

Elasticsearch is designed to deliver consistently fast performance, regardless of scale and load. Average service-side latency is typically on the order of a few milliseconds.

Our webhook system, the Notification Manager, notifies in real time of any data write and refreshes the cache keys related to the targeted resource.
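The read path can be sketched as follows; the class and key names are hypothetical, with a plain dictionary standing in for both the SQL engine and the Elasticsearch cache:

```python
class ReadCache:
    """Sketch of the read-path pattern: reads hit a cache, and a write
    notification (the webhook's role here) invalidates the cache key for
    the targeted resource so the next read refreshes from the database."""

    def __init__(self, database):
        self.database = database          # stands in for the SQL engine
        self.cache = {}                   # stands in for Elasticsearch

    def read(self, resource_id):
        if resource_id not in self.cache:
            self.cache[resource_id] = self.database[resource_id]
        return self.cache[resource_id]

    def on_write_notification(self, resource_id):
        # Webhook-style callback: drop the stale key after a write.
        self.cache.pop(resource_id, None)


db = {"video:1": {"title": "Old title"}}
store = ReadCache(db)
stale = store.read("video:1")             # first read populates the cache
db["video:1"] = {"title": "New title"}    # write lands in SQL
store.on_write_notification("video:1")    # webhook invalidates the key
fresh = store.read("video:1")             # next read refreshes from SQL
```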

9. Write: queue system

All WRITE requests are queued via AWS SQS and processed on the fly, avoiding unnecessary load on the Aurora database server.

Using SQS, we can send any volume of data without losing messages or requiring other services to be available, which increases the overall fault tolerance of the system: multiple copies of every message are stored redundantly across multiple Availability Zones.
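The write path can be sketched with an in-memory queue, where `queue.Queue` stands in for SQS; the resource keys are hypothetical:

```python
import queue

# Sketch of the write path: requests are enqueued and a worker drains them,
# so the database is written at its own pace instead of under request-time
# pressure. queue.Queue stands in for AWS SQS here.
write_queue = queue.Queue()
database = {}

def enqueue_write(resource_id, data):
    write_queue.put((resource_id, data))   # fast, non-blocking for the client

def drain(q, db):
    # Worker loop: process queued writes one by one, in order.
    while not q.empty():
        resource_id, data = q.get()
        db[resource_id] = data
        q.task_done()


enqueue_write("user:1", {"name": "Alice"})
enqueue_write("user:1", {"name": "Alice B."})
drain(write_queue, database)
```

Because the queue preserves order, the database ends up with the latest write for each resource.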