New Node.js features will make it more versatile than ever

Node.js is well-known for its speed and simplicity. As a result, more and more companies are willing to give it a shot. With the release of a new LTS (long-term support) version, 2018 marks a very important date for every Node.js developer. Why exactly should we be so excited? Because the new Node.js 10 features and the possibilities they create are simply amazing!

It’s all about threads!

If there is one thing we can all agree on, it’s that every programming language has its pros and cons. Most popular languages have found their own niche in the world of technology. Node.js is no exception.

We’ve been told for years that Node.js is good for API gateways and real-time dashboards (e.g. with websockets). As a matter of fact, its design itself forced us to depend on the microservice architecture to overcome some of its common obstacles.

At the end of the day, we knew that Node.js was simply not meant for time-consuming, CPU-heavy computation or blocking operations due to its single-threaded design. This is the nature of the event loop itself.

If we block the loop with a complex synchronous operation, it won’t be able to do anything until it’s done. That’s the very reason we use async so heavily or move time-consuming logic to a separate microservice.

This workaround may no longer be necessary thanks to new Node.js features that debuted in version 10. The feature that will make the difference is worker threads. Finally, Node.js will be able to excel in fields where we would normally use a different language.

A good example could be AI, machine learning or big data processing. Previously, all of those required CPU-heavy computation, which left us no choice but to build another service or pick a better-suited language. No more.

Threads!? But how?

This new Node.js feature is still experimental – it’s not meant to be used in a production environment just yet. Still, we are free to play with it. So where do we start?

In order to activate it, you have to use a special feature flag: --experimental-worker. Note that it will only work in Node 10.5+.

node --experimental-worker index.js

Now we can take full advantage of the worker_threads module. Let’s start with a simple HTTP server with two methods:

  • GET /hello (returning JSON object with “Hello World” message),
  • GET /compute (loading a big JSON file multiple times using a synchronous method).

The results are easy to predict. When GET /compute and GET /hello are called simultaneously, we have to wait for the compute path to finish before we can get a response from the hello path. The event loop is blocked until the file loading is done.

Let’s fix it with threads!

As you can see, the syntax is very similar to what we know from Node.js scaling with Cluster. But the interesting part begins here.

Try to call both paths at the same time. Noticed something? Indeed, the event loop is no longer blocked so we can call /hello during file loading.

Now, this is something we have all been waiting for! All that’s left is to wait for a stable API.

Want even more new Node.js features? Here is N-API for building C/C++ modules!

The raw speed of Node.js is one of the reasons we choose this technology. Worker threads are the next step in improving it. But is that really enough?

Node.js is a C-based technology. Naturally, we use JavaScript as the main programming language. But what if we could use C for more complex computation?

Node.js 10 gives us a stable N-API. It’s a standardized API for native modules, making it possible to build modules in C/C++ or even Rust. Sounds cool, doesn’t it?

Building native Node.js modules in C/C++ has just got way easier

A very simple native module can look like this:
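The original snippet is missing from this version of the post; a sketch of a minimal N-API module might look like this (file and function names are assumptions, and building it requires the Node headers and node-gyp):

```cpp
// hello.cc – a minimal N-API addon exposing an add(a, b) function
#include <node_api.h>

// Native implementation: add two numbers passed from JavaScript.
napi_value Add(napi_env env, napi_callback_info info) {
  size_t argc = 2;
  napi_value args[2];
  napi_get_cb_info(env, info, &argc, args, nullptr, nullptr);

  double a, b;
  napi_get_value_double(env, args[0], &a);
  napi_get_value_double(env, args[1], &b);

  // Convert the C++ result back to a Node.js value before returning.
  napi_value result;
  napi_create_double(env, a + b, &result);
  return result;
}

napi_value Init(napi_env env, napi_value exports) {
  napi_value fn;
  napi_create_function(env, "add", NAPI_AUTO_LENGTH, Add, nullptr, &fn);
  napi_set_named_property(env, exports, "add", fn);
  return exports;
}

NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
```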

If you have a basic knowledge of C++, it’s not too hard to write a custom module. The only thing you need to remember is to convert C++ types to Node.js values at the end of your module.

Next thing we need is binding:
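The binding file itself didn’t survive extraction; a minimal binding.gyp along these lines should do (target and file names are assumptions):

```python
# binding.gyp – tells node-gyp which sources to compile
{
  "targets": [
    {
      "target_name": "hello",
      "sources": [ "hello.cc" ]
    }
  ]
}
```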

This simple configuration allows us to build *.cpp files, so we can later use them in Node.js apps.

Before we can make use of it in our JavaScript code, we have to build it and configure our package.json to point at the gypfile (binding file).
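For example, a package.json fragment along these lines (package name assumed; the `gypfile` flag tells npm the package ships a binding.gyp):

```json
{
  "name": "hello-native",
  "gypfile": true,
  "scripts": {
    "install": "node-gyp rebuild"
  }
}
```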

Once the module is good to go, we can use the node-gyp rebuild command to build and then require it in our code. Just like any popular module we use!

Together with worker threads, N-API gives us a pretty good set of tools to build high-performance apps. Forget APIs or dashboards – even complex data processing or machine learning systems are far from impossible. Awesome!

Full support for HTTP/2 in Node.js? Sure, why not!

We’re able to compute faster. We’re able to compute in parallel. So how about assets and pages serving?

For years, we were stuck with the good old http module and HTTP/1.1. As more and more assets are being served by our servers, we increasingly struggle with loading times. Every browser has a maximum number of simultaneous persistent connections per server/proxy, especially for HTTP/1.1. With HTTP/2 support, we can finally kiss this problem goodbye.

So where do we start? Do you remember this basic Node.js server example from every tutorial on web ever? Yep, this one:

With Node.js 10, we get a new http2 module allowing us to use HTTP/2.0! Finally!

Full HTTP/2 support in Node.js 10 is what we have all been waiting for

The future is bright!

The new Node.js features bring fresh air to our tech ecosystem. They open up completely new possibilities for Node.js. Have you ever imagined that this technology could one day be used for image processing or data science? Neither have I.

This version gives us even more long-awaited features, such as support for ES modules (still experimental, though) or changes to fs methods, which can finally use promises rather than callbacks.

Want even more new Node.js features? Watch this neat Node.js 10 features report from Traversy Media.

As you can see from the chart below, the popularity of Node.js seems to have peaked in early 2017, after years and years of growth. It’s not really a sign of slowdown, but rather of the maturation of this technology.

However, I can definitely see how all of these new improvements, as well as the growing popularity of Node.js blockchain apps (based on the truffle.js framework), may give Node.js a further boost so that it can blossom again – in new types of projects, roles and circumstances.

The TSH Node.js team is so looking forward to 2019!

Apiary: The API documentation tool that will change your API game

Say “no” to outdated documentation with Apiary

APIs open up a world of possibilities for developers all over the world, making it possible to create third-party software. As such, they are the foundation of their creativity. On the downside, they require efficient cooperation, in both the short and long term, between backend and frontend developers, testers and other project members. Oracle’s Apiary is one of the most successful attempts at making this cooperation smoother, speeding up and simplifying development in the process. Say “hi” to the Apiary API documentation tool with our quick guide.


There are few tools on the market as complex and advanced as Apiary. It’s an API documentation tool (though really, it’s much more than that) which allows you to quickly design an API and share it with anyone concerned, using a mock server provided by Oracle. It has a wealth of features, each of which can be very useful on its own. But it shines brightest when we use its full potential. So let’s cut to the chase.

Apiary – how does it work?

At Apiary’s heart is API Blueprint – a high-level description language, which makes designing APIs a breeze. Once our list of requirements is ready, the system will verify our syntax and subsequently launch a mock server, which will respond to our requests. At this point, frontend developers can already start working with it. But our journey with Apiary is only beginning. After all, our API description is a contract that all backend and frontend devs need to adhere to, so we need to commit it to our repository. Apiary’s deep GitHub integration makes it a one-step process. What’s more, thanks to integration with Dredd – a testing framework – backend developers will be able to automatically generate tests. All that’s left to do is write code that will pass them all.

Cool. But in order to understand why all this is so useful, we need to take a step back and answer another question.

Why do we need contracts and mock servers?

Frontend and backend developers can’t always work simultaneously – timing is very important. For example, a frontend dev can’t do their job without mocking endpoints first, unless all the backend work is done. As a result, one task can often span two, three, or even four sprints. If we could somehow make frontend and backend entirely independent from each other, development would be much faster. Let’s see how it can be done without having to create an all-full-stack team.

Once we commit the contract to our repository, we can create our definition of done, since we now know what and where should be achieved. When the contract is verified, Apiary launches a mock server, which means that a frontend dev can start their work. Backend work is easier as well – Apiary can use the contract to generate automated tests for use in Dredd. This framework can help us test the implementation of an API. It’s not quite as sophisticated as Behat, but more than sufficient to generate basic tests for our app to comply with.

And what if a developer breaks the contract? Since we’re using automated tests, we will be informed about it instantly.

Creating an API description with Apiary documentation tool

Making an API description with API Blueprint is quite straightforward. The language itself (Oracle’s own creation) has a low barrier to entry. Bundled with our first project is an example description, which we can freely use. Once you master Apiary, you can try Swagger – a more complex and capable API description language. Unless we need modelling or imports, API Blueprint will suffice.
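The screenshots from this section were lost in this version of the article; a sketch of such a Blueprint (resource name /elements as discussed below, fields assumed) could look like this:

```apib
FORMAT: 1A

# Elements API

## Elements Collection [/elements]

### List All Elements [GET]

+ Response 200 (application/json)

        [
            {
                "id": 1,
                "name": "First element"
            }
        ]
```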

As you can see, the syntax is reminiscent of Markdown. Above, you can see the description of an /elements endpoint, which responds to GET requests. The endpoint will return a collection, which here includes just one object. Apiary will render the endpoint to the right.

What can we find under List All Elements? Could it be that Apiary has something more in store for us?

Indeed. Under this URL you can find an API client, which you can use to test your new endpoint.

As you can see, the mock server returned just what we requested. The frontend dev has all they need now. What about the backend? We’ll soon get to that. But first…

API Inspector

Thanks to the convenient API Inspector, we can see exactly who used our endpoints and what kind of requests were made. This is what the inspector tab looks like:

All elements are clickable. Below, you can see the endpoint test we just made.

This feature is very useful for frontend developers – if their requests do not comply with the contract, they will know that immediately.

OK. At this point, we can rest assured that Apiary will be of use to our frontend friends. What about the backend? There are plenty of utilities for them as well, but the one that stands out the most is…

Dredd – simple API testing framework

Dredd is another handy API solution from Oracle. It’s an API testing tool. Does that mean that it is yet another Behat? Of course not. Had it not provided anything new, it wouldn’t have been made.

When compared to Behat, Dredd does away with a lot of complexity. Its main goal is to extract the definition of endpoints from the Blueprint file and send them over to our (local) backend. Thanks to that, we can quickly ascertain if the data format the endpoints return meets our requirements. How to do that? It’s quite simple. The “tests” tab includes a short tutorial, which guides us through the process of installing an app and creating basic config files:
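The tutorial screenshot is missing here; a minimal dredd.yml along the lines of what `dredd init` generates might look like this (file paths and the server command are assumptions):

```yaml
# dredd.yml – basic Dredd configuration
blueprint: apiary.apib               # the API Blueprint contract
endpoint: 'http://localhost:3000'    # where our local backend runs
server: npm start                    # command Dredd uses to boot the backend
```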

Open your terminal, use the “dredd” command, sit back and let the magic begin.

Now we can be sure that our endpoints return data properly. Keep in mind, though, that Dredd does not test the logic of our app. The backend still requires BDD tests.

Back to Dredd – when the test is over, we are given a URL that takes us to the results page. Each completed test has its own separate tab in Apiary so that we can get back to them:

Now we know that our script has tested all endpoints, allowing us to mark the job as done (compliant with the contract).

We sure saved a lot of effort and, by extension, time and money!

Apiary’s selling feature – full GitHub integration

Apiary is integrated with GitHub by default. We can instantly create a new branch out of our new API contract, and then git-commit it and send to GitHub. The entire process takes just a few clicks to complete.

  1. To begin, click the “Push” icon at the top right.
  2. Enter a commit message.
  3. The changes are moved to the master branch of our repo.

Unfortunately, the master branch is the only one we can choose in the free version. To push changes to other branches, a paid plan is required. And it may be worthwhile to go for it, as it gives us a big advantage – with it, our endpoints can fully become a part of our project. We will be able to use version control to know exactly who modified the contract and when. Business-wise, it’s a massive benefit.

I think that by now you know just how happy Apiary can make both a dev and your client. Whether you plan on using all or some of its features, you should find it very useful. To give you an idea, in one of the projects I did, we used Apiary to create a mock version of an external (paid) API, using documentation provided by the vendor. Once the project was out of development, switching to the real deal proved amazingly easy.

One of my colleagues created a contract that included over 60 endpoints. With that, it was possible to integrate half a year’s worth of backend and frontend work within 5 days. It’s hard to think of a better recommendation for Apiary than this. What’s more, the entry threshold is low enough that you can start your first project as early as today. And backing from a company as big and trustworthy as Oracle means that your project is certainly in good hands.

Node.js tutorial for beginners

Every technology has its own standards. No matter if it’s PHP, JavaScript, Java or Swift, there’s always a set of rules you should follow. This Node.js tutorial lists our best practices at TSH, describing not only the code structure or design patterns we all implement, but also a set of libraries we found worth using.


As software developers, we try to avoid reinventing the wheel. Before we start coding something on our own, we check if it wasn’t done already.

JavaScript (together with NPM) has amassed a tremendous number of publicly available libraries. You now have access to multiple HTTP frameworks, validation libraries, test runners and even loggers.

As someone who works with less experienced developers, I see, time and time again, how they struggle to choose the best-suited tool. Because of that, my senior colleagues and I decided to create a list of tools that have worked out best for us. That’s how this Node.js tutorial was born.


So, what does Node.js do? First of all, there’s a very big difference between a Node.js server and a PHP server. In PHP, every request is isolated, so errors from one request have no impact on the others.

Node.js works differently. If there’s an error in one request, it will have an impact on the whole server.

What’s worse, this impact will probably mean crashing the server and making it unusable.

For that reason, you need a supervisor – a special tool to monitor your process and restart it if necessary. There are multiple such tools available on the market. The most popular ones are forever and supervisor, but in the TSH Node.js team we have only one recommendation: PM2.

PM2 is a Production Runtime and Process Manager for Node.js applications. And, in our opinion, it is exactly what every developer needs.

PM2 has a special, development-only version – PM2-DEV, which makes development easier by restarting your app every time you change something in the code.

PM2 has a built-in load balancer and scaling support (it allows you to run your Node.js app in cluster mode). Plus, it supports deploys (with PM2, you can deploy with a simple command). PM2 also has its own docker images in case you want to containerize your app. Finally, it comes with a fully working app dashboard that allows you to monitor the app state (logs, errors, resource usage, etc.).
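As a sketch of how this looks in practice (the app name and script path are assumptions; the keys follow PM2’s ecosystem-file format, and `pm2 start ecosystem.config.js` picks it up):

```javascript
// ecosystem.config.js – a minimal PM2 configuration
module.exports = {
  apps: [
    {
      name: 'my-api',
      script: './index.js',
      instances: 'max',       // one process per CPU core…
      exec_mode: 'cluster',   // …load-balanced by PM2's built-in cluster mode
      env: {
        NODE_ENV: 'production',
      },
    },
  ],
};
```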

At The Software House, we have only one recommendation: PM2 – a Production Runtime and Process Manager for Node.js applications


The next item on the agenda in our Node.js tutorial is finding the right framework. As we all know, in the JavaScript world, a new framework pops up every week. It’s the same for Node.js. There are so many alternatives that it’s hard to choose the best one.

We have worked with Koa, Loopback, Nest.js, Hapi, Restify, Fastify, Sails.js, Total.js and many, many more, but, in the end, we still decided to go with the good old Express.js.

There are many reasons why we chose Express (for most projects) instead of some other alternative (even rejecting the newer ones).

The first and probably the most important one is the community. Express became so popular that when we talk about Node.js, most people automatically think about Express.

Another advantage is that, unlike Koa, Express is based on the old, familiar callback approach. Plus, it isn’t preconfigured for REST APIs – unlike Restify. And, true, Express doesn’t offer performance as great as Fastify’s, but the framework makes up for it with a number of extensions and the ease of adding new features.

Express is based on middlewares, so if you’re familiar with redux, you’ll feel right at home. Even though, unlike Hapi, the framework doesn’t have a built-in validator, you can use one of many popular middlewares connected with your favourite library (express-validator (validator.js), celebrate (Joi)).

The same goes for logging functionality. You can easily connect express to winston, pino or bunyan.

This Node.js tutorial wouldn’t be complete without listing what our usual middleware stack contains:

  • winston – for logging purposes
  • morgan – for request logging
  • cors – cors handling
  • helmet – for security purposes
  • celebrate – for request validation and error handling

Those are just the most common ones we tend to use.
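A sketch of how this stack could be wired together (the route, options and validation schema are illustrative, and the listed packages must be installed first):

```javascript
const express = require('express');
const morgan = require('morgan');
const cors = require('cors');
const helmet = require('helmet');
const { celebrate, errors, Joi } = require('celebrate');

const app = express();

app.use(helmet());            // sensible security headers
app.use(cors());              // CORS handling
app.use(morgan('combined'));  // request logging
app.use(express.json());

app.get(
  '/users/:id',
  celebrate({ params: Joi.object({ id: Joi.number().required() }) }),
  (req, res) => res.json({ id: req.params.id }),
);

app.use(errors());            // celebrate's error handler

// app.listen(3000);
```

winston is not shown here – it would be configured separately as the application logger, and can also back morgan’s output stream.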

At TSH, we often use Express – it quickly became very popular


With another section of Node.js basics covered, let’s talk about testing. We all write tests, or at least I hope so. In order to do it, you need a couple of different tools: something to execute the tests (a so-called test runner), something to assert the results (an assertion library) and, for more advanced cases, something to mock/stub modules (a mocking library).

When I started working with Node.js, I had to install all of these. Back then, I decided to go with a mocha + chai + sinon combo, but I knew there were other possibilities. For example, my frontend colleagues were using Jasmine (as a test runner + assertion library) and Rewire (as a module mocking library).

Then React was released and everything changed. Facebook gave us a wonderful testing tool that contains everything we need in a single package – Jest. At The Software House, we use Jest for everything – mocking, testing, asserting – on both frontend and backend. And, for now, we see no better alternative available on the market.

Jest is a great testing tool with a variety of functionalities: mocking, testing and asserting, to name a few


Let’s face it. There’s no way to write bug-free software. It sounds like something you might aspire to, but, in the end, it’s simply impossible. No matter how hard you try to cover as many use cases as possible with tests, you’ll still face the unexpected inside your app.

That’s why every Node.js tutorial should have a debugging chapter. The most important thing is to have the tools that will allow you to find a solution.

Node.js has come a long way and made debugging much easier. I remember using console.log to find out what was happening in my code. Then, I switched to node-inspector, which, at some point, was built into the Node core (as the --inspect flag). Still, it was nothing compared to the tools I had in PHP.

How many of you had a problem with inspecting the code executed with pm2, ts-node (especially the 5.0+ version)?

A few weeks ago, I found a new library – NDB. It’s an advanced debugger from Google, the company behind the V8 engine.

Instead of executing a process with the --inspect flag, you can simply run it with the ndb command. For example: ndb npm start. It will open a completely new instance of Chrome meant for debugging only. All core Node files are blacklisted there by default, plus you have access to a terminal, a REPL and much, much more. It works with standard node, ts-node, pm2 and even pm2 executing TypeScript. Which means: TSH-approved!

NDB, from GoogleChromeLabs, is an advanced debugger from Google

Code style

Unlike PHP with its PSR standards, JavaScript doesn’t have general code style rules, so most companies establish their own.

In order to obey all those rules, you should have a tool that will track (and possibly fix) all mistakes.

At TSH, we use a combination of two libraries: eslint or tslint (depending on the language we choose) and Prettier. What’s cool about them is that you can make them work together. Both eslint and tslint are linting tools, meaning they enforce a specific set of rules in your code – for example, disallow var usage. 

We tend to connect these with a git hook library (husky + lint-staged), so we’re sure that every file is properly formatted and checked before it goes to the repository.

At TSH, we use a combination of eslint or tslint and Prettier – a formatting tool which makes sure that the code looks the same across the whole system


In the final section of our Node.js tutorial, we’ll talk about the language itself. Even though we all use JavaScript, it comes in different flavours. Some users have to stick to ES5, while others can go all the way to ES9 or even use Stage 3 proposals with the help of Babel.

Node.js is still a little behind the current ES version, but, with the latest Node 10, you’re finally able to use native ES modules without the need for transpilers. In order to do that, you have to activate an experimental feature and use the special *.mjs extension.
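For example (file names here are assumptions), with two files:

```javascript
// math.mjs
export const add = (a, b) => a + b;
```

```javascript
// index.mjs – run with: node --experimental-modules index.mjs
import { add } from './math.mjs';
console.log(add(2, 3)); // 5
```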

Another thing worth mentioning is static typing, which is getting more and more popular. We’re used to Flow in React components and we’re used to TypeScript in Angular 2+ code, so, naturally, we’re trying to bring it to Node too.

At TSH, we’ve decided to adopt two strategies of project development:

  • if our code is JavaScript-based, then we don’t use transpilers anymore – we either use native ES modules (if possible) or go with CommonJS modules
  • if possible, we use TypeScript for backend code

Why do we feel the need to have two options? We value flexibility.


The Node.js ecosystem is so vast that a young developer might not be able to just dive in and play with all the features. So, instead of trying out all the available tools, it might be easier to follow the best practices of more experienced developers. At TSH, we value our time, so we choose only the tools that allow us to work efficiently without losing flexibility. I hope that this Node.js tutorial has shed some light on how we do things and helped you get familiar with the technology.


PHP-PM guide: Getting started with the process manager

PHP Process Manager is a relatively new way of running PHP applications. In this article, you’ll find out what it is, how to run it and if it’s better than the other currently available server solutions.

What is PHP Process Manager?

According to its creators, PHP Process Manager (PHP-PM/PPM) is “a process manager, supercharger and a load balancer for PHP applications”. The main features of PHP-PM are:

  • a performance boost – up to 15x (compared to PHP-FPM, Symfony applications),
  • integrated load balancer,
  • hot-code reload (when PHP files change),
  • static file serving for easy development procedures,
  • support for HttpKernel (Symfony/Laravel), Drupal (experimental), Zend (experimental).

PHP Process Manager uses ReactPHP and Symfony components. It also requires PHP-CGI and PCNTL – the PHP library for managing processes. In the future, there will probably be support for Swoole (an asynchronous library for PHP written in C). Take a look at the dedicated issue in the PHP-PM repository for more details.

How does PHP-PM work?

First, take a look at the image below.

This is how a standard pipeline of PHP request works

This is a standard pipeline of a PHP request. At the beginning, a request is sent. Some “front” on the server side (nginx, Apache, etc) handles the request and runs the PHP script. PHP does its job, then simply dies. This method of handling PHP requests has been used for years. It has some advantages, like virtually no problems with memory leaks.

Then, there’s the PHP-PM way.

An example of how PHP Process Manager helps handling requests

When the server starts, workers are created – they bootstrap your application. Then, nginx, if used, forwards the request to PPM. Alternatively, PHP-PM itself handles the request and load-balances traffic to the workers. Contrary to the method above, workers don’t die after serving each request. However, they are restarted from time to time, or after they have handled a certain number of requests (the default is 10,000). Restarts are the simplest way to handle PHP memory leaks.

What to use PHP-PM with?

So far, PHP Process Manager looks promising. But what else do you need?

First of all, PHP-PM requires the Request-Response PSR-7 abstraction from your application. There’s an HttpKernel adapter for Symfony and Laravel, which should work reliably. There are also bridges for Zend, CakePHP and Drupal, but, according to the documentation, they’re in the beta phase. For that reason, it wouldn’t be wise to use them in production.

How to run PHP Process Manager?

You can run PHP-PM in many ways, e.g. by using a prepared docker image or building the image yourself. Of course, you can also set it up manually on your machine.

Let’s start with docker. There are already three ready-to-use images for PHP-PM. The first one is nginx as a proxy + PHP-PM, the second one is PHP-PM standalone, and the third one is PHP-PM as a binary. The creators suggest using nginx + PHP-PM as it’s the fastest option.

But you have to remember that if you want to run PPM on a local machine or server, you can do it only on UNIX systems due to the PCNTL restrictions.

If you want to learn how to implement each method, there’s quite a comprehensive description in the documentation.
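For the manual route, the setup could look roughly like this (package names per the php-pm GitHub organization; the exact flags should be checked against the PPM docs):

```shell
# Install PHP-PM with the HttpKernel adapter (Symfony/Laravel)
composer require php-pm/php-pm php-pm/httpkernel-adapter

# Start it with 8 workers, as used in the benchmarks below
./vendor/bin/ppm start --bootstrap=symfony --workers=8
```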

Is PHP Process Manager the best server solution?

Of course, the most important question is whether PHP-PM is the most effective solution. In order to find out, I’ve compared nginx + PHP-PM, nginx + FPM and an Apache server, using the Symfony and Laravel frameworks. In each test, I was using:

  • a 4-core processor,
  • 24 GB of memory,
  • docker containers on Alpine Linux with the same PHP base (7.2); the only difference was the setup of PPM, FPM and Apache,
  • tests run mostly at 1-minute intervals,
  • siege for testing (with the benchmark flag),
  • PHP Process Manager with 8 workers and debug mode disabled,
  • Symfony running in the production environment,
  • Laravel running without the debug mode.

I was increasing concurrency to simulate more users. I focused on the speed and stability of all three server solutions.


The first thing worth noticing is that PHP-PM consumed a lot more memory on standby (83MB) than FPM (13MB) and Apache (8MB).

During the first test, I was using Symfony 4.1. I created a simple Symfony skeleton app. Let’s take a look at its performance:

PHP Process Manager is really fast and a diagram shows its great performance

As you can see, PHP Process Manager is over 5 times faster, as long as it handles a single request at a time. When the load increases, it loses some of its advantage. That being said, PHP-PM is still over 2 times faster than the other two solutions.

The second test was more challenging. I added an API platform framework to the skeleton from the previous test and created a simple entity. My endpoint was returning 100 serialized objects from the database. Let’s take a look at the chart:

PHP Process Manager is faster than Apache and FPM

PHP Process Manager is still faster than FPM and Apache.

After that, I tested a Laravel application. Same as with the Symfony test, I built just a simple skeleton app in version 5.6 – a Hello World app in Laravel:

In Laravel application, results of PHP-PM, FPM and Apache were rather random

As you can see, the results are rather random. This probably happened because not enough time was spent to bootstrap the framework (read cache, etc.).

The next test was run on the Laravel example in version 5.5. Here’s how it turned out:

In some cases, PHP Process Manager may be a way faster than Apache or FPM

Once again, the results look random. PHP Process Manager works faster with 100 concurrent requests, but, in other cases, there’s not much difference.

In the end, I wanted to test the stability of PHP-PM. So, I decided to run a 10-hour test on the simplest endpoint of a Symfony application. Here’s the result:

Apache beating FPM may be considered surprising

PHP-PM is still the fastest option and it isn’t a surprise. What surprised me though was old Apache beating FPM. Another thing I should mention is that PHP-PM returned 3 bad requests (its rivals didn’t have bad requests at all).


These are the conclusions I drew after I ran all the tests:

  • in almost all cases, all servers remained stable; only in the 10-hour run did PHP Process Manager lose three requests,
  • in the other tests I ran, PHP-PM lost even more requests,
  • PHP-PM can cause big memory leaks (there’s probably some issue with restarting workers); it consumed all of my memory and crashed my PC – twice,
  • I never got the promised 15x better performance (+/- 5x was all I got),
  • there were times when PHP-PM “lost” some requests; on the whole, there were just a couple, but I should emphasize that Apache and FPM didn’t lose a single one.

In conclusion, it can’t be denied that PHP Process Manager has huge potential. In the future, we may get Swoole support, which should provide more stability and speed. But the big question is whether PHP-PM is ready for production right now. I think I’ll let you answer that one yourself.

Creating an API with Apiary: How to boost the collaboration in your team

Being a backend developer in a software development team isn’t a piece of cake. Frontend and mobile developers often cannot proceed with their own job until backend devs finish building an API. Therefore, the whole process gets longer and the client gets mad. But what if we told you that there’s a platform, Apiary, which can boost the design, development and documentation of APIs in your team?

Maciej Mączko and Andrzej Wysoczański – backend developer and frontend developer with years of experience – will show you the magic of Oracle Apiary and some good practices for using it in your projects. During the webinar you’ll learn:

  • What are the most common problems with creating an API and presenting it to other developers
  • How can you tackle these problems using non-technical solutions and why they don’t always work
  • Why we believe that Apiary is one of the most useful tools to boost the whole process of building an API
  • How to quickly create your first API contract with Apiary and how to benefit from the “instant mocking” and “auto test generation” features
  • What are the most common use cases of Apiary and when it’s best to stick to other solutions

During the webinar, we’ll also answer additional questions. Register for free and boost the collaboration in your team!

Available dates

  • November 7th 2018, 10:00 (CET)
  • November 14th 2018, 14:00 (CET)

Each session will last about 1 hour and 30 minutes.

Free registration

Practical guide to API Platform: How to tell if it’s the right framework for you

Is it possible to create a useful API in the course of a day or two? That’s exactly what the authors of the new API Platform framework promise us. However, you probably wonder whether it’s really worthwhile to spend time getting to know yet another tool. Well, it depends.

One of the most important challenges in software development is to “not reinvent the wheel”. That’s why, from the very beginning of coding history, developers have tried to create solutions that speed up their work by automating the most repetitive processes. However, selecting tools that will really meet your requirements is almost always a painful procedure.

Among the most promising frameworks/tools that have shown up in the PHP world recently is API Platform. In my work, I often consult on solutions with my clients – who are usually CTOs like myself. And now, more and more often, I get questions about API Platform. Is it really a useful framework or just some temporary hype? We’ve used API Platform at The Software House in a dozen projects now, so I feel I have enough experience to share some of our thoughts with you.

OK, what’s that?

API Platform was created in 2015 by a Symfony expert, Kévin Dunglas. It was hailed as one of the most important announcements at SymfonyCon in Paris, alongside two other projects – and among those three, I think API Platform was the most successful one. While the first version did have some annoying mechanisms, the framework was greatly enhanced in version 2. But let’s start from the beginning: what does it actually do?

In short, it’s a tool based on the PHP Symfony framework which helps you speed up building the backend part of your project.

Especially at the beginning of the development process (but sometimes at later stages as well), we have to write the same code over and over again with little or no change – for example, when implementing a paging mechanism or an authorisation system. It doesn’t matter if you’re creating an app for a bookshop or for a car rental company – there will surely be a list of items, and a place where you can add an item, edit it and, of course, delete it.

Mechanisms which speed up the development of those parts are called CRUD generators or boilerplate, and there are lots of them on the market. You’ve probably already used Admin Generator for Symfony 1 or scaffolding for Ruby on Rails. What’s new about API Platform is that times have changed and we’re now facing a completely different approach to architecture. All the previous solutions created monolithic code in which both the backend and the frontend lived on the server side. Now, we tend to implement a more service-oriented architecture, with a backend API that serves the data and a number of clients consuming it – for example, frontend SPA applications or mobile apps. API Platform focuses only on the API, leaving the presentation layer to others.
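To give you a feel for this approach, here’s a minimal sketch (the `Book` entity and its fields are my own hypothetical example) of how a resource is exposed in API Platform 2 – a single annotation on a Doctrine entity is enough to get full CRUD endpoints:

```php
<?php
// src/Entity/Book.php – a hypothetical entity; the @ApiResource annotation
// alone tells API Platform 2 to expose CRUD endpoints for it.

namespace App\Entity;

use ApiPlatform\Core\Annotation\ApiResource;
use Doctrine\ORM\Mapping as ORM;

/**
 * @ApiResource
 * @ORM\Entity
 */
class Book
{
    /**
     * @ORM\Id
     * @ORM\GeneratedValue
     * @ORM\Column(type="integer")
     */
    private $id;

    /** @ORM\Column(type="string") */
    public $title;

    public function getId(): ?int
    {
        return $this->id;
    }
}
```

With that in place, API Platform generates `GET`/`POST` on `/books` and `GET`/`PUT`/`DELETE` on `/books/{id}`, plus documentation for them – no controller code written by hand.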

What exactly does API Platform give us from the beginning?

Here’s the list of my favourites:

  • A ready-to-use API based on REST with a variety of formats to choose from – JSON, XML, CSV, whatever you need. You can also change the format with one simple alteration in a configuration file. All endpoints support the POST, PUT, GET and DELETE methods.
  • The API is built securely, following the best programming practices, on top of one of the most popular PHP frameworks today – Symfony.
  • It’s already optimised and shipped with a caching mechanism (Varnish).
  • There’s automatically generated documentation (we all know how much developers hate writing any docs, let alone maintaining them in the long term). Plus a fancy mini-console to run simple queries and play around with the examples.
  • You don’t have to worry about things like filters, paging mechanisms or error messages. This stuff can be enabled with just a few lines of code.

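As a sketch of that last point (again using a hypothetical `Book` resource), enabling a partial-match filter and tuning pagination in API Platform 2 takes only a couple of annotations:

```php
<?php
// Hypothetical example: a search filter and per-page limit declared
// entirely through annotations – no controller or query code needed.

use ApiPlatform\Core\Annotation\ApiResource;
use ApiPlatform\Core\Annotation\ApiFilter;
use ApiPlatform\Core\Bridge\Doctrine\Orm\Filter\SearchFilter;

/**
 * @ApiResource(attributes={"pagination_items_per_page"=20})
 * @ApiFilter(SearchFilter::class, properties={"title": "partial"})
 */
class Book
{
    // ... fields as before ...
}
```

After this, a request like `GET /books?title=symfony&page=2` is filtered and paginated automatically.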
If I were to estimate building from scratch an API which, for example, handles about 3 database entities (let’s say we have a school with classes, teachers and students and need to build a really simple CRM) with some authorisation, filters, paging and documentation, I’d say it would take about 2-3 weeks to implement. With API Platform, we can prepare it in one day.

OK, I know what you’re thinking at this moment:

RAD is only good at the beginning, as when you go deeper, there are always problems. Isn’t it the same here?

Yes – that’s also true in this case. After the basic setup, things tend to be a little more complicated than in the tutorial on API Platform’s website (which, by the way, is a very good tutorial). After you create the database as you like and configure all the relations, you’ll probably need some custom fields or some custom business logic in your API. You start looking for the code and it turns out that there isn’t any! You have to do everything via special filters or by creating customised controllers. And both solutions require a pretty good knowledge of Symfony – the framework on which everything is built here (I’ll get back to that later).

Updated documentation of API Platform is now clear and useful
API Platform’s documentation used to be one of its biggest flaws but now it’s updated and very useful

Is this for basic projects or rather for advanced ones?

Actually, it’s for both – the main problem is the gap in between. If you’re a beginner and just want to create a very simple API, you can certainly use this tool, but please don’t expect that you’ll be able to customise it exactly as you wish. I’ve also seen frontend developers who wanted to focus on the frontend side of their app, using one of the modern SPA frameworks (like Angular or React), but still needed a simple API to deliver some backend data – API Platform worked very well for them.

The same goes for mobile apps. I know that there are now similar solutions that simply deliver a backend service in the cloud, like Firebase, but for me it’s still easier and faster to run API Platform on my own server than to configure it all in some cloud panel. It also gives me more flexibility. If you just follow the tutorial and change the sample database names to your own, you should really get your app within one day tops. It’s also available as a Docker container, but my advice is not to use it if you haven’t worked with Docker before – you’ll just get frustrated at the beginning and lose lots of precious time and nerves. Installing it via Composer (the second installation method in the tutorial) is, in my opinion, far more intuitive.

But what if I want to create a more complicated app?

Yes, you can create almost everything you want. But only under one condition – before you begin, you really have to know Symfony first. This is both the biggest advantage and the biggest flaw of this solution – it all stands on top of another framework. Furthermore, as I’ve mentioned above, API Platform doesn’t generate CRUD code that you can just modify as you wish. I’ve used such code-generating solutions and they were very good at the beginning (you can just take some code and modify it easily without reading any further instructions), but there were certain issues with them:

  • If you’re not an experienced developer, it soon becomes very messy – sometimes even despite following very good programming practices.
  • After modifying any part of the pre-generated code, there’s no turning back. You cannot generate it again after database changes. There are some solutions which allow that, but you need to cope with pretty confusing overriding mechanisms.

API Platform doesn’t generate any code – it creates the content “on the fly”. It’s not really a new idea, as I remember similar tools being available years ago (even API Platform’s predecessor, the Admin Panel from Symfony 1), but the problem with those older mechanisms was that you usually had very little flexibility in what you could create – you were limited to what the author allowed you to do. Here things are a little different, as API Platform is built following the “Symfony way” of how things should work, meaning it’s based on event-driven mechanisms.

How deeply should I know Symfony before using API Platform?

As I wrote above, if you really need to do advanced things in API Platform, you need to know Symfony first. If you’ve created a couple of Symfony-based apps before and you love it, API Platform will be a perfect enhancement for you. If not, you should first focus on understanding how Symfony works. Here are the three most important things to learn:

  • EventStack – That’s the first and most important part of Symfony you really need to get to know; it’s covered in the Symfony documentation. The only way you can modify anything within the API is to hook into one of the events. To do it properly and avoid situations where you’re asking yourself “why isn’t this @#$ doing what it’s supposed to do?”, you really need to know how events work in Symfony and which events API Platform uses.
  • Serializer – All the data transferred via the API goes through the serialisation component built into Symfony, also described in the Symfony documentation. If you’re wondering why a given field or structure isn’t displayed in the GET method, the issue probably lies in the serialisation.
  • Dependency Injection – You may have met this pattern in other modern frameworks. The general idea is that components should be independent of each other and only bonded together at the final stage by “injecting one into another” – it improves the maintainability of the code and is a must-have in modern apps. Symfony’s documentation describes how the framework handles this. As you might suspect, API Platform also injects its code into other services and can benefit from services that you wish to inject.
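To make the event-hooking point concrete, here’s a sketch of a Symfony event subscriber (class and entity names are my own, illustrative, choices) that runs custom logic just before API Platform persists a newly created resource, using the priority constants API Platform 2 ships with:

```php
<?php
// Hypothetical subscriber: hooks into kernel.view at PRE_WRITE priority,
// i.e. after deserialisation and validation but before the entity is saved.

use ApiPlatform\Core\EventListener\EventPriorities;
use App\Entity\Book;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpKernel\Event\GetResponseForControllerResultEvent;
use Symfony\Component\HttpKernel\KernelEvents;

final class BookSubscriber implements EventSubscriberInterface
{
    public static function getSubscribedEvents()
    {
        return [
            KernelEvents::VIEW => ['onPreWrite', EventPriorities::PRE_WRITE],
        ];
    }

    public function onPreWrite(GetResponseForControllerResultEvent $event)
    {
        $book = $event->getControllerResult();
        $method = $event->getRequest()->getMethod();

        // Only react to the creation of our resource.
        if (!$book instanceof Book || Request::METHOD_POST !== $method) {
            return;
        }

        // Custom business logic goes here, e.g. normalising input.
        $book->title = trim($book->title);
    }
}
```

Registered as a service (with autoconfiguration this happens automatically), this is the idiomatic way to add business logic without touching any generated endpoint.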

There’s also one more thing that some PHP developers might not be familiar with – annotations. Annotations first became popular in Java and soon migrated to other platforms, including PHP. The basic concept is that you define how a class or method should behave by describing it in a properly formatted comment directly above it. Some frameworks use annotations so heavily that, in many cases, the comment above a method contains more data and instructions than the code itself – API Platform is definitely one of them. I call this approach “annotation programming”. It can be very useful and descriptive, but if you haven’t been familiar with such things before, you’ll need to get used to it now. Of course, you can still use XML or YAML, but I strongly recommend getting to know annotations, as it seems they’re the future of programming.

This is a pretty big threshold to cross. Is it worth it?

If you know Symfony very well, then with API Platform you have a very powerful tool in hand. Creating APIs will be much simpler and more cost-effective, all built on the foundation of a very solid framework, which is Symfony. But what if you’re not so familiar with the solutions I’ve mentioned above? Learning all of this requires a lot of time and practice. In return, you receive the ability to speed up the creation of the most common elements of an API.

However, in my opinion, it’s worth it. You’ll have to learn most of those components anyway if you’re planning to create a long-term project – if not in this framework, then they’ll still appear in others. Although at first this solution might be very hard to implement (you might be frustrated that things aren’t working as you expected), there’s a good reason why it works this way: it prevents developers from making a mess of the code, which was one of the biggest problems in PHP some time ago.

Beloved framework of the API Platform pet spider is Symfony
API Platform’s pet spider showing its love for other frameworks – especially Symfony

It seems like you’re defending API Platform. Does it have any flaws?

Like all frameworks, API Platform has some annoying issues that you just have to cope with. For me, the most important one is that you often cannot directly find help on the internet for the issue you’re dealing with. If you just type “API Platform <the problem you have>” into Google, you probably won’t find any solution and might be under the impression that the community is very small.

That’s because 90% of questions regarding API Platform are in fact questions about Symfony itself.

Those questions are answered by the Symfony community – which, luckily for us, is a pretty big one. Therefore, in order to find your answer, you should search Symfony forums rather than the API Platform ones. This, of course, means that you have to ask the right questions and (again) know Symfony quite well. You can also get help by asking on GitHub – very often from Kévin Dunglas, the creator of API Platform, himself.

The second thing is the naming convention. Like any other CRUD solution, API Platform has to generate some class and method names on its own. Usually, those names reflect the names used in the database, but you still have to guess how a particular method or filter has been named. For example, when working with API Platform at The Software House, we had a table named “Language Spoken” and got a method for retrieving all the languages that users speak. There was a lot of confusion about the auto-generated name: would it be “getLanguagesSpoken” or “getLanguageSpokens” or maybe something else? We finally used Symfony’s debugger tool (which I suggest you do as well) to inspect the call stack and find out what the generated name actually was.

Like other CRUD solutions I’ve seen on the market, API Platform also has issues with virtual fields. The basic concept of such automated tools is that there is a database which is the main source of all the data and structure. However, sometimes we need to return values that aren’t stored directly in the database but are calculated from some business logic (or, for example, fetched from a 3rd-party source). In these cases, you can still add such fields to the API, but they won’t be 100% supported and you can expect some issues. For example, sorting is by default executed as a database query, so you cannot easily add sorting on a virtual field (even if you implement functions providing sorting rules).
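A virtual field of this kind can be sketched as a plain getter exposed through the serializer (the entity, fields and serialization group name below are my own hypothetical choices). Because the value only exists in PHP, the database-level sorting and filtering mentioned above won’t apply to it:

```php
<?php
// Hypothetical example of a virtual field: "gross" is never stored in the
// database – it’s computed from stored fields at serialisation time.

use Symfony\Component\Serializer\Annotation\Groups;

class Invoice
{
    /** Stored in the database. */
    private $net;

    /** Stored in the database. */
    private $taxRate;

    /**
     * A virtual field: exposed in API responses via the serialization
     * group, but invisible to database-backed sorting and filtering.
     *
     * @Groups({"invoice:read"})
     */
    public function getGross(): float
    {
        return $this->net * (1 + $this->taxRate);
    }
}
```

The getter shows up in GET responses like any other field, which is exactly why the partial support can be surprising when you later try to sort or filter by it.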

It’s also worth mentioning that although you can use any output provided by API Platform in your other Symfony bundles, it’s very complicated to make API Platform use other bundles and build an API out of them. For example, if you’re using SonataAdminBundle or Sylius (which are very popular solutions in Symfony), you cannot count on API Platform to create an API from the code provided there.


I grew up in the gaming community, where there’s a saying that good games are “easy to learn and hard to master” – I think this sentence also describes API Platform. If you just want to quickly create a very simple API, this tool will suit you very well. If you want something more, it’s also possible but, unless you’re a Symfony expert, you’ll need some time and practice to achieve it. Although I always approach fresh projects with caution (API Platform is just 2 years old now, which for me is a very short time), I think this framework is a very promising solution.

By releasing the second version about half a year ago, the core team really showed that they were able to fix their own mistakes and were listening to the community. Most of the annoying problems from the first version were solved – for example, it’s now possible to use subresources from a parent resource instead of creating two separate API endpoints. It’s a sign that the framework is heading in a good direction.

To be honest, I also wanted to write something about gaps in the documentation. We were even planning to release an internal supplementary document explaining the differences between the documentation on their site and the actual behaviour of the code. But, in the meantime, the documentation has been updated – which also shows that the framework is still evolving and we can expect even better versions in the future.