Developing Our Custom Documentation Portal: How We Started
Previously, you may have read about our Customer Education Team’s vision of a new documentation portal. Having prepared all the necessary materials, they approached us, a team of a few developers, with their idea. This series of blog posts describes how we built our new documentation portal from the developers’ perspective.
Juraj Bielik
Published on Feb 4, 2020
I’m going to share with you our whole journey, all the way to the final release of the portal, along with all the major decisions we had to make and all the obstacles we ran into along the way. Buckle up, you’re in for an interesting ride.
For starters, let’s recap what we knew when the development process began:
- There would be two sources of data—GitHub for code samples and Kentico Kontent for everything else.
- We had a rough idea of what the content models in Kentico Kontent would look like.
- The project would be split into three major phases with increasing complexity.
Additionally, we learned that an external developer, a JavaScript specialist, would take care of the website part of the portal, so we ended up being responsible only for the back end. Having set out the release plan and all the requirements, we could begin specifying what had to be done before the first release. First, we had to decide on our technology stack and, based on that, prepare the infrastructure for the whole project. Only after that could we actually start developing the solution itself.
Our Path to Microservices
One could think that since we were already using Kentico Kontent as the source of our data, the job would be almost done. However, considering the requirements of the whole project, which included integrating our back end with a search engine, a GitHub repository, Redoc, and so on, this task proved to be quite a challenge. We really wanted to get everything right from the start, so we began by figuring out the architecture of the website’s back end.
Kentico Kontent provides developers with webhook notifications that fire every time the content in the project changes. This seemed perfect for us—we definitely wanted to react to the publishing of a new article, or to any change in an already published one. Therefore, our goal was to make the back end reactive—it would process all the work done in Kentico Kontent by our documentation writers. Of course, we wanted to design a robust infrastructure while also reducing maintenance costs as much as possible.
Because we had a lot of experience with Microsoft Azure, we decided that we would be hosting our back-end services there. Moreover, with all of us being aspiring JS/.NET full-stack developers, the choice of programming language had effectively been reduced to two options: we would be using either JavaScript or C#, depending on which would prove more useful for our use case.
During our extensive investigation, we encountered Microsoft Azure Functions, which seemed like a perfect fit for us—just look at all the advantages (and the small sketch that follows the list):
- They provide serverless event-driven code execution, which fitted our needs perfectly—we needed to react to webhooks from either Kentico Kontent or GitHub.
- They are offered with a consumption plan—we would be paying for computational resources only while the functions were running.
- The functions scale on demand based on the number of executions—no need to scale up manually.
- They could be written in both JavaScript and C#—great!
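Just to illustrate how little boilerplate is involved, here is a minimal sketch of a webhook-triggered function in JavaScript. It assumes the classic programming model with a function.json HTTP trigger binding, and the payload handling is purely illustrative: it does not reflect the exact shape of Kentico Kontent’s webhook notifications.

```javascript
// index.js: a minimal sketch of an HTTP-triggered Azure Function (JavaScript).
// A function.json file with an HTTP trigger and a "res" output binding is assumed.
module.exports = async function (context, req) {
    // The request body carries the webhook notification.
    // The "message"/"data" structure below is an assumption for this sketch,
    // not the exact Kentico Kontent webhook contract.
    const notification = req.body || {};
    const operation = notification.message && notification.message.operation;

    if (operation === 'publish') {
        // Here the real service would fetch the affected items from the
        // Delivery API and update whatever depends on them (search index, cache, ...).
        context.log('Content published:', JSON.stringify(notification.data));
    }

    // Acknowledge the webhook quickly so the sender does not keep retrying.
    context.res = { status: 200 };
};
```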
When we saw all the goodies that Azure Functions provide, the decision to build our back end on microservices was a no-brainer. Being able to scale autonomously, deliver value faster, and have isolated points of failure sounded amazing. Also, making the whole project open source gave us even more benefits—we could show our customers a complex use case with a non-trivial project that consumes data from Kentico Kontent. One of our other goals was to reduce the maintenance costs of the portal to a minimum, and for this, Microsoft Azure with its generous subscription plans seemed like a good choice.
Setting Up the Infrastructure
However, microservices don’t offer only positives. This approach also comes with some drawbacks, which essentially boil down to the problems of dealing with a distributed system, for example:
- Having many separate services instead of a single monolith requires increased effort for operations, deployment, and monitoring, as each service acts as a separate unit.
- Even though small single-responsibility services are simple in general, the whole project gets more complex—one has to properly design how each service will communicate with the others. Careful planning ahead is necessary.
- Performance is less predictable: since the services communicate over the network, the whole project slows down due to network latency, message processing, etc.
- The system is harder to test and monitor as a whole due to a more complex architecture.
In order to minimize these drawbacks, we decided to do the following:
- Automate our whole deployment process.
- Add static code analysis.
- Monitor each service within Microsoft Azure.
- Implement integration tests that will cover the basic scenarios for the whole system.
- Thoroughly document every single deployed service, including diagrams of the architecture.
We chose Travis CI as the tool for automating our deployment process. Since it’s a continuous integration service used to build and test software projects hosted on GitHub, it perfectly satisfied our needs. All we needed were some deployment scripts and .travis.yml files. When it came to our development workflow, we decided to go with the Gitflow Workflow, which consists of the following:
- A master branch with the production code
- A develop branch that contains code used for testing before pushing to production
- Feature branches on which the development of new features happens (they would then get merged to develop when ready)
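Putting the two together, a branch-conditional deployment in .travis.yml could look roughly like the sketch below. Treat it as a simplified illustration rather than our actual configuration: the deploy.sh script and the Node.js version are placeholders, and the real files differed from service to service.

```yaml
# .travis.yml: a simplified sketch of branch-based deployment.
# The deploy.sh script is a hypothetical placeholder for our deployment scripts.
language: node_js
node_js:
  - "10"

script:
  - npm test

deploy:
  # Successful builds of develop go to the Develop environment...
  - provider: script
    script: ./deploy.sh develop
    on:
      branch: develop
  # ...and successful builds of master go to the production (Master) environment.
  - provider: script
    script: ./deploy.sh master
    on:
      branch: master
```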
Thanks to our Travis deployment scripts, our services would get deployed to Azure automatically after each successful push to the develop or master branch—this left us with two different environments: Master and Develop. However, we also wanted to support previewing unpublished content on the website, so we ended up with four environments:
- Live master - master branches of all services + live (published) Kentico Kontent project data - actual documentation website
- Preview master - the master branch of the website - only the documentation website populated with unpublished content
- Live develop - develop branches of all services + live (published) Kentico Kontent project data - environment for testing
- Preview develop - the develop branch of the website - only the testing website populated with unpublished content
Finally, in order to support our integration tests, we added a fifth environment that was similar to the live develop environment, but it was connected to a different project in Kentico Kontent.
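To give you a flavor of what those tests look like, here is a minimal smoke-test sketch in plain Node.js. The URL and the expected content are hypothetical placeholders; the real tests run against the dedicated test environment and its Kentico Kontent project.

```javascript
// smoke-test.js: a minimal sketch of an integration (smoke) test.
// The URL and the expected text are hypothetical placeholders.
const https = require('https');
const assert = require('assert');

// Small helper that fetches a URL and resolves with the status code and body.
function get(url) {
    return new Promise((resolve, reject) => {
        https.get(url, (res) => {
            let body = '';
            res.on('data', (chunk) => (body += chunk));
            res.on('end', () => resolve({ status: res.statusCode, body }));
        }).on('error', reject);
    });
}

(async () => {
    // The test project contains a known article; once the webhook chain has run,
    // it should be reachable through the test website.
    const response = await get('https://docs-test.example.com/tutorials/sample-article');

    assert.strictEqual(response.status, 200, 'the published article should be reachable');
    assert.ok(response.body.includes('Sample article'), 'the article content should be rendered');

    console.log('Smoke test passed.');
})();
```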
Regarding the static analysis, we decided to go with Codebeat for services written in JavaScript and SonarCloud for the ones written in C#. The free plans provided by these tools were sufficient for us, which, again, kept the maintenance costs of our new documentation portal to a minimum. However, in order to have them work separately for each service, we needed to split our services on GitHub as well—each one ended up with its own repository. You can see all the repositories in the Kentico Customer Education organization on GitHub.
Monitoring the health and performance of our services turned out to be quite tricky. Having deployed everything on Azure, we had to set up Application Insights properly. At least that’s what we thought initially. Further blog posts will delve deeper into this issue, but for a start, we were satisfied with a few alert rules that monitored:
- Average response time of the website (both live and preview master environments)
- Exceptions on all back-end services
Additionally, we wanted to get notified as soon as possible when something broke down, without the need to check Application Insights on Azure all the time. Therefore, we set up some Logic Apps that would inform us through a Microsoft Teams channel (our internal communication tool) whenever a new alert occurred. This approach had a nice benefit—after the release of the first phase of the project, the Customer Education department could easily be notified by the Logic Apps whenever there was something wrong with the live website, while we developers kept the notifications regarding the development environment to ourselves.
As for the integration tests, documenting the services, and designing diagrams, those became a continuous process carried out on a regular basis (imagine one day a week spent on tasks like that). All this extra effort helped us tremendously at the end of the project—we were able to onboard new developers and bring current maintainers up to speed faster. As you will see later on, the complexity of even a relatively simple project based on microservices can become quite overwhelming.
Conclusion
All in all, spending some time preparing the infrastructure proved to be a wise step for us. Most of the time, we did not have to think about the deployment process; a simple push to a branch did it for us. The static analysis helped as a pre-review tool, which, again, sped up our development process.
If you are on the verge of starting a new project that will not only take a significant amount of time to implement but will also require continuous maintenance afterward, it is certainly not a bad idea to consider the points mentioned above. Not only will it reduce the volume of repetitive tasks that the development team has to go through, but it will also help the project and all the people involved in the long term.
What’s Next?
There is definitely much more to come—we will look at some obstacles that we encountered on our way to the release of each phase of the project. You will also see how we managed to integrate Kentico Kontent with a GitHub repository as the storage for code samples, how we implemented our semi-custom API reference, and how we dealt with the search functionality. To sum up, if you want to know how we ended up with the back-end architecture shown in the picture below, stay tuned! :)