Ever since Cloudera and Hortonworks merged, the choice of a commercial Hadoop distribution for on-prem workloads essentially boils down to CDP Private Cloud. CDP can be seen as the “best of both worlds” between CDH and HDP. With HDP 3.1’s End of Support (EOS) coming in December 2021, Cloudera’s clients are “forced” to migrate to CDP.
But what about clients that are not able to upgrade regularly to keep pace with EOS dates? Other clients are not interested in the cloud features highlighted by Cloudera and just want to keep running their “legacy” Hadoop workloads. Hortonworks’ HDP used to be freely downloadable, and some companies are still interested in having a Big Data distribution without support for non-business-critical workloads.
The work on “Trunk Data Platform” (TDP) has been initiated through talks between EDF and the French Ministry for the Economy and Finance regarding the status of their enterprise Big Data platforms.
Trunk Data Platform
The core idea of Trunk Data Platform (TDP) is to have a secure, robust base of well-known Apache projects of the Hadoop ecosystem. These projects should cover most of the Big Data use cases: distributed filesystem and computing resources as well as SQL and NoSQL abstractions to query the data.
The following table summarizes the components of TDP:
Note: The versions of the components have been chosen to ensure inter-compatibility. They are broadly aligned with the versions shipped in HDP 3.1.5, the latest HDP release.
Our repositories are mainly forks of specific tags or branches as mentioned in the above table. There is no deviation from the Apache codebase except for the version naming and some backports of patches. Should we contribute meaningful code to any of the components that would benefit the community, we will go through the process to submit these contributions to the Apache code base of each project.
Another core concept of TDP is to master everything from building to deploying the components. Let’s see the implications.
Building TDP boils down to building the underlying Apache projects from source code with some slight modifications.
Most of the components of TDP have some dependencies on other components. For example, here is an excerpt of TDP Hive’s pom.xml file:
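As a sketch of the idea (the property names and version strings below are assumptions, not the actual TDP values), each project’s pom.xml pins the TDP-built versions of the components it depends on:

```xml
<!-- Illustrative sketch only: the version strings are assumptions,
     not actual TDP release numbers -->
<properties>
  <hadoop.version>3.1.1-TDP-0.1.0-SNAPSHOT</hadoop.version>
  <tez.version>0.9.1-TDP-0.1.0-SNAPSHOT</tez.version>
  <zookeeper.version>3.4.6-TDP-0.1.0-SNAPSHOT</zookeeper.version>
</properties>
```

Pinning these versions is what guarantees that, say, Hive is compiled and tested against the exact Hadoop artifacts that TDP itself ships.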
We created a tdp directory in every repository of the TDP projects (example here for Hadoop) in which we provide the commands used to build, test (covered in the next section) and package.
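To give a flavor of what such a build command looks like (the exact profiles and flags used by TDP are documented in each repository’s tdp/README.md; the invocation below is a typical Hadoop example, not the authoritative TDP command):

```shell
# Illustrative Maven build of the Hadoop sources:
#   -Pdist       build the binary distribution
#   -Dtar        package it as a .tar.gz archive
#   -DskipTests  tests are run in a dedicated stage (see next section)
mvn clean install -Pdist -Dtar -DskipTests -Dmaven.javadoc.skip=true
```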
Note: Make sure to check our previous articles “Build your open source Big Data distribution with Hadoop, HBase, Spark, Hive & Zeppelin” and “Installing Hadoop from source: build, patch and run” if you want to have more details on the process of building inter-dependent Apache projects of the Hadoop ecosystem.
Testing is a critical part of the process of releasing TDP. Because we are packaging our own releases for each project in an interdependent fashion, we need to make sure that these releases are compatible with each other. This can be achieved by running unit tests and integration tests.
As most of our projects are written in Java, we chose Jenkins to automate the building and testing of the TDP distribution. Jenkins’ JUnit plugin provides comprehensive reporting of the tests we run on each project after compiling the code.
Here is an example output for the test report of Apache Hadoop:
Just like the builds, we also committed the TDP testing commands and flags in each of the repositories’ tdp/README.md files.
Note: Some high-level information about our Kubernetes-based building/testing environment can be found here on our repository.
After the building phase we just described, we are left with .tar.gz files of the components of our Hadoop distribution. These archives package binaries, compiled JARs and configuration files. Where do we go from here?
To stay consistent with our philosophy of controlling the whole stack, we decided to write our own Ansible collection. It comes with roles and playbooks to manage the deployment and configuration of the TDP stack.
The tdp-collection is designed to deploy all the components with security (Kerberos authentication and TLS) and high availability by default (when possible).
Here is an excerpt of the “hdfs_nn” subtask of the Hadoop role which deploys the Hadoop HDFS Namenode:
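The sketch below conveys the shape of such a task (the task names, variables and paths are assumptions for illustration, not the actual tdp-collection code):

```yaml
# Illustrative sketch only: variable names and module arguments are
# assumptions, not the actual tdp-collection tasks
- name: Create NameNode data directory
  file:
    path: "{{ hdfs_namenode_dir }}"
    state: directory
    owner: hdfs
    group: hadoop
    mode: "0700"

- name: Template hdfs-site.xml
  template:
    src: hdfs-site.xml.j2
    dest: "{{ hadoop_conf_dir }}/hdfs-site.xml"
  notify: restart hdfs namenode

- name: Start and enable the NameNode service
  service:
    name: hadoop-hdfs-namenode
    state: started
    enabled: true
```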
The Ansible playbooks can be run manually or through the TDP Lib which is a Python CLI we developed for TDP. Using it provides multiple advantages:
The lib uses a generated DAG based on the dependencies between the components to deploy everything in the correct order;
All the deployment logs are saved in a database;
The lib also manages the configuration versioning of the components.
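To illustrate the DAG-based ordering, a topological sort of the component dependency graph yields a deployment order in which every component comes after its dependencies. The graph below is a simplified assumption for the example, not the one actually encoded in TDP Lib:

```python
from graphlib import TopologicalSorter

# Simplified, assumed dependency graph: each component maps to the
# components it depends on (illustrative, not the actual TDP Lib DAG).
dependencies = {
    "zookeeper": [],
    "hadoop": ["zookeeper"],
    "tez": ["hadoop"],
    "hive": ["tez"],
    "hbase": ["hadoop", "zookeeper"],
    "spark": ["hadoop", "hive"],
}

# static_order() returns a valid deployment order: every component
# appears after all of its dependencies.
deploy_order = list(TopologicalSorter(dependencies).static_order())
print(deploy_order)
```

With a stdlib `TopologicalSorter`, a cycle in the declared dependencies raises an error at deployment-planning time instead of failing midway through a rollout.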
What about Apache Ambari?
Apache Ambari is an open-source Hadoop cluster management UI. It was maintained by Hortonworks and has been discontinued in favor of Cloudera Manager, which is not open source. Although Ambari is an open-source Apache project, it was strongly tied to HDP and only capable of managing Hortonworks Data Platform (HDP) clusters. HDP was distributed as RPM packages, and the process used by Hortonworks to build those RPMs (i.e., the underlying spec files) was never open-sourced.
We assessed that the technical debt of maintaining Ambari for the sake of TDP outweighed the benefits, and decided to start from scratch and automate the deployment of our Hadoop distribution using the industry standard for IT automation: Ansible.
TDP is still a work in progress. While we already have a solid base of Hadoop-oriented projects, we are planning on expanding the list of components in the distribution and experimenting with new Apache Incubator projects like Apache DataLab or Apache YuniKorn. We also hope to soon be able to contribute code to the Apache trunk of each project.
The design of a Web UI is also in the works. It should be able to handle everything from configuration management to service monitoring and alerting. This Web UI will be powered by the TDP Lib.
We invested a lot of time in the Ansible roles and we are planning to leverage these in the future admin interface.
The easiest way to get involved with TDP is to go through the Getting Started repository, which walks you through running a fully functional, secured and highly available TDP installation in virtual machines. You can also contribute via pull requests or report issues in the Ansible collection or in any of the TOSIT-IO repositories.