Here we are again with a new release of HarperDB, and this time we are bringing some new features and improvements to the platform. Let’s take a look at what’s new in HarperDB 4.2.
Prerequisites
Before we start, make sure you have HarperDB installed and running. If you don’t have it yet, you can follow the instructions in the documentation.
All the code in this article is also available in this GitHub repository if you want to follow along and test it yourself.
What is new in HarperDB 4.2
First and foremost, it’s important to go through some of the new features and improvements coming with this release; the full release notes can be found here.
Resource API
One of the most important additions is the Resource API, a new interface for accessing data in HarperDB through a uniform way of interacting with the database and its tables. It’s designed to be implemented or extended, so you can also define customized application logic for a specific table or for external resources.
The Resource API also changes how the ReST API works: it’s now implemented following HTTP best practices, which means GET requests fetch data, POST requests create data, PUT requests update data, and so on.
Component architecture
Another interesting update is that custom functions are now a fully fledged component-based system. This allows you to create and use components much like external packages installed into the database, with a single configuration entrypoint.
GraphQL schema definition
It’s now possible to define your table schemas using GraphQL. This means you can declaratively define your database without relying on the database alone to keep its state, and you can version and track your schema changes over time.
These schema definitions can be used to ensure that tables exist, that they have the correct columns, and that the columns have the correct data types. It’s also possible to define the primary key and foreign key relationships between tables.
Real-time data
HarperDB 4.2 now supports real-time data through standardized interfaces for subscribing to changes and messages. Using the new real-time messaging API, it’s possible to build a whole new class of applications that react to changes in the database in real time.
You can, for example, create queues, publish messages, and subscribe to topics. This lets you build applications that react to database changes as they happen, or even use the database itself as a message broker.
These interfaces are available over the most common protocols, like MQTT and WebSockets, as well as Server-Sent Events through standard HTTP requests.
GraphQL Schemas
To start, let’s take a look at the new GraphQL schemas. They’re a new way of defining your database schema, and also a way of versioning and tracking your schema changes over time.
You can run this example by cloning the repository, entering the graphql-schemas folder, and running npm install && npm run start:container (you need to have Docker installed).
If you need a more in-depth explanation, the applications quickstart tutorial has a very intuitive way of setting up the database using GraphQL schemas. In general, though, it’s just a matter of defining your tables and columns in a GraphQL schema file.
If you need more information about what a GraphQL schema is, please refer to the official GraphQL documentation.
Schemas are defined by applications. If you take a look at our docker-compose.yml file, you’ll see that we are defining a volume in our installation:
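Here’s roughly what that looks like (a sketch; the image tag, environment variables, and container path may differ slightly from the repository’s file):

```yaml
services:
  harperdb:
    image: harperdb/harperdb
    environment:
      # the admin:admin credentials used throughout this article
      - HDB_ADMIN_USERNAME=admin
      - HDB_ADMIN_PASSWORD=admin
    ports:
      - '9925:9925' # operations API (used by the Studio)
      - '9926:9926' # ReST API
    volumes:
      # every folder inside ./src becomes a component (application) in the container
      - ./src:/home/harperdb/hdb/components
```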
Each folder inside components in the container is a new application. So, if you want to create a new application, you just need to create a new folder inside components and put your GraphQL schema file in it. That directory is mapped to our src directory, so let’s create a new folder called blog inside it and drop a schema.graphql file in there.
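Something along these lines works as a starting point (the field names are just illustrative choices for this walkthrough; only the directives come from HarperDB):

```graphql
# src/blog/schema.graphql
type Post @table @export(name: "posts") @sealed {
  id: ID @primaryKey
  title: String!
  content: String!
}

type Comment @table @export(name: "comments") @sealed {
  id: ID @primaryKey
  postId: ID @indexed
  content: String!
}
```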
All of HarperDB’s custom directives for GraphQL are defined in the docs.
Let’s dive a bit deeper into this schema. We are defining two types, Post and Comment, and both of them are tables in our database. The @table directive marks the type as a table, and the @export directive marks the table to be exported as a resource in our external API. The @sealed directive says that we don’t accept extra properties from external calls; since HarperDB is a NoSQL-oriented database, it will happily create any new properties you send to it, and this directive prevents that.
Note that @export can take a name argument, which is the name of the table in the external API. If you don’t define it, the name of the table itself is used, so the ReST API would be available at http://localhost:9926/Post and http://localhost:9926/Comment instead of http://localhost:9926/posts and http://localhost:9926/comments. However, using @export with a name argument does not remove the table name: even if you put @export(name: "posts") in your schema, the table will still be available at http://localhost:9926/Post as well.
Now let’s create the application configuration file. You can see it in the application template repository; it’s a config.yaml file present at the root of our application folder.
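A minimal version looks something like this (sketched from the application template; the repository’s file may contain a few more entries):

```yaml
# src/blog/config.yaml
rest: true              # expose the @export-ed tables through the ReST API
graphqlSchema:
  files: '*.graphql'    # pick up our schema.graphql
jsResource:
  files: 'resources.js' # optional JS entrypoint, used later in the Applications section
```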
With that done, open HarperDB Studio and go to the Applications tab, and you’ll see our application over there.
Click on the schema file and then on the refresh button. Now go to the Browse tab and you’ll see that our tables are there.
New ReST API
The new version of HarperDB also brings a new way of interacting with the database through the ReST API. Now we can query the database in a more streamlined and intuitive way, using the Resource API.
But not only that: the new ReST API is more intuitive and follows the best practices for HTTP APIs, meaning GET requests fetch data, POST requests create data, PUT requests update data, and so on.
We also no longer need to use operations the way we did before, which is so much easier because we can leverage our schemas instead. Let’s use the same schema we created before, in the same database, to query for our posts. But first we need to insert a post.
For these examples, I’m using the excellent REST Client extension for VS Code to make the requests, but you can use any other tool you want.
We need the login information for the current user. This is a base64 string with the username and password joined by a :. You can build it manually by encoding admin:admin (in our case) as base64, or you can go to the Config tab in the Studio and grab the token from the example header there.
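With that in hand, creating a post looks like this (the Authorization value is the base64 of admin:admin, and the title and content are just placeholder values):

```http
POST http://localhost:9926/posts HTTP/1.1
Content-Type: application/json
Authorization: Basic YWRtaW46YWRtaW4=

{
  "title": "Hello HarperDB 4.2",
  "content": "Testing the new ReST API"
}
```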
Note that you can use either Post (the name of the table) or posts (the name of the resource) to make the request; both will work. Now let’s query for our posts:
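The simplest request is a GET straight to the resource path:

```http
GET http://localhost:9926/posts HTTP/1.1
Authorization: Basic YWRtaW46YWRtaW4=
```

Querying directly for the table name will not return the data, but the table definition instead. To return the posts, we need to limit our data, which can be done through URL parameters: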
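For example, something like this (limit is one of the query parameters the ReST interface understands; adjust it as needed):

```http
GET http://localhost:9926/posts/?limit=10 HTTP/1.1
Authorization: Basic YWRtaW46YWRtaW4=
```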
We can also query by id through another path parameter:
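Just append the id that came back when the post was created (I’m using a placeholder here; replace it with your own):

```http
GET http://localhost:9926/posts/<post-id> HTTP/1.1
Authorization: Basic YWRtaW46YWRtaW4=
```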
This will return the same post, but outside of an array.
We can also update the post using the PUT method:
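Again using the post’s id (placeholder below); the body follows the same shape as the POST:

```http
PUT http://localhost:9926/posts/<post-id> HTTP/1.1
Content-Type: application/json
Authorization: Basic YWRtaW46YWRtaW4=

{
  "title": "Hello HarperDB 4.2 (updated)",
  "content": "Testing the new ReST API, now with an update"
}
```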
It’s important to note that any property defined as required in the schema (with the !) will also be required in the request; otherwise, it will return an error. The PUT method returns a 204 No Content response, which means the request was successful but there is no content to return.
We can also delete the post using the DELETE method:
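Same path, different verb (placeholder id again):

```http
DELETE http://localhost:9926/posts/<post-id> HTTP/1.1
Authorization: Basic YWRtaW46YWRtaW4=
```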
This will respond with a success status once the record is removed, and the post will no longer show up in subsequent queries.
Applications
Applications are the evolution of the custom functions we had before. They are now much nicer and more intuitive to use thanks to the new Resource API.
To make it work, let’s duplicate our graphql-schemas folder and rename it to resource-api. Then, inside the src/blog folder, we’ll add a package.json:
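Something as small as this is enough (running the npm install in the next step will add the harperdb dependency to it):

```json
{
  "name": "blog",
  "version": "1.0.0",
  "type": "module"
}
```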
This is a very simple package.json file, but it’s important to notice that we are setting type to module. This is because we are using ES Modules in our code, and we need to tell Node about it. Let’s then install the harperdb package with npm install harperdb@next (for the alpha version).
In our config.yaml we are saying that we want to use the resources.js file as our entrypoint, so let’s create it:
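A first version can be as simple as re-exporting the table (a sketch; exporting the class as posts is what makes it reachable under /posts — depending on how your schema’s @export is named, you may want to adjust one of them so the two definitions don’t clash):

```js
// src/blog/resources.js
// exported classes become resources; this one simply inherits all of Post's behavior
import { tables } from 'harperdb';

const { Post } = tables;

export class posts extends Post {}
```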
This is a very simple example, but it’s important to notice that we are extending the Post table from the tables object in the harperdb package. This is a key part of the new Resource API: we can extend the tables and add our own logic to them. So let’s add some logic to bring in all the comments for a post.
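One way to sketch that, assuming the illustrative postId attribute from the schema above (the search and getId calls follow the Resource API docs, but double-check the exact query syntax against the version you’re running):

```js
// src/blog/resources.js
import { tables } from 'harperdb';

const { Post, Comment } = tables;

export class posts extends Post {
  async get(query) {
    // for collection queries (no id in the path), keep the default behavior
    if (this.getId() == null) return super.get(query);

    // fetch the post record itself
    const post = await super.get(query);
    if (!post) return post;

    // collect every comment whose postId matches this post's primary key
    const comments = [];
    for await (const comment of Comment.search({
      conditions: [{ attribute: 'postId', comparator: 'equals', value: this.getId() }],
    })) {
      comments.push(comment);
    }

    // return the post with its comments attached
    return { ...post, comments };
  }
}
```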
Let’s insert a comment in that post:
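Assuming there’s already a post in this instance (insert one the same way we did earlier if not), and using its id as the postId:

```http
POST http://localhost:9926/comments HTTP/1.1
Content-Type: application/json
Authorization: Basic YWRtaW46YWRtaW4=

{
  "postId": "<post-id>",
  "content": "Great post!"
}
```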
Now, if we query for our posts, we’ll get the comments as well:
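Querying the post by id again:

```http
GET http://localhost:9926/posts/<post-id> HTTP/1.1
Authorization: Basic YWRtaW46YWRtaW4=
```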
With this response:
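The exact ids and values depend on the data you inserted, but the shape should look something like this:

```json
{
  "id": "<post-id>",
  "title": "Hello HarperDB 4.2",
  "content": "Testing the new ReST API",
  "comments": [
    {
      "id": "<comment-id>",
      "postId": "<post-id>",
      "content": "Great post!"
    }
  ]
}
```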
Conclusion
These are just some of the many new features and improvements coming with HarperDB 4.2. There are many more, like the real-time data features, which we couldn’t cover here.
I encourage you to test and take a look at the documentation to see what else is new. If you have any questions, feel free to reach out to me on Twitter or LinkedIn.