Faraz Patankar

Queues on Railway

Queues are an asynchronous form of service-to-service communication used to solve a variety of problems elegantly. They're used to smooth out processing peaks, offload heavy work from one server to many small workers, or buffer and batch work. They're common practice once your app reaches a heavy workload.

We have been getting a massive influx of questions revolving around deploying queueing solutions on Railway, and while we don’t have first-party support for a queueing solution, it is straightforward to deploy one using a language-level library.

Examples of such libraries are Celery for Python, Resque for Ruby, rmq for Go, and several others across the languages we support. In this post, we will go over an example template we created to work with queues in JavaScript using BullMQ and Redis.

BullMQ with BullBoard

We'll use BullMQ and Redis to set up our queueing solution. Additionally, we'll take advantage of a package called bull-board, an open-source dashboard built on top of BullMQ. It helps us visualize our queues and provides us with options to retry and clean our jobs.

We will also deploy a Fastify server to serve our dashboard and allow us to add jobs to our queue via an API endpoint.

The Template

Click the button above to deploy your own queue to Railway using our template. We’ll explain how it works as you read the sections below.

The Environment Variables

// env.ts
import { envsafe, port, str } from 'envsafe';

export const env = envsafe({
  REDISHOST: str(),
  REDISPORT: port(),
  REDISUSER: str(),
  REDISPASSWORD: str(),
  PORT: port({
    devDefault: 3000,
  }),
  RAILWAY_STATIC_URL: str({
    devDefault: 'http://localhost:3000',
  }),
});

// queue.ts
import { ConnectionOptions } from 'bullmq';

import { env } from './env';

const connection: ConnectionOptions = {
  host: env.REDISHOST,
  port: env.REDISPORT,
  username: env.REDISUSER,
  password: env.REDISPASSWORD,
};

The env.ts file specifies the required environment variables, all of which are automatically provisioned if you use the template. All the Redis-related variables are then used to establish a connection to the Redis instance in the queue.ts file.

The queue, the board, and the worker

// index.ts
import { IncomingMessage, Server, ServerResponse } from 'http';

import fastify, { FastifyInstance, FastifyRequest } from 'fastify';
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { FastifyAdapter } from '@bull-board/fastify';

import { createQueue } from './queue';
const welcomeEmailQueue = createQueue('WelcomeEmailQueue');

const server: FastifyInstance<Server, IncomingMessage, ServerResponse> =
  fastify();

const serverAdapter = new FastifyAdapter();
createBullBoard({
  queues: [new BullMQAdapter(welcomeEmailQueue)],
  serverAdapter,
});
serverAdapter.setBasePath('/');
server.register(serverAdapter.registerPlugin(), {
  prefix: '/',
  basePath: '/',
});

Next up, we use the BullMQ package to create a new queue called WelcomeEmailQueue and use the FastifyAdapter to set up our dashboard so we can visualize and monitor what's going on with our queue.

server.get(
  '/add-job',
  {
    schema: {
      querystring: {
        type: 'object',
        properties: {
          email: { type: 'string' },
          id: { type: 'string' },
        },
      },
    },
  },
  (req: FastifyRequest<{ Querystring: AddJobQueryString }>, reply) => {
    if (
      req.query == null ||
      req.query.email == null ||
      req.query.id == null
    ) {
      reply
        .status(400)
        .send({ error: 'Requests must contain both an id and an email' });

      return;
    }

    const { email, id } = req.query;
    welcomeEmailQueue.add(`WelcomeEmail-${id}`, { email });

    reply.send({
      ok: true,
    });
  }
);

Next, we set up an /add-job endpoint with Fastify so we can add jobs for our worker via an API request. Each request must include the email and id query parameters.

import { Worker } from 'bullmq';

// Must match the name of the queue created earlier
const queueName = 'WelcomeEmailQueue';

new Worker(
  queueName,
  async (job) => {
    for (let i = 0; i <= 100; i++) {
      await job.updateProgress(i);
      await job.log(`Processing job at interval ${i}`);

      if (Math.random() * 200 < 1) throw new Error(`Random error ${i}`);
    }

    return { jobId: `This is the return value of job (${job.id})` };
  },
  { connection }
);

Lastly, we have a dummy worker that demonstrates job progress and randomly fails jobs so users can experience what a failed job looks like in the dashboard and try out the retry option.

If this were a real-world application, this worker would handle the sending of the welcome email to our users.
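For reference, this is our own back-of-the-envelope math rather than anything in the template: the `Math.random() * 200 < 1` check gives each of the 101 iterations a 1-in-200 chance of throwing, which compounds to roughly a 40% failure rate per job:

```typescript
// Probability that at least one of the 101 iterations (i = 0..100)
// hits the 1/200 failure branch in the worker above.
const steps = 101;
const pFailPerStep = 1 / 200;
const pJobFails = 1 - Math.pow(1 - pFailPerStep, steps);
console.log(pJobFails.toFixed(2)); // ≈ 0.40
```

That high rate is deliberate here, so you'll see failed jobs in the dashboard after only a handful of requests.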

The demo project

We have a fully functional live demo of the queue for users to play around with. Here's a link to the Railway project and the queue dashboard UI. Below is an example curl request you can run to see the dashboard update live.

curl "https://queue-service-production.up.railway.app/add-job?id=1234&email=hello%40world.com"

Conclusion

We hope this post was useful if you were exploring options to deploy a queueing solution on Railway. We have kept the template intentionally simple so it can be extended, both in functionality and to work with other queueing libraries.

We’d love for the community to contribute more templates and options for our users, especially queueing solutions involving other languages and libraries. If you do end up creating such a template, feel free to reach out to us on Discord as we’d be happy to feature it both on this blog post and our templates page!