Setting Up Our Worker Service
We need to set up a new task definition for our worker service, but it will use the same image our Rails app is using.
Let's start by pushing up our new image.
➜ docker build -t production .
➜ docker tag production:latest 154477107666.dkr.ecr.us-east-1.amazonaws.com/dailydrip/produciton
➜ docker push 154477107666.dkr.ecr.us-east-1.amazonaws.com/dailydrip/produciton:latest
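If the push is rejected with an authentication error, Docker first needs to log in to ECR. A minimal sketch, assuming AWS CLI v2 and the registry from the commands above:

```shell
# Log Docker in to ECR before pushing (assumes AWS CLI v2 and valid AWS credentials).
REGISTRY=154477107666.dkr.ecr.us-east-1.amazonaws.com
aws ecr get-login-password --region us-east-1 |
  docker login --username AWS --password-stdin "$REGISTRY"
```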
Now, let's create our new task definition. Since we're using the same underlying image and we'll need to set up the same environment variables, it'll be easier to base our new task definition on the existing web task definition we created a few videos back.
Before we do that, we need to set up a couple of new environment variables on our existing web task definition.
For that, we need to select the latest version of our Web task definition and click Create new revision.
Once we're in the configuration page, we need to scroll down and select our produciton container. Scroll down a bit and add these environment variables:
FAKTORY_URL=tcp://:<password>@<internal-ip>:7419 (the host here is the internal IP for our Faktory task)
FAKTORY_PROVIDER=FAKTORY_URL
FAKTORY_PROVIDER is needed in the version of the faktory_worker_ruby gem we are using, though on the current master branch it is no longer required. It simply names the environment variable that holds the Faktory service's URL.
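For reference, here is the same pair of variables in shell form, e.g. for testing the worker locally. The password and internal IP below are placeholders, not values from this setup:

```shell
# FAKTORY_PROVIDER names the variable that holds the server URL;
# faktory_worker_ruby then reads the URL out of that variable.
# Placeholder password/IP -- substitute your Faktory task's actual values.
export FAKTORY_PROVIDER=FAKTORY_URL
export FAKTORY_URL="tcp://:some-password@10.0.1.23:7419"
```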
Once we've added that, we can save the new revision of our web task definition.
Now, we can move on to creating our new task definition for our worker service. Let's start by selecting the latest revision of our web app task definition and clicking Create new revision.
From here, we need to change two things. First, we need to change the name of our task definition to worker; then we need to scroll down to our produciton container and click on it.
Once we are in the modal for our container settings, we need to set the command to bundle,exec,faktory-worker. After that, we can scroll to the bottom and click Update. Then, after the modal disappears, we can scroll to the bottom and create the new revision.
We should now have a new task definition named worker, and we can move on to creating our new service for the worker.
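In the task definition JSON, the comma-separated command from the console ends up as an array on the container definition. A trimmed sketch, with all other fields omitted:

```json
{
  "containerDefinitions": [
    {
      "name": "produciton",
      "command": ["bundle", "exec", "faktory-worker"]
    }
  ]
}
```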
Creating the Service
Now that we have our task definition set up, let's create our worker service. To do that, we need to navigate over to our Clusters dashboard and create a new service.
We're going to choose these options (leaving the rest as their defaults) to create our service:
- Launch Type:
- Task Definition:
- Service name:
- Number of tasks:
For our network settings, we're going to choose:
- Cluster VPC: The same cluster VPC that was used to configure our other services.
- Subnets: The same subnet that was used to configure our other services.
For our Auto Scaling settings, we'll leave the default and continue to the review step.
Lastly, look over the configuration, make sure everything is correct, and create the service.
After we've created our service, we can click the View Service link to go to the dashboard for our service. After a minute or so, we'll probably see that the service fails to start. This is because we haven't given our worker service access to the Faktory service.
We can follow the same steps above that we used to provide access to our Rails app. Once we've edited the rules, it should look similar to this.
Now, we can restart our new service.
➜ aws ecs update-service --service worker --cluster Produciton --force-new-deployment
After a moment, we can navigate back to our AWS console and see that our worker service is now running.
We can verify that we've successfully connected to the Faktory service by visiting the Faktory Web UI and looking at the number of connections.
We can also go into our Rails app, try to share a checklist, and verify that the action triggers a new background job.
Oops! It looks like we forgot to give our worker service access to our database.
Let's correct that and try again.
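Granting that access is another security-group rule, just like the one for Faktory. A hedged sketch with the AWS CLI; the group IDs are placeholders, and Postgres on port 5432 is an assumption about the database:

```shell
# Allow the worker's security group to reach the database (placeholders throughout).
DB_SG=sg-0123456789abcdef0      # the database's security group
WORKER_SG=sg-0fedcba9876543210  # the worker service's security group
aws ec2 authorize-security-group-ingress \
  --group-id "$DB_SG" \
  --protocol tcp --port 5432 \
  --source-group "$WORKER_SG"
```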
As you can see, there were a few failed attempts while I was making the changes. They were caused by a few additional background jobs I had created, which retried before I could update the settings.
But now, we're finally able to run background jobs! Yay!
Today we cloned our existing web app task definition to create our worker task definition and set up a worker service.
However, with all that we've done, there are still a few more improvements that could be made for real production usage.
First, does it actually make sense to run Faktory as a service using ECS?
As of now, we don't have a good way of pointing the other services (web/worker) to it outside of using the internal IP address. We could have put the service behind a load balancer, but that doesn't really make sense, since we're not going to run more than one instance of Faktory.
With the current configuration, if we have to restart the Faktory service, it will be assigned a new IP address and we'll need to update the task definitions for both the web app and the worker.
Another potential issue is that we didn't mount an external volume and use that as the data store for Faktory. So, if you were to restart the Faktory service, you would lose any jobs left in the queue.
That can easily be fixed by mounting a volume on the Faktory container and updating the configuration settings where Faktory stores all its data.
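As a sketch, the task definition could declare a volume and mount it at the directory where the official Faktory image keeps its data, /var/lib/faktory. The EFS file system ID and container name below are placeholders:

```json
{
  "volumes": [
    {
      "name": "faktory-data",
      "efsVolumeConfiguration": { "fileSystemId": "fs-12345678" }
    }
  ],
  "containerDefinitions": [
    {
      "name": "faktory",
      "mountPoints": [
        { "sourceVolume": "faktory-data", "containerPath": "/var/lib/faktory" }
      ]
    }
  ]
}
```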
In closing, I hope you've enjoyed learning some of the basics of setting up Faktory and encourage you to take a look at the Faktory documentation. Thanks!