
Benefits of Pairing - 28 Dec 2016

A common misconception I’ve found regarding pairing is that it’s mostly done to improve code quality. While that’s certainly one of its many benefits, I think the most important ones are related to culture and knowledge sharing.


Culture

One of the more difficult tasks for an engineering manager is building a shared culture among the team. Pairing is one of the best tools for accomplishing this goal. An easy example would be if you’re looking to establish TDD as a common practice.

It’s one thing to tell your developers to use TDD and even give a demonstration of its benefits, but if you want to ensure they’re following it, the best way is to pair with them. If you’re pairing on a regular basis and test drive all of the features while pairing, the other developers will start to adopt TDD and even bring new developers up to speed when pairing with them.

You’ve now established a culture of using TDD that will continue even if you stop pairing with your developers. The practice of pairing ensures that developers who currently use TDD will instill that habit in new team members.

Knowledge Sharing

One of the many issues a growing team will encounter is the concept of isolated knowledge. If only one person knows how the new billing system works, product managers, engineers, and stakeholders will all need to go to that one person with questions.

One of the best ways to mitigate this issue is through pairing. If you establish a culture of pairing, you’ll at a minimum have at least two people with knowledge of any given feature. This also allows for healthy discussion between engineers when it comes to building out new functionality around those features. It will also help drive discussion when it comes to grooming or estimating stories.

If your team is in the habit of rotating pairs, pretty soon your entire team will have a good idea of how each feature works. Your engineers will no longer get calls while on vacation, because anyone on the team is capable of answering those questions.

Getting Started

Introducing pairing to your team can be a challenge depending on your engineers. A lot of developers can be pretty averse to the idea of pairing if they don’t have experience with it. I think slowly introducing the concept is the right approach, as well as keeping it optional for each team.

A low-barrier way to introduce pairing is to make code reviews a process that requires a pair. Paired code reviews are a great practice in general, because it’s a lot easier to relate to the original author when you’re talking to them while reviewing. This helps avoid some of the tensions that can arise through text-based reviews. It will also help lower cycle time, because engineers need to actively seek out a reviewer instead of waiting for one.

Once developers are in the habit of pairing for code reviews, you can start marking stories or features as items that need to be paired on. More complex stories, or ones with a lot of stakeholders, are great use cases for pairing. Pairing on those stories will help with knowledge sharing, and get developers used to the idea of pairing.

I’ve found that once the ball starts rolling on pairing, engineers will usually jump on pretty quickly.


Tools

ScreenHero allows you to share your entire screen with your pair and provides voice chat. This is useful for paired code reviews, which typically require a browser, as well as for when developers switch to a browser while working on a feature.

Tmate is another great option, but has limitations for code review, as well as feature work that can’t be done in the terminal. You’ll also need a way to communicate with your pair. I’ve found Discord to be a great option there.

Add Swap to Ubuntu - 29 Nov 2016

If you’ve been playing around with Elixir on small web servers, you’ve probably noticed that you can run out of memory while building your application. An easy solution to this problem is adding swap space to your server. Here’s a quick setup guide for Ubuntu.

The first thing we’ll need to do is allocate space for our swap file.

  sudo fallocate -l 1G /swapfile

Once that’s done, we’ll need to set the correct permissions on our file, mark it as swap space, and turn it on with these commands:

  sudo chmod 600 /swapfile

  sudo mkswap /swapfile

  sudo swapon /swapfile

Once that’s done, we’ll want to make our swap file permanent. This way the swap sticks around even when we reboot our server, or if it crashes.

  echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
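To confirm everything worked, you can check that the swap file is active. A quick sanity check; the exact output will vary by server:

```shell
# List active swap devices and show overall memory/swap usage
swapon --show
free -h
```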

Elixir Web Scraping with Floki - 06 Sep 2016

One thing I end up needing in every language is the ability to scrape data from a web page. I’ve been pretty happy with the tools available in Elixir for doing so; here’s a quick preview.

In this snippet, we’re going to be crawling my personal blog using HTTPoison and Floki.

  index_url = "https://www.mockra.com"

  index_body = HTTPoison.get!(index_url, [], hackney: [:insecure]).body

  posts = Floki.find(index_body, "section.post")

You’ll quickly notice something strange about the options we’re passing to HTTPoison. I’ve run into an issue when crawling some websites due to a bad certificate, so this is a workaround for now. You can find more details about the issue here.

Once we get the content body, we can pass it into Floki to search for the information we want. In this example we’re grabbing the posts on the index page. If we wanted to get the title of the first post, we could do so with:

  posts
  |> List.first
  |> Floki.raw_html
  |> Floki.find("h3")
  |> Floki.text

This is a bit of a crude example, but in this case, we’re grabbing the first post from the list and converting it back to html. This lets us use the find function again to grab the h3 element for the post. We then use Floki.text to get the post title.

Scheduled Tasks with Elixir - 25 Aug 2016

I’ve been writing several bots and scripts using Elixir lately, and I’ve found it to be a pretty great option. One of the key tools I’ve been using is the quantum-elixir library.

If you’ve ever used cron jobs before, quantum provides the same type of functionality. You can set up scheduled tasks to run at specific intervals through your config file. Here’s an example config/config.exs file:

  config :quantum, cron: [
    # Every 2 minutes
    "*/2 * * * *": {Mockra.Bot, :run},
  ]

In this example, the run function on my bot module will run every 2 minutes.

Once I’ve set up my script and config, I’ll use mix to run it. I’ll run the following command in a tmux session:

  MIX_ENV=prod mix run --no-halt

I’ve transferred a few ruby and node scripts to Elixir, and the droplet I use for scripts has dropped from ~85% CPU load to ~1.5%. These are all unoptimized, but I was pretty surprised by the results.

OS X - Setup Postgres for Phoenix - 21 Jul 2016

Here’s a quick guide for setting up postgres to work with Phoenix. The first step is installing postgres through homebrew.

  brew install postgresql

After that’s finished, you’ll need to run the setup command while specifying the utf8 encoding.

  initdb /usr/local/var/postgres -E utf8

Once that’s complete you can start/restart postgres through the services command.

  brew services restart postgresql

The final step is creating your default postgres user.

   createuser -s postgres

You can now successfully run mix ecto.create, assuming you have the following config in config/dev.exs.

  config :api, Api.Repo,
    adapter: Ecto.Adapters.Postgres,
    username: "postgres",
    database: "api_dev",
    hostname: "localhost",
    pool_size: 10

Feel free to replace the database option with something specific to your project.

Rails Formulaic - Form Testing - 14 Jul 2016

If you’re already using factory girl and want to clean up your test suite, then the Formulaic gem is a great option. It allows you to pass in a hash of attributes to fill out a form.

Here’s an example:

  fill_form_and_submit(:user, :new, attributes_for(:user))

Productivity Tools - 31 May 2016

With the new responsibilities that come with being a parent, I’ve been thinking more about how to maximize my productivity and mental space. Here are a couple of tools/tricks I find most helpful in my day-to-day life.

Inbox Zero

One of the best things I’ve done for my productivity and mental capacity was a move towards inbox zero. If you have 5,000 items in your inbox, it’s impossible to tell what you still need to respond to, or act upon. If your inbox has 6 items in it, it’s easy to see what tasks you need to do, or who you need to get back to.

By archiving all of the e-mails that I’ve “completed”, I’m able to quickly see what items need my attention. If there’s a pull request I need to review, it sits in my inbox until I’ve had the chance to do so. If my wife sends me a chore list, I can leave it in my inbox until I’ve completed all of the items on it. Since I’m always looking at my e-mail during the day, it’s an easy way to keep track of what needs to be done.

I never have to worry about forgetting to respond to an e-mail, or ignoring a comment on a project management tool. It removes a lot of the stress I used to feel that came from managing a messy inbox.

Aggregated News

For a while, I found myself struggling to keep up with all of the news and information related to my interests. I was checking Hacker News daily for programming news, TeamLiquid for the latest Starcraft discussions, and so on. A while ago, I decided that I was spending way too much time checking news sources and needed to cut them out of my daily routine. I settled on Twitter and e-mail as my primary sources for news.

I signed up for newsletters for Hacker News, Ruby Weekly, Node Weekly, EmberJS, and many others. I now get a curated list of content once a week from each of the sources I used to spend time on daily. This allows me to check a bunch of interesting articles at once, dismiss uninteresting ones, and save good articles to Pocket for later.

I follow an assortment of developers on Twitter, and I find it a useful resource for keeping up with the latest trends and ideas in programming. Since I didn’t see myself being able to cut out Twitter from my daily routine, I decided to pipe in content from non-developer interests to Twitter. To this end, I created a Twitter Bot to tweet the top posts from subreddits I used to follow. I no longer spend hours on reddit checking news and reading discussions, the top content I care about is sent to my Twitter feed.

Neovim - 23 May 2016

I recently switched to neovim, and was surprised by how easy the transition was. You can install neovim on OS X by running:

  brew install neovim/neovim/neovim

The first thing I needed to do was link my vimrc to my nvim config. I did so with the following commands:

  mkdir -p ~/.config/nvim
  ln -s ~/.vimrc ~/.config/nvim/init.vim

Neovim should now look and behave pretty similarly to your standard vim setup. Depending on your setup, there are likely a few changes you’ll need to make to your .vimrc. Here are some of the issues I ran into:

I use vim-test to run tests while in vim, and needed to update my testing strategy to work with neovim. This was done easily enough by adding the following to my .vimrc:

  let test#strategy = "neovim"

The switch also messed up my system clipboard commands, so I had to switch to use the system register. I created the following leader commands for yanking from vim to my system clipboard:

  map <leader>y "*y<cr>
  nnoremap <leader>yf :let @*=expand("%")<CR>

The first command will simply yank my current selection to the clipboard. The second command will add the current filename to the clipboard.

Those are the only changes I’ve had to make since switching to neovim, and I’ve been pretty happy with the results so far. I’ll be looking to adapt my current setup to take more advantage of neovim in the future as well.

Homebrew Services - 09 May 2016

If you’re like me, you have a long list of homebrew packages installed, and quite a few of them running through launchd. Starting, stopping, and restarting these packages has probably been a cumbersome process. Luckily, there’s an easy solution to your problem in Homebrew Services.

You can install homebrew services by running:

  brew tap homebrew/services

The first thing you’ll want to do from there is see a list of currently running services, which can be done by running:

  brew services list

Here’s an example of the output:

  mysql        stopped
  postgresql   started username LaunchAgents/homebrew.mxcl.postgresql.plist
  redis        started username LaunchAgents/homebrew.mxcl.redis.plist
  rethinkdb    stopped

Now that I have Homebrew Services, I can start mysql by running:

  brew services start mysql

If I wanted to stop running redis, I could do that with:

  brew services stop redis

Homebrew services also comes with a handy utility for cleaning up stale services and unused plists.

  brew services cleanup

RethinkDB CSV Exports - 27 Apr 2016

I’ve been writing some toy scripts recently that use RethinkDB as a data store. I’ve gotten a couple of requests to export the data to a CSV document for manipulation in Excel. Luckily, there’s an easy way to export your data using the rethinkdb command line tool.

  rethinkdb export -e dbname.posts --format csv --fields title,author

This command allows you to specify a csv or json format for the export. You’ll then need to pass in the database and table you want to export. The fields option allows you to pass in the fields you wish to export.

Web Scraping in Node with Cheerio - 10 Apr 2016

If you’re looking to write a simple bot or script that does web scraping, then node might be a great option. The cheerio library makes it easy to work with HTML. Here’s a quick example:

  npm install request --save
  npm install cheerio --save

Here’s a sample script for parsing article information from a list (the URL is a placeholder):

  const request = require('request')
  const cheerio = require('cheerio')

  const articleListUrl = 'https://example.com/articles'

  request(articleListUrl, function (err, resp, body) {
    const $ = cheerio.load(body)

    const article = $('ul#articles-list li.article:first-of-type')
    const articleLink = article.find('.media-body a:first-of-type')

    const articleTitle = articleLink.text()
    const articlePath = articleLink.attr('href')

    console.log(articleTitle, articlePath)
  })

It’s surprising how well the jQuery API lends itself to web scraping.

QA Testing with Nightmare - 29 Mar 2016

There’s a lot of tools available for doing automated browser testing, but I recently found out about nightmare and I’ve been pretty impressed.

Here’s an example of testing with Mocha/Nightmare:

  var Nightmare = require('nightmare');
  var expect = require('chai').expect; // jshint ignore:line

  describe('test yahoo search results', function() {
    it('should find the nightmare github link first', function*() {
      var nightmare = Nightmare()
      var link = yield nightmare
        .goto('http://yahoo.com')
        .type('input[title="Search"]', 'github nightmare')
        .click('#UHSearchWeb')
        .wait('#main')
        .evaluate(function () {
          return document.querySelector('#main .searchCenterMiddle li a').href
        })
        .end()
      expect(link).to.equal('https://github.com/segmentio/nightmare')
    })
  })

I’ve only done some basic testing so far, but I’ve found nightmare to be a reliable solution for automated QA.

EmberJS Component Class Bindings - 24 Mar 2016

Here’s a quick post on using one of ember’s lesser-known component features. While recently working on the game of life in ember, I was able to create a cell component without a template.

This was accomplished by using classNameBindings, and a click event handler.

  import Ember from 'ember';
  const { get, set, computed } = Ember;
  const { alias } = computed;

  export default Ember.Component.extend({
    tagName: 'span',
    classNames: ['cell'],
    classNameBindings: ['alive'],
    alive: alias('cell.alive'),

    click() {
      set(this, 'alive', !get(this, 'alive'));
    }
  });

In this example, we’re using a simple version of classNameBindings where we’re just passing in a property. When the alive property returns true, the alive class is added. When that value is false, the cell component only has the default cell class.

Another way we could have handled this using classNameBindings would be passing in classes for both states.

  classNameBindings: ['alive:enabled:disabled']

In this example, we would add the enabled class for a truthy value, and the disabled class for a falsy value.
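The resolution rule can be sketched in plain JavaScript. This is a simplified illustration of the behavior described above, not Ember’s actual implementation:

```javascript
// Simplified sketch of how a classNameBindings entry resolves to a class.
// Not Ember's real code; the function name is just for illustration.
function resolveBinding(binding, value) {
  const [property, truthyClass, falsyClass] = binding.split(':')
  if (truthyClass === undefined) {
    // Bare property form: add the property name as a class when truthy
    return value ? property : null
  }
  return value ? truthyClass : falsyClass
}

resolveBinding('alive', true)                   // → 'alive'
resolveBinding('alive:enabled:disabled', false) // → 'disabled'
```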

EmberJS Parse Pdf on Upload - 15 Mar 2016

If you’re building an application that deals with PDF files, then it can be useful to extract information from a PDF that a user wants to upload. Here’s an example of grabbing the page count from a PDF using the pdf.js library.

The first step is installing the library using bower.

  bower install pdfjs-dist --save

Once that’s done, we’ll need to include the file. This can be done by adding something like the following to ember-cli-build.js (the exact path depends on your pdfjs-dist version):

  app.import('bower_components/pdfjs-dist/build/pdf.js');

Now that we’ve finished installing the library we’re going to use to work with PDF files, we can write the code for our component. The first thing we’ll need to do is create a FileReader in the init call of our component.

  init() {
    this._super(...arguments);
    const fileReader = new FileReader();
    fileReader.onload = get(this, 'parseFile').bind(this);
    this.fileReader = fileReader;
  },

We’ll also need to define the parseFile function on our component, which will handle extracting the page count for us. Here’s what that function should look like:

  parseFile: async function() {
    const data = new Uint8Array(get(this, 'fileReader').result);
    const pdfData = await PDFJS.getDocument(data);
    set(this, 'pageCount', pdfData.numPages);
  },

Once our FileReader is set up, we can pass it the file in our upload action. This example assumes that a fileLoaded action will be called with the file the user has selected.

  actions: {
    fileLoaded: function(file) {
      get(this, 'fileReader').readAsArrayBuffer(file);
    }
  }

With these three pieces in place, we can show a page count to the user when they upload their document. If we’re looking for a document of a specific size, we can display a warning message to the user, and so on. PDFJS can also be used to show a preview of the document, along with a variety of other features.

I’ve been using the ember-cli-file-picker library for my applications to handle the file selection.

EmberJS File Uploads with S3 - 13 Feb 2016

There’s a lot of options out there for handling file uploads with EmberJS, but I’m going to go over my favorite option at the moment. It involves hosting your image on Amazon S3, but has the benefit of never sending the file to your server. Everything is handled on the client side using a signed request generated by your server.

The goal of this blog post will be to write an image-uploader component that will look something like:

  {{image-uploader url=post.imageUrl}}

File Picker

The first thing we’re going to do is cheat a little by piggybacking off of an ember-cli file uploader addon. The one we’re going to use is ember-cli-file-picker. You can install this addon by running:

  ember install ember-cli-file-picker

Image Uploader

Once that’s finished installing, we’re going to generate our image-uploader component. This can be done by running:

  ember g component image-uploader --pod

We can update our component template, so that it uses the file picker addon we installed. Here’s what our app/components/image-uploader/template.hbs should look like.

  {{#file-picker fileLoaded="fileLoaded" preview=false}}
    Drag here or click to upload a file
  {{/file-picker}}

You’ll note that we’re passing in a fileLoaded action to the file-picker component. We’ll need to define this action on our image-uploader component, and it will handle uploading our file whenever a new file is added.

Here’s a quick look at what our app/components/image-uploader/component.js will look like with the action:

  import Ember from 'ember'
  const { set } = Ember

  export default Ember.Component.extend({
    actions: {
      fileLoaded: function(file) {
        set(this, 'file', file)
      }
    }
  })

For now we’re simply storing the file on our component. We’ll need to add in functionality for uploading our file to S3 if we want our image uploader to be complete. We’re going to use two service objects for handling this process.

Signed Request Service - Ember

The first one we’re going to create is a signed-request service. This service will be responsible for fetching a signed request url from our server. Here’s what our completed app/signed-request/service.js file will look like:

  import config from "../config/environment"
  import Ember from 'ember'

  export default Ember.Service.extend({
    getUrl(fileName, fileType) {
      return new Promise(function(resolve, reject) {
        const url = `${config.API_HOST}/signed-request`
        const params = { file: fileName, type: fileType }

        jQuery.post(url, params, (data) => {
          if (data.errors) reject(data.errors)
          else resolve(data)
        })
      })
    }
  })

A couple of pieces to notice about this service. The first is that our config/environment file is expected to set an API_HOST. I use this property to set a different API host for each environment my application will run in. For example:

  if (environment === 'development') {
    ENV.API_HOST = 'http://localhost:3000'
  }

  if (environment === 'production') {
    ENV.API_HOST = 'https://mockra.com'
  }

The next thing you’ll notice is that we also expect our server to handle a route called /signed-request. This is the route that will handle generating a signed request that we’ll use to upload our file to Amazon S3. Our service also expects a fileName and fileType as arguments.

Node Signed Request Example - Server

Here’s an example route for generating the signed-request using Node/Koa. You should be able to find documentation for the AWS library of your choice as well. This example uses a few different files for setting up the AWS client, as well as creating a signed-url.


  // Example Config Keys
  s3Options: {
    accessKeyId: process.env.S3_KEY,
    secretAccessKey: process.env.S3_SECRET,
    region: process.env.S3_REGION || 'us-west-1',
    bucket: process.env.S3_BUCKET
  }

  const config = require('../config')
  const aws = require('aws-sdk')

  const client = new aws.S3()

  module.exports = client


  const config = require('../config')
  const client = require('./s3-client')

  exports.getUrl = async (fileName, fileType) => {
    return new Promise((resolve, reject) => {
      const bucket = config.s3Options.bucket
      const params = {
        Bucket: bucket,
        Key: fileName,
        Expires: 60,
        ContentType: fileType,
        ACL: 'public-read'
      }

      client.getSignedUrl('putObject', params, function(err, data){
        if (err) reject(err)
        const returnData = {
          signedRequest: data,
          url: `https://${bucket}.s3.amazonaws.com/${fileName}`
        }
        resolve(returnData)
      })
    })
  }


  const signedUrl = require('../util/s3-signed-url')

  router.post('/signed-request', async (ctx, next) => {
    const body = ctx.request.body
    const urlData = await signedUrl.getUrl(body.file, body.type)

    ctx.body = urlData
  })

S3 Upload Service - Ember

Now that we’ve got the signed-request service and server response set up, it’s time to create the service that will handle the actual upload. The first thing we’ll need to do is generate that service. We can do so by running:

  ember g service s3-upload --pod

The code for our app/s3-upload/service.js will look like:

  import Ember from 'ember'

  export default Ember.Service.extend({
    uploadFile(file, signedRequest) {
      return new Promise(function(resolve, reject) {
        const xhr = new XMLHttpRequest()
        xhr.open("PUT", signedRequest)
        xhr.setRequestHeader('x-amz-acl', 'public-read')
        xhr.onload = () => { resolve() }
        xhr.onerror = () => { reject() }
        xhr.send(file)
      })
    }
  })

Finishing our Image-Uploader Component

Once the necessary services are setup, we can add the final touches to our image-uploader component. The completed app/components/image-uploader/component.js file will look like:

  import Ember from 'ember'
  const { get, set, computed } = Ember
  const { service } = Ember.inject

  export default Ember.Component.extend({
    signedRequest: service(),
    s3Upload: service(),

    uploadImage: async function() {
      const fileName = `${get(this, 'file.name')}-${Date.now()}`
      const fileType = get(this, 'file.type')
      const signedData = await get(this, 'signedRequest')
        .getUrl(fileName, fileType)
      await get(this, 's3Upload')
        .uploadFile(get(this, 'file'), signedData.signedRequest)
      set(this, 'url', signedData.url)
    },

    actions: {
      fileLoaded: function(file) {
        set(this, 'file', file)
        get(this, 'uploadImage').bind(this)()
      }
    }
  })

The final image-uploader will watch for a file being loaded through our file-picker addon. Once a file is selected, we’ll generate a signed-request from our server. Once we have that, we’ll upload the file to S3, and finally update the provided url.

You’ll notice that we’re appending Date.now() to our fileName. This is done to prevent duplicate file names from conflicting. There’s a wide range of other options for handling these issues, but this is one of the simpler solutions.
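The naming scheme is just a timestamp suffix, sketched here in isolation. The helper name is mine, not part of the component:

```javascript
// Append a timestamp so repeated uploads of the same file name don't collide.
// Mirrors the `${file.name}-${Date.now()}` expression used in the component.
function uniqueFileName(name, now = Date.now()) {
  return `${name}-${now}`
}

uniqueFileName('photo.png', 1455000000000) // → 'photo.png-1455000000000'
```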

The best part about this approach is that we never have to worry about our API handling any file data.

Node Advanced Rest Serialization - 05 Feb 2016

In a previous blog post, I introduced a serializer library I created for use in my JSON APIs. You can find that post here.

In this post, I’m going to show an example of how I use this library in my actual applications. The first step I take is creating a serializer for a specific type of object. For this example, I’m going to show a user serializer.

  var serialize = require('rest-serializer')
  var _ = require('lodash')

  module.exports = function (data, args) {
    var key = ((_.isArray(data)) ? 'users' : 'user')
    var without = ['token', 'password', 'passwordConfirmation']
    var options = { without: without }

    args = args || {}
    if (args.withPosts) options.sideload = { name: 'posts' }

    return serialize(key, data, options)
  }

This serializer handles a few different things for us. First, it sets the correct key based on whether we’re serializing multiple users or a single user. Second, it excludes three values from our records, since we don’t want to expose tokens or passwords in our API. Finally, it accepts an option to sideload post records.
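The key selection and field stripping can be sketched in plain JavaScript. This is a simplified stand-in for what the rest-serializer call does here, not the library’s actual implementation:

```javascript
// Simplified stand-in for the serializer behavior described above:
// choose a root key, then strip sensitive fields from each record.
function serialize(key, data, options = {}) {
  const without = options.without || []
  const strip = (record) => {
    const clean = Object.assign({}, record)
    without.forEach((field) => delete clean[field])
    return clean
  }
  const payload = Array.isArray(data) ? data.map(strip) : strip(data)
  return { [key]: payload }
}

serialize('user', { id: 1, token: 'secret' }, { without: ['token'] })
// → { user: { id: 1 } }
```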

We can then use this serializer in our route, like so:

  const User = require('../models/user')
  const serialize = require('../serializers/user')

  exports.index = async (ctx, next) => {
    const users = await User.filter({firstName: 'Mary'})
    ctx.body = serialize(users)
  }

  exports.show = async (ctx, next) => {
    const user = await User.get(ctx.params.id).getJoin({ posts: true })
    ctx.body = serialize(user, { withPosts: true })
  }

The serializer allows us to return an array of users in our index route without including related posts. When we go to fetch a specific user, we pass in our withPosts option to get the related post records. In both of these routes, we don’t need to worry about exposing sensitive data, because it’s handled by the serializer.

Inject EmberJS Router into Components - 31 Jan 2016

If you find the need to change routes in a component action, then you’ll need access to the router. Here’s a quick example of an initializer that will inject the router into your components.


  export function initialize(application) {
    application.inject('component', 'router', 'router:main')
  }

  export default {
    name: 'component-routes',
    initialize
  }

You can then access the router in your components like so:

  get(this, 'router').transitionTo('dashboard')

Node - EmberJS Rest Serializer - 28 Jan 2016

Node Rest Serializer

I recently created a module for serializing objects pulled from my database for my JSON API. I’m currently using the RestSerializer on a few of my projects while waiting for more adoption of the JSON API spec. I felt some pain around manually serializing my objects in routes, so I decided to create a library to handle the serialization.

When designing the API for my module, I wanted to keep the interface simple and functional. My goal was to be able to do something simple like the following in my node applications.

  ctx.body = serialize('users', users, {
    sideload: { name: 'posts' },
    without: ['password', 'token']
  })

I’ll typically create a unique serializer for each type of document in my API, which helps to keep my routes cleaner. A user serializer would give me the option to do something like:

  ctx.body = userSerializer(users, { withPosts: true })

If you’re interested in providing an API for the EmberJS RestSerializer, or adding better serialization support to your Node apps, feel free to check out the module here.

EmberJS - Set Current User Service - 20 Jan 2016

If you’re using the ember simple auth addon for authentication, then you’ll likely want to set up a service for setting the current user. Here’s an example of overriding the session service to set up a currentUser.

  import Ember from 'ember';
  import SimpleSession from "ember-simple-auth/services/session";
  const { get, set, observer } = Ember;
  const { service } = Ember.inject;

  export default SimpleSession.extend({
    store: service(),

    setCurrentUser: observer('isAuthenticated', async function() {
      if (get(this, 'isAuthenticated')) {
        const user = await get(this, 'store').queryRecord('user', {});
        set(this, 'currentUser', user);
      }
    })
  });

The key pieces to notice are that we’re extending the original ember-simple-auth session service. We’re also injecting the store service, which lets us fetch our user through ember-data.

The actual logic we’re adding to the session service is in the setCurrentUser observer. This observer watches the isAuthenticated property, and fetches the user based on the auth header.

Here’s an example of accessing the currentUser through a component.

  import Ember from 'ember';
  const { service } = Ember.inject;

  export default Ember.Component.extend({
    session: service()
  });

Here’s the template:

  {{#if session.isAuthenticated}}
    {{! currentUser is available here as session.currentUser }}
  {{/if}}

Koa - Current User Middleware - 19 Jan 2016

Most applications on the web have some sort of authentication, and will want to associate a current user with a request. Here’s a quick example of adding that functionality to Koa 2.0 using middleware.

Depending on your application, you will need to tweak a few lines to get it working with your implementation. Notably, you’ll need to adjust the token pieces depending on how requests submit an authentication token to your server. You’ll also need to adjust how you find or load a user based on your ORM.

This example assumes the token is passed in a header that looks like:

  Token token="13p123p123n12pi3n1i31", email="test@example.com"

It also uses the Thinky ORM for RethinkDB. Here’s an example of the middleware that would go in: middleware/current-user.js

  const User = require('../models/user')

  module.exports = (app) => {
    app.use(async function (ctx, next) {
      // Get the Token from the Header
      const authHeader = ctx.request.header.authorization || ''
      const tokenParts = authHeader.match(/token="([a-z0-9]*)"/) || []
      const token = (tokenParts.length ? tokenParts[1] : null)

      if (token) {
        // Load the Current User
        const users = await User.filter({ token: token })
        const user = users[0]

        // Add the Current User to Request State
        ctx.state.currentUser = user
      }

      await next()
    })

    return app
  }
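The token-parsing step can be exercised on its own, using the example header from above:

```javascript
// Extract the token value from the authorization header format shown earlier
const header = 'Token token="13p123p123n12pi3n1i31", email="test@example.com"'
const parts = header.match(/token="([a-z0-9]*)"/) || []
const token = parts.length ? parts[1] : null

console.log(token) // → 13p123p123n12pi3n1i31
```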

You can then include this middleware in your application by adding a line like the following to app.js or index.js:

  require('./middleware/current-user')(app)

You can then access the current user in your routes with the following code:

  const currentUser = ctx.state.currentUser