ZarahioN Presents

Answering why

Laid-back

(Ko)Koa, I guess

Well, well, well. Here we go, Koa.js (forge-e-et about express)

To be fair, I have no idea where this is going, bu-u-ut I have a general idea to build something commerce-oriented, since it employs several important aspects of web apps: API integrations, forms, media and an admin/manager interface. (And current out-of-the-box solutions kinda suck in one regard or another, so I’ll be that cool kid that ~not~ fixes everything)

First off we’ll start by installing koa itself; after that I (and, I would strongly advise, you too) will spend some time sinking into one interesting concept that (Ko)Koa brings compared to Express:

Contrasting Connect’s implementation, which simply passes control through a series of functions until one returns, Koa invokes “downstream”, then control flows back “upstream”.
Cascading@Koa

If you’ve come from an Express (or Connect, for that matter) background, where you have a nifty little list like this:

app.use(log);
app.use(bodyparser);
app.use(multer);

You would expect all middleware to be executed in that order: we (set up to) log something, then fill in the parsed body attached to req, then get any files scanned and processed by multer (at least I hope that’s what this code does).

But Koa brought back the good times where you build a stack of middleware. Think Tower of Hanoi if you’re dumb and haven’t got proper CS basics: you add rings of middleware onto a stick and then go through them from the ground up and then back. Which is basically what the blockquote above said.
(With some exceptions, like when some ring randomly decides to stop your careful examination by punching you with a return)
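To make that flow tangible without even installing Koa, here’s a stripped-down sketch of what koa-compose does under the hood (an illustration of the idea, not Koa’s actual source):

```javascript
// A minimal middleware composer: each middleware gets a `next`
// that dispatches the one below it on the "stack".
function compose(middleware) {
    return function (ctx) {
        function dispatch(i) {
            const fn = middleware[i];
            if (!fn) return Promise.resolve(); // bottom of the stack, head back up
            return Promise.resolve(fn(ctx, () => dispatch(i + 1)));
        }
        return dispatch(0);
    };
}

const log = [];
const stack = [
    async (ctx, next) => { log.push('in 1'); await next(); log.push('out 1'); },
    async (ctx, next) => { log.push('in 2'); await next(); log.push('out 2'); },
];

compose(stack)({}).then(() => console.log(log.join(' -> ')));
// in 1 -> in 2 -> out 2 -> out 1
```

Down the stick and back up, exactly as the blockquote promised.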

So how does it look? Pretty interesting:

app.use(async (ctx, next) => {
    // start up logging 'in' and creating simple timeout promise
    console.log('in');
    const promise = new Promise(resolve =>
        console.log('Started promise') ||
            setTimeout(() =>
                console.log('Finished promise') || resolve(true), 1000
            )
    )
    // return false; //that's what I'm talking about angry rings that like to punch stuff; comment out that bad boy!
    await next(); // and now we pass the flag to next one in chain
    // we've got back to our first ring
    ctx.body = 'Flushed: ' + await promise; // and replaced body with awaited `true` from our previous `promise`
    console.log('out'); // we're outie!
})

app.use(async (ctx, next) => {
    // our next middleware, another log, another day
    console.log('setting body');
    ctx.body = require('util').inspect(ctx.req)
    // we're giving up control
    await next()
    // and receiving it right back since there's no more middleware
    console.log('out second')
    // our function ends and automatically resolves promise since it's an async one
})

By the way, if you aren’t familiar with modern JS ~callbacks~ I-I mean async-await ~promises~, GET OUT WHILE YOU STILL CAN!, cause it’ll be a proper ass blasting until you understand that every async function returns a promise and await lets you get the resolved value out of it.
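A two-line sanity check of that claim, for the doubters:

```javascript
// Every async function returns a promise, whether you like it or not:
async function answer() { return 42; }

console.log(answer() instanceof Promise); // true

// And `await` simply pulls the resolved value out, same as .then() would:
(async () => {
    const value = await answer();
    console.log(value); // 42
})();
```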

So, what does it return? As you should expect:

in
Started promise
setting body
out second
// wait a second
Finished promise
out

If you got the order wrong.. well, not like I can help you, but to reiterate:
Koa calls each used middleware in the order you attached them; then, when one finally returns (or there is nowhere else to run), control goes back through the whole chain, executing anything after the await-ed next(). If you have done some skinny dipping in asynchronous JS you’ll notice that it reminds you of something, something generat-ed.. yas, Koa 1.x did run on generators, but as of 2.x that doesn’t seem to be the case anymore.

Also, as you may have noticed, the middleware is littered with ES2017 stuff like async-await, which you technically can ditch (as long as you keep returning promises), which we can test right now:

app.use((ctx, next) => {
    console.log('in');
    const promise = new Promise(resolve =>
        console.log('Started promise') ||
            setTimeout(() =>
                console.log('Finished promise') || resolve(true), 1000
            )
    )
    return next()
        .then(() => promise)
        .then(promiseVal => {
            ctx.body = 'Flushed: ' + promiseVal;
            console.log('out');
        })
    // await next();
})

The other middleware function stayed the same and the whole code produced the same logging output (thinking about it, I guess I could have just appended it to the body, bu-u-ut):

in
Started promise
setting body
out second
Finished promise
out

Which should again be of no surprise – async-await after all is mostly cool syntactic sugar over a combo of promises and generators. I.e. each await is just a then-ned promise neatly unwrapped into a “regular” variable. (On the surface that is, I would happily guess that there are some shenanigans with generator next-ing going on)
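And those generator shenanigans roughly look like this: a tiny runner drives a generator that yields promises, feeding each resolved value back in. (A simplified sketch of the pattern, not the engine’s actual desugaring.)

```javascript
// A tiny "spawn" runner; `yield` plays the role of `await`.
function spawn(genFn) {
    const gen = genFn();
    return new Promise((resolve, reject) => {
        function step(result) {
            if (result.done) return resolve(result.value);
            Promise.resolve(result.value).then(
                value => step(gen.next(value)), // feed the resolved value back in
                err => step(gen.throw(err))     // or throw it into the generator
            );
        }
        step(gen.next());
    });
}

spawn(function* () {
    const value = yield Promise.resolve(41); // ~ await Promise.resolve(41)
    return value + 1;
}).then(console.log); // 42
```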

So.. where are we? I think I’ve been describing how to set up our cute and tasty (Ko)Koa, and I also think that’s enough for today.
I’ll be checking out router next and probably mapping out wtf we’re going to do with this.

Routing (Ko)Koa

So, we’ve figured out how (Ko)Koa middleware should work, right-o? I think so, at the very least.

And.. thinking about it I finally understand why it was so strange to see an empty server.js file and wonder what I’m missing. Thing is, I’ve been working with either WP, which is basic PHP, or Nuxt’s completely serverless SPAs (where the only thing we need a server for is pre-rendering a bunch of pages)

So, surprisingly, I haven’t been working with express or any Node servers for quite some time (not counting a strange adventure into a firebase-functions-supported monstrosity)

And here I am, staring at the mostly empty blanket of our “server”. Well, what are Node servers usually for? Erm…
Right! REST APIs (or whatever really, I’m not exactly sure)

And, surprisingly enough, we’ve got a stale dumb frontend project on Nuxt that I’ve even bundled with a blank (Ko)Koa server from the get go. But we won’t be delving into Nuxt and Vue topics here, and for a short while we’ll just run two servers on separate ports, joining forces once we’re done learning how to do the most basic stuff with just (Ko)Koa.

Since we are now going to build an API for “some other” “already built” application, we’ll have to switch subjects a little bit, from a generic commerce application to a media-oriented community platform with the addition of selling merchandise (cause I’ve been postponing payment integration for long enough)

And with this goal vanishing from mind, here thee go!

(Ko)koa-router

yarn add koa-router as ussl

Yet, we have to think just a lil more. I know it’s hard, but we have to persevere. So, we’re going to build an API, and we’ll (in the future, yas) be serving it alongside the public app, i.e. from that same domain, so it makes a little bit of sense to future-proof our efforts by nesting our future access points. Which means giving out our precious data not just on example.com/gimme/all/data but on example.com/api/v0/gimme/all/data. This way we’ll be able:

  1. as I’ve said, to not pollute the public scope so much,
  2. and to make drastic changes in the api, name it v+1 and abandon the vCURRENT ship without telling anyone, instantly breaking the old boys.

Which (Ko)koa-router allows via several routes:

  1. Direct use-ing:
var api = new Router();
var routes = new Router();

api.get('/', (ctx, next) => {...});
api.get('/titles', (ctx, next) => {...});
routes.use('/api/:api_ver', api.routes(), api.allowedMethods());

// responds to "/api/v0" and "/api/v2/titles"
app.use(routes.routes());
  2. Prefixing:
var api = new Router({
  prefix: '/api/v0'
});

api.get('/', ...); // responds to "/api/v0"
api.get('/titles', ...); // responds to "/api/v0/titles"

I do like both of them, tbh, but the former looks too complicated for our simple purpose of preventing easy access to the API. But it should prove to be of utmost help if we decide to create a custom taxonomy hierarchy or the likes.

For example, consider two routes /cats/mein-kun/37-steven and /dogs/husky/67-mike where cats, mein-kun, dogs and husky can all be custom taxonomies with several levels of hierarchy. Cool jazz, cool
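Under the hood koa-router delegates this kind of matching to path-to-regexp; a toy stand-in (a simplified, hypothetical sketch, not the real library) shows the core idea of turning :params into capture groups:

```javascript
// Turn a pattern like '/:taxonomy/:breed/:post' into a matcher
// that extracts named params, or returns null on no match.
function makeMatcher(pattern) {
    const keys = [];
    const source = pattern.replace(/:(\w+)/g, (_, key) => {
        keys.push(key);          // remember the param name...
        return '([^/]+)';        // ...and match one path segment in its place
    });
    const regex = new RegExp('^' + source + '$');
    return path => {
        const m = regex.exec(path);
        if (!m) return null;     // a router would fall through to the next route
        return keys.reduce((params, key, i) => {
            params[key] = m[i + 1];
            return params;
        }, {});
    };
}

const match = makeMatcher('/:taxonomy/:breed/:post');
console.log(match('/cats/mein-kun/37-steven'));
// { taxonomy: 'cats', breed: 'mein-kun', post: '37-steven' }
console.log(match('/way/too/many/segments')); // null
```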

So, I’ll guesstimate we’re kinda ready to start, only 56 lines in:

const Koa = require('koa');
const Router = require('koa-router');

const url = require('url');

let app = new Koa();

let api = new Router({
    prefix: '/api/v0'
});

api.get('/', async (ctx, next) => {
    ctx.body = {
        'what would we return?': false
    }
    await next();
});
api.use(async (ctx, next) => {
    await next();
    ctx.body = JSON.stringify(ctx.body, null, 2);
})

app
    .use(api.routes())
    .use(api.allowedMethods())

But you know I was just kidding, more thinking! Get me the drawing board!

As you should have noticed, I’ve instantiated Router with the prefix /api/v0, which obviously makes any requests to anything other than localhost/api/v0/* return 404. But I still haven’t decided what to return and from where, which points us towards deciding – WTF are we going to create? That scary question that requires us to stop punching balls and start deciding, or at least drawing:

Which brings us 3 base models, one of which isn’t necessarily important right now and kinda has to be fleshed out, but we’ll leave it as is until we need it. So, in terms of nice and dandy REST we (I think so?..) need a few HTTP methods to support each future model: GET and POST at the very least, and POST + part of the user’s GET should be protected.. ya… That’s a problem, but as I’ve said, thee persevere or thee die, or something like that.

So we should punch the keyboard some more and produce the likes of:

const BP = require('koa-bodyparser');
api.use(BP());

api.use(async (ctx, next) => {
    await next();
    ctx.body = JSON.stringify(ctx.body, null, 2);
})

let users = [{
    name: 'Pipik',
    role: 'supa-pupa',
    _accessLevel: 99,
},{
    name: 'Nyar',
    role: 'voicer',
    _accessLevel: 3,
    social: {
        vk: 'https://vk.com/nyanyar'
    }
}]

api.get('/user', async (ctx, next) => {
    ctx.body = users;
    await next();
})

api.get('/user/:id', async (ctx, next) => {
    let user = users[ctx.params.id];
    if (user)
        ctx.body = user;
    await next();
})

api.post('/user', async (ctx, next) => {
    // body should be parsed by the bodyparser we've attached earlier
    let data = ctx.request.body;
    if (data.name !== '') {
        users.push({
            ...data,
            _accessLevel: -1,
        });
        ctx.body = users.slice(-1)[0];
    }
    await next();
})

Remember I said some smarty stuff about securing users and etc? Nah, not going to happen this time. We’ll do that once passport is inbound though, but first we need to create and set up proper DB access; not that I wouldn’t try to store all our data in memory, but that usually causes unnecessary problems.

BS aside, what do we have in our code? Pretty simple stuff:
1-2: require and use of bodyparser, which will transform a JSON post body into a simple object that we can utilize,
4-7: our little JSON formatter,
9-20: our “data structure” with sample data; this one will be the first to go, since it’s useless to have RAM data storage without persistence in our case,
22-25, 27-32, 34-45: our basic CR~UD~. It’s of utmost simplicity since we’re not going to let it stay for long either; the main point of the current endpoints is to serve something live when we query them. But (and a big Butt, must say) their simplicity won’t change much – we’ll return all users on GET: /api/v0/user and we’ll add a user on POST: /api/v0/user, etc. We’ll spyce some security and checks in there, of course, but the gist should stay the same.

By the way, take a closer look at our JSON formatter: notice that it’s use-d almost first in our middleware chain, and notice its await next() as the first line – it makes it wait until Koa goes through the whole chain, gets a response from our endpoint and starts to go back, hitting our formatter and bodyparser a second time and then exiting the router.

Such a bi-directional call stack gives us interesting ways to manipulate and observe data in Koa, or to place our logic throughout execution time. I would extrapolate that such behaviour practically begs for error catching and logging solutions, out of all the things you can do with it.

Anyway! We’ll finish our routes for titles in a similar manner and finish up with this BS, cause I do want to take a look at mongoose, which I, being a monk, have been evading for quite some time already.

So in so, gotsa see thee later. And while waiting, you can check out complete codebase.

Mongoose that (Ko)Koa!

Ya know, I got thinking about this whole series about kokoa, whether it’s even needed – anyone can easily get the kokoa docs, get the router, write some code and get himself a functional server. Why write tutorials? What is the meaning of life? Etc.
In short, that’s the basic problem: I’m hesitant to check how many valid and useful kokoa tutorials are outie there. Why write another one? What do I bring to the table? They’re all identical; semantics change, but ultimately you get data and return data, and the way that happens goes the same every time, small semantic alterations aside. But yah, I’ll continue this BS for some time more, spyce it up with a ~basic~ boring mongoose observation and simple passport authentication and.. dunno, I guess that’ll be the finish of a short-lived and useless series. Anyho, have thee fun.

So, we’ll start off as ussl, by installing some new shiet, snakey-related of nature this time:

yarn add mongoose

an hour later

And then I went and did all the work. Yah.. it was actually fun; mongoose is strange, but it has interesting capabilities that monk ignored, and I’ve approached it with a prototype-oriented mindset (which affected the overall impression). Such tho, where do we start describing another monstrosity I’ve created?

I guess since I’ve decided to introduce our little friend, a bit of examination is required:

Mongoose is kinda strongly ~typed~ Schema-ed MongoDB adapter. Which brings both pain and joy, just like everything else in life. Let’s start with painful:

  1. Your flexibility on what you put in gets theoretically a bit limited (I haven’t thoroughly checked whether mongoose allows saving and getting non-Schema-defined fields or temporary/dynamic variations of a Schema/Model),
  2. There’s now a bunch more stuff to take care of – instead of Collection.find({ xxx: true }) we have new Model( new Schema({ xxx: Boolean, stuff: { type: String, required: true }}) ).find({ xxx: true }),
  3. And a bunch of classes to get: Model, which invokes Query, which returns Document(s), sometimes “skipping” Query altogether, and which (Model) is created from a Schema. They obviously have their respective places, but it took me a good 15 minutes to grasp the whole concept: Model returns Query, just in case we want to adjust it with filters/aggregations and the likes, which resolves into Document(s?) on the final such adjustment. This is cool, but takes time to adjust to,
  4. Incomplete validation logic: in the first ~half an hour working with mongoose I noticed that it lacks unique validation – I get it, mongoose doesn’t want to query mongo on each validation, but they still throw the Mongo adapter’s error instead of handling it in their usual way. They even state that they don’t support it and it is expected behaviour; but the problem persists, and you either solve it via plugins, live with it or just don’t bother,
  5. appendix: the Map type doesn’t seem to validate correctly as of mongoose@5.3.15 (declaring of as { type: String, match: /abc/ } doesn’t throw no matter what pattern or input is used)

Still, those pains (except for 5th one and probably 4th one, I would guess) are expected, since mongoose is geared towards robust and strict environment where it’s not that problematic to have boilerplate code in project whose only purpose is to protect codebase from accidental and avoidable errors.
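As an example of living with gotcha #4: the Mongo driver reports a violated unique index with error code 11000, so one common workaround is a small translator that turns that raw error into your API’s own error shape. (A hypothetical sketch; the output field names are made up.)

```javascript
// Normalize a raw Mongo duplicate-key error into a friendlier shape.
function normalizeDbError(err) {
    if (err && err.code === 11000) {
        return { error: true, kind: 'unique', details: err.message };
    }
    return { error: true, kind: 'unknown', details: String((err && err.message) || err) };
}

// Roughly what the driver throws on a duplicate insert:
const dup = {
    code: 11000,
    message: 'E11000 duplicate key error collection: test.users index: name_1'
};
console.log(normalizeDbError(dup).kind); // unique
console.log(normalizeDbError(new Error('boom')).kind); // unknown
```

You’d call this in the .catch() around doc.save(), right where the adapter’s error surfaces.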

And now we can go into the pleasant part:

  1. Not exactly to my taste, but a typed Schema is a boon – you know what each field should be (this kinda heavily calls for introducing TypeScript),
  2. Stemming from the previous – a typed Schema allows for proper straightforward validation logic: you have built-ins for base types (enum, minlength, maxlength and, importantly, match just for Strings), which already allows extending it to custom types like URLString, etc.,
  3. Stemming even further – now you have two separate steps (which are handled ~automatically~): validation and saving to db (not counting the unique ~problem~ gotcha mentioned, which kinda ruins it, yah),
  4. The innate separation of concerns is much higher than monk‘s, where you can do the same thing, but here it is mostly (at least mentally) enforced (cause you’ll probably be fucked otherwise)

I’ll update both lists if need be, but that’s all I can find right now.

Now the interesting part: would I go back to being a monk? Probably. They are different and their goals are different. Monk is a versatile multi-tool that you can do anything with, while mongoose is a very good hammer: you can hit nails very well, but when you try different approaches you’ll start having a hammer time.
(By the by, above is few hour’s experience and expectations BS)

But, theories aside, let’s delve into code. How does our route look now?

const User = require('../../../model/user')

let api = new Router()

api.get('/user', async ctx => {
    ctx.body = await User.find()
})

api.get('/user/:id', async ctx => {
    let user = await User.findOne(ctx.params.id)
    if (user)
        ctx.body = user
})

api.post('/user', async ctx => {
    let data = ctx.request.body
    let user = await User.insertOne(data)
    ctx.body = user
})

As I’ve said – it still looks mostly the same. Which is of no surprise – all our API does right now is serve data almost directly from the DB. Things will get a bit more interesting with authentication, but even then modern practices, middleware and nesting ruin all the fun -> we attach authentication middleware, set some routes as guarded and voila, nothing changed, and we just expect to get only authorized requests on some routes.
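Just to make “guarded routes” less hand-wavy, here’s a hedged sketch of what such a middleware could look like (ctx.state.user and _accessLevel are my assumptions here; real auth would come from passport or similar later on):

```javascript
// A guard factory: returns middleware that stops the chain unless
// ctx.state.user clears the required access level.
const guard = minLevel => async (ctx, next) => {
    const user = ctx.state.user;
    if (!user || user._accessLevel < minLevel) {
        ctx.status = 403;
        ctx.body = { error: 'forbidden' };
        return; // the "punch with return": nothing below this ring runs
    }
    await next();
};

// Simulated contexts -- no Koa needed to see the behaviour:
const run = ctx => guard(3)(ctx, async () => { ctx.body = 'secret'; });

const anon = { state: {}, status: 200 };
const admin = { state: { user: { _accessLevel: 99 } }, status: 200 };

run(anon).then(() => console.log(anon.status));  // 403
run(admin).then(() => console.log(admin.body));  // secret
```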

Still, we’ve swapped our boring arrays for some User from .../model. What is it?

const mongoose = require('mongoose')
const BaseModel = require('./base')

const roles = [
    'supa-pipik', //super-admin, duh
    'pipik', // admin
    'voicer',
    'editor',
    'moderator',
    'timer',
    'pipiker', // follower
]

const urlStringType = {
    type: String,
    match: /^https?:\/\/[^.]*\.[^.]*$/
}

const UserSchema = new mongoose.Schema({
    name: { type: String, required: true, unique: true },
    role: { type: String, enum: roles },
    _accessLevel: { type: Number, min: -99, max: 99 },
    social: { type: Map, of: urlStringType }, // which doesn't seem to be validating
})

UserSchema.index({ name: 1 }, { unique: true, name: 'uniq names' })

class User extends BaseModel {
    constructor () {
        super('user', UserSchema)
        this.roles = roles
    }
    find (conditions = {},
        projection = {
            name: true,
            social: true,
        }, options = {

        }) {
        return this.model.find(conditions, projection, options)
    }
}

module.exports = new User()

Which is an instance of our extended ES6 class User! Yay! So many new words.

Which means we’ve exported an instantiated (and, as it turns out, not duped on every require, since Node caches modules) Object of the User ~prototype~ “class”, which inherits an amount of stuff from BaseModel, which I’ll show a bit later; let’s first dissect the current bad boy:

  • 3-11: an array of our roles, which we use as an enum validator for mongoose, and share through the model (User) in case we need to provide it somewhere in our app (line #30),
  • 13-16: an unsuccessful attempt to create validation for Map values; as I’ve appended earlier, mongoose doesn’t seem to be capable of that right now,
  • 18-23: the Schema! which is pretty simple: you have the field as key and type as value, optionally as an object with the key type in case you want options and validators,
  • 25: this should be automatic because of the unique option in name’s field definition, but I haven’t seen any index created, so I’m not exactly sure how mongoose handles it as of now; I’ll check it out later,
  • 27-48: our class! yay!
    • 28-31: the constructor, which invokes BaseModel‘s constructor via super() (which basically translates into something along the lines of this = new BaseModel(), but much prettier and, I hope, more robust); and then we just attach roles to our User model,
    • 32-40: a definition that’s PENDING_REMOVAL: basically .find() sets a default projection to apply to the mongoose.Model.find call, but this implementation takes 8 lines just to default the projection AND calls the model directly. Which is fine now, but when we, hopefully, start to implement error and validation logic, we would have to duplicate all that common code, which is utter bs (so we should, I assume, pass the default projection to super() and let it handle it, together with the mongoose.model operations, unless we need some specific method with some specific predefined arguments, which is highly improbable),
  • 43: the export, which instantiates User exactly once – Node caches the module, so every require gets the same Object (surprising, I must say). Instantiation on export is needed exactly to avoid duplicate models across different require-s – we need only one Model to interact with the DB, and I don’t see any reason to dupe them

Well, phew. There’s not a lot of complex stuff going on, but the most important parts for User are the Schema + roles; most other stuff is handled by BaseModel. Speaking of which:

const mongoose = require('mongoose')
const connect = require('./connection')
/*@connection.js
var connection = null
const connect = () => {
    if (!connection){
        connection = mongoose
            .connect('mongodb://localhost:27017/Anipipik', {
                useNewUrlParser: true
            })
    }
    return connection
}
*/

class BaseModel {
    constructor (name, Schema) {
        // if (new.target === BaseModel)
        //  throw new Error ('BaseModel is abstract')

        this.connection = connect().then(a => this.connection = a)
        this.model = mongoose.model(name, Schema)

        this.name = name
        this.Schema = Schema

        return this
    }
    findOne (id) {
        const doc = this.model.findById(id)
        return doc
    }
    async insertOne (document) {
        let doc = new this.model(document)
        let result = await doc.validate().then(() => doc.save())
            .catch(err => console.error(err) || ({
                error: true, details: err
            }))
        return result
    }
    find () {
        return this.model.find(...arguments)
    }
}

module.exports = BaseModel

So simple (and there’s even a find already, so we should kick the one in User out). So, remember that User calls super with its name and Schema? They go right into the BaseModel constructor, get created as a mongoose.model and attached to this together with name and Schema, which is then attached back to User (technically to BaseModel too, but it’s not supposed to be instantiated by itself, and there’s even a foolproofing to implement – check out this SO for more info – which kinda makes BaseModel an abstract class: a template for other classes to inherit, so to speak)

Thinking about it at a later date – BaseModel isn’t abstract, it’s a common class (which is kinda the same): you can instantiate a common model with a provided Schema for, let’s say, temporary data that should be cleaned up afterwards. It’s a stretch, I know, but it can be used for any amount of data where we don’t really need any additional logic – like User right now, if we count roles out.

Another note is this.connection – it’s a side effect of mongoose: it can instantiate models, run queries, etc, without connecting to the DB; all interactions just get batched and are finally released once we have an established connection. That’s why we need to instantiate the connection somewhere, and since we’re going to need it once we have our models up and running, this seems a pretty nice place.

But if we created a new connection on each new Model, it could easily lead to unpredictable behaviour and, again, multiple points to handle errors etc.

Which is why we have the connection creator – it keeps a reference to the connection and exposes a require-able function to get that reference. I’m not exactly sure whether it’s the most appropriate pattern for such a use case, but it works fine for now, and I’m too lazy to verify.
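Stripped of mongoose, that connection creator is just the lazy singleton pattern, which is easy to verify in isolation:

```javascript
// Build a getter that creates the resource on first call and
// hands back the same reference afterwards.
function makeLazy(create) {
    let instance = null;
    return () => {
        if (!instance) instance = create();
        return instance;
    };
}

let created = 0;
const connect = makeLazy(() => ({ id: ++created }));

console.log(connect() === connect()); // true, created once and shared
console.log(created); // 1
```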

Other than that, BaseModel just serves as mongoose wrapper – creates models out of schema, wraps mongoose’s DB accessors (find, insert, etc), and later on should serve as common grounds for more complex queries, projection handling and error reporting, among some other possible stuffs.

At which point we could conclude our dip into mongoose grounds, but it has another very neat (I hope) feature – populate-ing Documents based on Schema references. So that’s what I’m gonna do:

const UserType = { type: mongoose.Schema.ObjectId, ref: User.name}

const TitleSchema = new mongoose.Schema({
    name: { type: String, required: true },
    date_aired: Date,
    episodes: [{
        name: String,
        number: { type: Number, min: 1 },
        video_iframe: urlStringType,
        video_url: urlStringType,
        date_aired: Date,
        date_voiced: Date,
        uploaded_by: UserType
    }],
    voicers: [UserType],
})

Since we don’t yet have any special alterations to Title, no reason to include anything other than Schema.

Our main attraction is on line 1: ref: User.name (and the reason why we attach the model name to its instance), which makes mongoose keep in memory that anything (in our case ObjectId) stored in the Title collection’s voicers array should bind to entries in the User.name collection. (You could substitute collection with mongoose.model, but most often they are synonyms here)

Surprisingly simply, it allows us to just change the retrieving code from await Title.find({}) to await Title.find({}).populate('voicers') and voila, now we have the whole User data in place of the _id. If the data is in the respective collection, that is; otherwise it’s kinda null, or more like an empty result without even the stored id, so be careful when deleting related data.
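What populate does can be shown in miniature with plain arrays standing in for collections (a sketch of the concept, not mongoose’s implementation), including the null you get for a dangling reference:

```javascript
// Stand-in "collections":
const users = [
    { _id: 'u1', name: 'Nyar' },
    { _id: 'u2', name: 'Pipik' },
];

// Replace stored ids in `field` with matching docs from `collection`.
function populate(docs, field, collection) {
    const byId = new Map(collection.map(d => [d._id, d]));
    return docs.map(doc => ({
        ...doc,
        [field]: doc[field].map(id => byId.get(id) || null), // dangling refs become null
    }));
}

const titles = [{ name: 'Show', voicers: ['u1', 'u3'] }]; // 'u3' doesn't exist
console.log(populate(titles, 'voicers', users)[0].voicers);
// [ { _id: 'u1', name: 'Nyar' }, null ]
```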

Also, I doubt that mongoose handles any SQL-like constraint logic, so you’ll have to clean up by yourself.

I took a long 15-minute look through the mongoose docs on populate and while I found a lot of interesting use cases, there’s nothing about constraints and auto-cleanup.

And I must say, even with its quirks, I do like the “magic” mongoose provides. In short, later we’ll probably take a look at multiple populate-ions (populating multiple fields), dynamic references (populate-ing based on user-defined values), and virtual references (“liquid” fields that exist only after population, based on logical querying (?))

Ah, good relational times.

On this nostalgic note we finally finish up. We’ll be covering some nooks and crannies of mongoose at a bit later date, and nuxt time we’ll be setting up our basic admin interface (probably in-page) on Nuxt-Vue, cause writing templates in ES6(7?) syntax is cool, but sucks a load whatever anyone says.

Code as ussl available at git.