State Management: ngrx/store vs Angular services

ngrx vs services

State management is one of the most difficult tasks in front-end development. In Angular2 we are presented with a number of options. The most promising seem to be holding reusable state in services versus using ngrx/store, a library specifically designed to help with state management. Here are my current views on the pros and cons of each approach.

Pros of using ngrx/store

  • You always know where all your state is
  • Everything is an observable => more flexibility to use stores
  • You get dev tools such as time-travel
  • It could become a standard
  • There are some examples

Cons of using ngrx/store

  • The examples don’t cover real world cases like having an actual database with items with ids.
  • Everything is an observable => more complexity in use
  • It could become an outdated library
  • Difficult to learn
  • Name-spacing is ugly, e.g. '[heroes] UPDATE'.
  • The idea of immutability is nice, but the example contains the comment //remember to avoid mutation within reducers. Having to remember is not so nice: it makes the reducers fragile, and it’s easy to introduce sneaky bugs, since nothing guarantees the immutability.
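To make that last point concrete, here is a hypothetical hero reducer (made up for illustration, not taken from the ngrx docs) showing how easily convention-based immutability breaks:

```typescript
// Hypothetical hero state and reducers, to illustrate how fragile
// convention-based immutability is.
interface Hero {
  id: string;
  name: string;
}

interface State {
  heroes: Hero[];
}

// Buggy: mutates the incoming state. Nothing in the type system or in
// the store library prevents this; change detection and time-travel
// debugging silently break later.
function heroesReducerMutating(state: State, newHero: Hero): State {
  state.heroes.push(newHero); // sneaky mutation
  return state;
}

// Correct: returns a fresh object; the old state stays untouched.
function heroesReducerImmutable(state: State, newHero: Hero): State {
  return { heroes: [...state.heroes, newHero] };
}
```

One mitigation is to Object.freeze the state in development builds, which turns the sneaky mutation into a loud error instead of a silent bug.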

Pros of using services as your store

  • No extra library needed, it’s core Angular
  • Complete flexibility in use
  • You can have stores that aren’t observables
  • You can throw immutability overboard
  • No name-spacing problems, as every resource has its own service

Cons of using services as your store

  • No-one tells you how to do it, no standards

Conclusion

I still prefer the manual approach with services. Being able to shape your data-store as you need it adds a lot of flexibility and simplicity. Often you don’t need the entire store to be an observable, and you can easily extend your store-services to make them observable once you need that feature. For example, I’d rather have a store with the signature

heroStore: {[heroId: string]: BehaviorSubject<Hero>}

than the signature

heroStore: BehaviorSubject<{[heroId: string]: Hero}>

As long as I don’t need something like the total number of heroes, the first signature is completely sufficient. And in case I suddenly need the store as an observable somewhere in my app, I can still easily change the service.
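To make the comparison concrete, here is a dependency-free sketch of the first signature. MiniSubject is a hypothetical stand-in for RxJS’s BehaviorSubject, just to keep the example self-contained; it also shows how the store can later be extended with an aggregate (a hero count) once that feature is needed:

```typescript
// Minimal stand-in for RxJS's BehaviorSubject, so this sketch has no
// dependencies. In a real Angular app you'd use BehaviorSubject directly.
class MiniSubject<T> {
  private listeners: Array<(value: T) => void> = [];
  constructor(private value: T) {}
  getValue(): T { return this.value; }
  next(value: T): void {
    this.value = value;
    this.listeners.forEach(fn => fn(value));
  }
  subscribe(fn: (value: T) => void): void {
    fn(this.value); // BehaviorSubject semantics: emit the current value
    this.listeners.push(fn);
  }
}

interface Hero { id: string; name: string; }

// First signature: one subject per hero.
const heroStore: { [heroId: string]: MiniSubject<Hero> } = {};

// Later extension: a derived "hero count" subject, updated on insert.
const heroCount = new MiniSubject<number>(0);

function insertHero(hero: Hero): void {
  heroStore[hero.id] = new MiniSubject(hero);
  heroCount.next(Object.keys(heroStore).length);
}
```

The point is that the aggregate observable can be bolted on when the need arises, without rewriting consumers of the per-hero subjects.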

Also, the mistakes in the docs and an example app that’s far from real-world scare me a bit. To me this suggests too little dev power for a project this big, and a concept that’s not guaranteed to be ideal. Combining the principles of Redux with observables is an interesting idea, but maybe it’s a little overkill?

Please don’t use MongoDB (or any other NoSQL database) for your web-app.

There isn't a single good reason to primarily use a NoSQL database for your web-application.

I’m going to crush all your arguments here. It’s a general argument against NoSQL in web-apps, but the code is going to be mostly MongoDB-related, since that’s what I have the most experience with. Also note that I don’t say there is absolutely no use-case for a NoSQL database. I’m just saying your app should be backed by an SQL database to store things like your registered users and so on. In the unlikely event that you really have a special use-case, use a separate NoSQL database for it, but don’t put your relational data in there too! So now let’s move on to the crushing.

 

“NoSQL can be faster than SQL because it doesn’t have to do table-joins”

That’s true, but there is a well-known and well-understood solution to this problem. It’s called caching. Let me elaborate.

Let’s say we have to load the following view a gazillion times a day:

Display of Message Data-Structure

It’s similar to a typical forum which most webapps have in some form.

Then the argument for NoSQL would be that you could store everything denormalized as

{
  content: "bla bla",
  user: {
    name: "Jeremy Barns",
    ...
  },
  comments: [
    {content: "bla", user: {name: "Jonas Blue", ...}},
    {content: "blarb", user: {name: "Jeremy Barns", ...}}
  ]
}

Now in our cool web-app we could fetch all the information we need without any table-joins, just by extracting one document from the database. Super fast.

Of course, we’ve introduced a problem with this: we have the same user in multiple locations. So if Jeremy Barns decided to change his name, we’d have to update it on every message he ever made, a true nightmare. That’s why there are relational databases: you can insert an id instead of the whole user, which solves a lot of consistency problems. Of course you could do the same with Mongo:

{
  content: "bla bla",
  user: "58e5ee14d37470005df49bcb",
  comments: [
    {content: "bla", user: "50e5ee14d36470005cd66waf"},
    {content: "blarb", user: "58e5ee14d37470005df49bcb"}
  ]
}

and then in your application code you query the Message and from the Message you query the users, but at that point it already looks awfully much like a relational database. And you still haven’t solved all the problems. What if you now need a feature like “delete a user and all of his related data”? Easy and consistent with a relational database. A manual, error-prone crawl with NoSQL.
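To illustrate with plain TypeScript and hypothetical in-memory data: with embedded users, “delete a user and all of his related data” means crawling every document yourself, and any document you miss stays inconsistent. In a relational database, a foreign key with ON DELETE CASCADE handles this reliably for you.

```typescript
// Hypothetical denormalised documents, as they'd be stored in Mongo.
interface EmbeddedUser { id: string; name: string; }
interface MessageComment { content: string; user: EmbeddedUser; }
interface Message {
  content: string;
  user: EmbeddedUser;
  comments: MessageComment[];
}

// Deleting a user requires scanning every message AND every embedded
// comment by hand. Forget one collection, one field, one code path,
// and the data is inconsistent.
function deleteUserEverywhere(messages: Message[], userId: string): Message[] {
  return messages
    .filter(m => m.user.id !== userId) // drop the user's own messages
    .map(m => ({
      ...m,
      comments: m.comments.filter(c => c.user.id !== userId),
    }));
}
```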

So you say “well, we’ve got to choose between speed and consistency!”. Wrong. Let’s say we implement the following data model:

interface Message {
  content: string;
  user: User;
  comments: Message[];
}

interface User {
  name: string;
  ...
}

where we use the same type “Message” for questions and replies. Now, to build our view above one million times, would we have to perform all of those table-joins one million times:

message -> user
message -> message -> user (n times, where n = #comments)

?

First of all, joins on properly indexed columns aren’t actually that bad; SQL databases have highly optimised algorithms to make those joins fast. Second, we can still implement caching solutions if the joins really pose a problem. Now, I’m not saying caching is all that simple (cache invalidation is a topic of its own), but at least it’s a solution that holds water. Caching here basically means building something like the denormalised data. Ah, so that’s where you could use Mongo: as your caching layer! And if something goes wrong with the consistency, you can just clear the cache. You can’t do that if the cache is your database!

And what if suddenly this view isn’t important to your business at all anymore? You’d rather display it like this:

Jeremy's Messages

Awww, snap! Those business people changed everything, but you have a database full of data optimised for another case! Denormalising data means committing to certain joins. So the ideal strategy, giving you both performance and reliability, would be:

  1. Use a relational database to store data normalised
  2. Use caching if there is a bottleneck of joins somewhere

Furthermore: while business requirements will most certainly change, it’ll probably take a while for your “new cool web-app” to actually reach a stage where this sort of caching is necessary.

 

“SQL doesn’t scale horizontally”

Not out of the box, but once you reach a certain size you can easily switch to something like Amazon Aurora (compatible with PostgreSQL and MySQL), which supports up to 15 read replicas. And once you outgrow that (you’re probably a billionaire by now), you can still implement caching solutions (see above).

And what about writes? Well, in SQL as well as in NoSQL databases you will have one master for writes. Classically, the first thing to do is to scale the writing instance up vertically. Once that fails, you can check where the bottleneck is, e.g. move server logs to Redis (offloading). And once that fails, you can still use sharding. While MongoDB has an automatic mechanism for this, with SQL solutions you’ll have to take care of the sharding logic yourself. However, that’s not so bad: by the time this happens, you’re probably also at a size where defining the sharding logic yourself brings performance gains.
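Hand-rolled sharding logic can start out as simple as a deterministic mapping from a shard key to an instance; a hypothetical sketch in TypeScript:

```typescript
// Deterministic shard routing: the same user id always maps to the
// same shard, so reads and writes agree on the location. The hash is
// a simple 31-based string hash; a real setup might use consistent
// hashing to ease re-sharding.
function shardFor(userId: string, shardCount: number): number {
  let hash = 0;
  for (let i = 0; i < userId.length; i++) {
    hash = (hash * 31 + userId.charCodeAt(i)) >>> 0; // keep it in 32 bits
  }
  return hash % shardCount;
}
```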

 

“In SQL, adding new columns to large tables can lock / slow down the database”

When you’re at the multi-million-rows-per-table level of an application, you can use a solution like Amazon’s Aurora. This system doesn’t slow down when columns are added. From the docs:

In Amazon Aurora, you can use fast DDL to execute an ALTER TABLE operation in place, nearly instantaneously. The operation completes without requiring the table to be copied and without having a material impact on other DML statements. Because the operation doesn’t consume temporary storage for a table copy, it makes DDL statements practical even for large tables on small instance types.

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Managing.html

Actually, you can start with Amazon Aurora right away, as it isn’t really more expensive than a regular database server.

 

“MongoDB is more reliable / available with its Replica Sets than a classical SQL DB!”

Again, ever heard of something like AWS? Amazon (or other cloud RDB providers) takes care of availability for you, so you don’t need to worry about that.

 

“MongoDB is free and you can run it on cheap Linux systems”

The server costs probably don’t factor in much next to the love, sweat and time put into your project. Having said that, a simple MongoDB production setup on AWS runs around $65/month (the smallest MongoDB Atlas cluster); by setting it up yourself, you won’t really end up cheaper than this. A simple Aurora production setup is around $40/month.

 

“But I have no schema! I have truly unstructured data!”

Really? You don’t have users logging in, connecting with something? Even if you don’t right now, sooner or later you will have some relations in any meaningful web-app. I’m still waiting for the use case where there is literally no structure in the data of a web-app.

 

[With Mongo] you can choose what level of consistency you want depending on the value of the data (e.g. faster performance = fire and forget inserts to MongoDB, slower performance = wait til insert has been replicated to multiple nodes before returning) – source

You probably don’t have “fire and forget” data. Which data would that be? I can’t think of any. Users interact with your system; that’s what generates the data as well as the business value of your system. What data would be so worthless that you could afford to lose it? A user post? Certainly not. A user changing their privacy settings? Better not.

 

“With MongoDB you get full-text search”

While that may be true and will help you get your full-text search started, it’s not that hard to set up a search engine like Lucene or Elasticsearch. We actually chose MongoDB for one project for exactly this reason, but we quickly outgrew its capabilities and switched to Lucene, while remaining stuck with Mongo for all our relational data.

Conclusion

Disadvantages of NoSQL:

  • No consistency
  • No transaction safety / No rollbacks
  • Less flexibility in querying

Advantages:

Yet to be found.


If you liked my rant, don’t forget to follow me on facebook or twitter and share this post with others thirsty for NoSQL criticism!

5 Funny Computer Science Jokes

Why do Java programmers wear glasses?

Because they can’t C#.

SQL

A SQL query goes into a bar, walks up to two tables and asks, “Can I join you?”

3 guys

A physicist, an engineer and a programmer are driving down a steep mountain pass when suddenly the brakes stop working. After some shouting and near-death experiences, they finally manage to bring the car to a halt. The physicist says: “Let’s check the temperature of the brakes and test the friction, maybe then we can figure out what went wrong.” The engineer goes: “Let me just get some straps, I can fix it.” The programmer just gently shakes his head and says: “Guys, that’s not how it works. Let’s first do it again and check if it’s reproducible!”

OOP Joke

Q: What’s the object oriented way to become rich?
A: Inheritance

[“hip”, “hip”]

(hip hip, array!)

Managing State in Angular 2+

The hardest thing in a growing single-page application is managing state.

This problem has always been hotly discussed in the Angular community, but few standards have emerged. Most tutorials are too far from real-world applications to capture the essence of the problem, and only simplistic “parent-child” interactions are discussed. As an application grows, components become more deeply nested, and we need solutions to handle interactions between components that are more distant from each other than “parent-child”.

The best approaches seem to me to be storing some state in services or using a state management library. The most prominent exponent of such a library would be ngrx/store (https://github.com/ngrx/store). If you’re having trouble managing state, you are not alone; other people have already had this problem and dealt with it. Unfortunately, those people just don’t seem to be the ones from the Angular team, which still promotes its “hello-world” data-binding mechanisms.

I’m not alone with this opinion; Kyle Cords thinks similarly in his excellent summary on managing state in Angular2+. Exactly as in my experience, he describes how the application gets messier as the codebase grows with the classical data-binding mechanisms you find in all Angular tutorials. The way I see it, the tutorials mostly mention those techniques for two reasons:

  • They are a tiny bit simpler to demonstrate in a small demo app than other methods.
  • They are a “Unique Selling Point” for Angular. If they suggested using something like ngrx/store, they’d essentially be saying “why don’t you just switch to React & Redux, here’s a link to get you started”.

Now whether it’s best to use ngrx/store or to use the mechanisms provided by Angular (services) is subject to debate. Here’s a list of pros and cons for each: State Management: ngrx/store vs Angular services

In case you want to take the manual path with services, here are some of my thoughts on the subject.

Update resources with an id

Typically, behind every application, we have a database. And typically, this database has different tables (or collections, or whatever you want to call them), and all of the entries in a table have an id. For example, behind the Angular2 Heroes tutorial we’d have a database with a heroes table where each hero is stored with its own id. Almost every application is structured this way. What’s very often the case in a real-world application is that we want to change the data in the database as well as in our entire application. There could be, for example, an input field to change a hero’s name, and when clicking the button “Change”, we’d send the change-request to the server (database). When the server responds with “200 OK”, we want to change the hero’s name application-wide. How would we go about that? The answer is: observables.


Since “ResourceName + ResourceId” is always unique for each object in our application, we can easily build a frontend datastore based on this. The basics of the ResourceStore service, or more specifically here a HeroStore (based on the Angular hero tutorial series), are:

import { Injectable } from '@angular/core';
import { Hero } from './hero';
import {BehaviorSubject} from 'rxjs/BehaviorSubject';

@Injectable()
export class HeroService {
  private heroStore: {[heroId: string]: BehaviorSubject<Hero>} = {};

  insertHero(hero: Hero): void {
    this.heroStore[hero.id] = new BehaviorSubject<Hero>(hero);
  }

  updateHero(hero: Hero): void {
    this.heroStore[hero.id].next(hero);
  }

  getHero(heroId: string): BehaviorSubject<Hero> {
    return this.heroStore[heroId];
  }
}

This way, everywhere in the application you can listen for changes on those observables. There is one catch though: you shouldn’t forget to unsubscribe when the component is destroyed, otherwise you’ll gather loads of dead listeners on your observables over time. Here’s how it’s done:

import { Component, OnDestroy, OnInit } from '@angular/core';
import 'rxjs/add/operator/takeUntil';
import { Subject } from 'rxjs/Subject';

import { MyThingService } from '../my-thing.service';

@Component({
    selector: 'my-thing',
    templateUrl: './my-thing.component.html'
})
export class MyThingComponent implements OnDestroy, OnInit {
    private ngUnsubscribe: Subject<void> = new Subject<void>();

    constructor(
        private myThingService: MyThingService,
    ) { }

    ngOnInit() {
        this.myThingService.getThings()
            .takeUntil(this.ngUnsubscribe)
            .subscribe(things => console.log(things));

        this.myThingService.getOtherThings()
            .takeUntil(this.ngUnsubscribe)
            .subscribe(things => console.log(things));

    }

    ngOnDestroy() {
        this.ngUnsubscribe.next();
        this.ngUnsubscribe.complete();
    }
}

(courtesy of Stack Overflow)

Here is the Hero-Tutorial adapted to use observables instead of ngModel:

Angular Heroes

The essential code bits behind this example are the hero.service:

import { Injectable } from '@angular/core';
import { Hero } from './hero';

import {BehaviorSubject} from 'rxjs/BehaviorSubject';

@Injectable()
export class HeroService {
  private heroStore: {[heroId: string]: BehaviorSubject<Hero>} = {};

  insertHero(hero: Hero): void {
    this.heroStore[hero.id] = new BehaviorSubject<Hero>(hero);
  }

  updateHero(hero: Hero): void {
    this.heroStore[hero.id].next(hero);
  }

  getHero(heroId: string): BehaviorSubject<Hero> {
    return this.heroStore[heroId];
  }
}

as well as the subscribing entities (here: app.component and hero-detail.component)

//app.component.ts
...
ngOnInit() {
    HEROES.forEach(hero => {
      this.heroes.push(hero);
      this.heroService.insertHero(hero);
      this.heroService.getHero(hero.id).subscribe(updatedHero => {
        for (let i = 0; i < this.heroes.length; i++) {
          if (this.heroes[i].id === updatedHero.id) {
            this.heroes[i] = updatedHero;
          }
        }
      });
    })
  }
...

//hero-detail.component.ts
...
ngOnInit() {
    const heroObs = this.heroService.getHero(this.heroId);
    this.hero = heroObs.getValue();
    heroObs.subscribe(hero => {
      this.hero = hero;
    });
  }
...

 

Here, the heroes are fetched from a constant HEROES, which is turned into observables on insertion. The library used for observables is RxJS, nowadays the de-facto standard observable library.


Admittedly, at first it may look a bit more complicated to use observables than ngModel, especially when you’re used to ngModel from all the tutorials and Angular 1’s two-way binding. But as your application grows, this scales nicely, as it guarantees consistent data across your application.

Talk to all components of the same name

For example, if our component structure were:

hero-list > hero > delete-button

Then how could the delete-button tell the hero-list to delete a hero? Does it really have to tell its parent (hero) “please delete this hero”, while the parent is not allowed to do so either, so the parent in turn needs to ask the hero-list “please delete this hero”? As you can see, this becomes more annoying as components become more nested. Is there really no way for components to communicate directly with one another? Can’t the child tell the grandparent directly to please delete the hero?

Namespaced Broadcast Service

The solution to this problem is, again, observables. Let me elaborate. Basically, you can create a service like the following:

import { Injectable, EventEmitter } from '@angular/core';

@Injectable()
export class BroadcastService {

  public heroList = {
    deleteHero: new EventEmitter()
  };
  constructor() { }

}

The structure of this service encapsulates the magic behind this approach. It avoids name-spacing problems by naming the properties of the BroadcastService after the components that should listen to the events. Each of those properties then holds the available EventEmitters.

In the child component, you can simply inject the BroadcastService and emit a new event:

import {Component, Input, OnInit} from '@angular/core';
import {Hero} from "../hero";
import {HeroService} from "../hero.service";
import {BroadcastService} from "../broadcast.service";

@Component({
  selector: 'app-delete-hero',
  templateUrl: './delete-hero.component.html',
  styleUrls: ['./delete-hero.component.css']
})
export class DeleteHeroComponent implements OnInit {

  constructor(
    private heroService: HeroService,
    private broadcast: BroadcastService
  ) { }

  ngOnInit() {
  }

  @Input()
  hero: Hero;

  public deleteHero() {
    this.heroService.deleteHero(this.hero.uid).then(resp => {
      this.broadcast.heroList.deleteHero.emit({
        heroId: this.hero.uid
      });
    });
  }


}

and the great-grandparent (or whoever!) can subscribe to those events

...
ngOnInit() {

  //set up listeners
  this.broadcast.heroList.deleteHero.subscribe(evt => {
    this.heroes = this.heroes.filter(hero => hero.uid !== evt.heroId);
  });

  ...

}

Where is the downside of this approach? The child will not tell something directly to its grandparent: other components with the same name are also listening. So, for example, if we were to feature the hero-list twice on the same page, and both lists contained the hero with the uid xyz, then emitting a deleteHero event would delete the hero from both lists. If you need the child to tell something to its grandparent and only its grandparent, then you’ll really need to pass the information through the parent (see event-binding below).

However, this is often not a problem, since, like in this example, most of the time we would even want the second list to be updated too: that hero was actually deleted from the database, and this should be reflected everywhere in our application. We’ve also limited the problem with our name-spacing, such that only the right components will actually listen to those events. This is already much better than just broadcasting events out into the wild like broadcast.deleteHero.

When no id is present

When no id is present on the data you want to update, the classical Angular data-binding mechanisms come into play: the ones you find in most of the tutorials. In those cases, they prove to be quite useful.

“Downwards” Data-Flow

Upwards vs Downwards

Here, “downwards” designates a flow from parent to child to grandchild and so on. Angular pretty much takes care of this one for us. It’s similar to Angular 1; just the notation has changed a bit. Here’s what it looks like to propagate data and data changes from parent to child to grandchild.

The parent:

import { Component } from '@angular/core';

@Component({
  selector: 'app-parent',
  template: `<div> {{item + 1}} <app-child [item]="item"></app-child></div>`,
})
export class ParentComponent {

  item;
  constructor() {
    this.item = 'My Item ';
  }

}

The child:

import {Component, Input} from '@angular/core';

@Component({
  selector: 'app-child',
  template: `<div> {{item + 2}} <app-grandchild [item]="item"></app-grandchild></div>`
})
export class ChildComponent {
  @Input()
  public item;
}

The grandchild:

import {Component, Input} from '@angular/core';

@Component({
  selector: 'app-grandchild',
  template: '<div>{{item + 3}}</div>'
})
export class GrandchildComponent {
  @Input()
  public item;
}

Which would yield the output “My Item 1”, “My Item 2”, “My Item 3” in nested divs.

When item is changed in the parent, the changes are also propagated downwards. Here I’m probably not telling you anything new; it’s the method of choice in most Angular tutorials. This method has proven clear, convenient and maintainable for our team over the years. The most common source of errors with this method in Angular 2 is forgetting the square brackets, i.e. writing <app-child item=”item”> instead of <app-child [item]=”item”>. Apart from this, you can’t really go wrong.

Upwards Flow

Classically, in Angular 1, you would just modify data on the child and the changes would be propagated upwards through two-way data-binding. However, in Angular2+ they diluted this concept for performance and “clarity of the application” reasons. Now only changes via ngModel to properties of objects are propagated upwards (example: tour of heroes). What exactly does this mean? For one, it means that if you don’t have an object but instead a primitive (string, number etc.), changes made will not be reflected in the parent. As an example, here is the “tour of heroes” from the Angular tutorial series adapted to demonstrate this:

Second, this means that you can’t just update this property in your controller or you may run into nasty errors like the following:

Nasty Angular2 Error

This happens with the following piece of code:

import {Component, Input, OnInit} from '@angular/core';

@Component({
  selector: 'app-grandchild',
  template: `<div>{{item.name + 3}}...</div>`
})
export class GrandchildComponent implements OnInit {
  @Input()
  public item;

  ngOnInit() {
    this.item.name = 'Jacky';
  }

}

All of this makes it hard for novices in Angular 2 to understand where they may and where they may not change data. That’s why I prefer the method with the observables whenever possible. In cases where you have to use ngModel: just make sure it’s actually an object and not a primitive you’re trying to change.
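The distinction boils down to plain JavaScript semantics: primitives are passed by value, objects by reference, so a child can only make a change the parent sees if both share the same object. A stripped-down sketch without any Angular (hypothetical functions):

```typescript
// A "child" reassigning a primitive input: the caller never sees it,
// because the function only gets a copy of the value.
function childChangesPrimitive(name: string): void {
  name = 'Jacky'; // reassigns the local copy only
}

// A "child" mutating a property of an object input: the caller shares
// the same object reference and does see the change.
function childChangesObjectProperty(hero: { name: string }): void {
  hero.name = 'Jacky';
}
```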

In case you want to propagate an event (as opposed to data-changes) upwards, then you can use the Angular event-binding mechanisms. Let’s consider the following component structure

hero-list > hero > change-color-button

In event-binding, the hero component invokes the change-color-button like so:

<change-color-button (colorChanged)="doOnColorChange($event)">
</change-color-button>

and in the change-color-button an event can be emitted like that

@Output() colorChanged = new EventEmitter();
...
this.colorChanged.emit({newColor: "green"});

Then the hero component can act upon a color change by implementing the doOnColorChange method. This approach becomes quite annoying when components are nested several levels deep, as each component must create its own EventEmitter, but sometimes we have no other choice.

Conclusion

There are many options for handling state. In my opinion it’s a close race between using a state-management library such as ngrx/store and combining some of Angular’s core features. In case you want to handle state management yourself rather than include yet another library, here’s a possible decision-making flowchart:

Exchange Information Between Components (6)

 

Thanks for reading, don’t forget to subscribe to get updates timely & share if you found this post useful!

Google Pagespeed Insights

Scoring 100 on Mobile and Desktop in Google Pagespeed Insights can be a daunting task. Once you use a JS framework, you’re pretty much screwed because of the “above-the-fold” render-blocking issue.

But is it really important to score 100? In which cases should you worry about the score? There are heaps of articles suggesting you “stop worrying about the pagespeed insights score”. Well, I agree that it might not be the most important factor for the user experience. But on the other hand, I don’t agree that it’s not important to Google. A Google employee told me during an AdWords campaign (set up by Google themselves) that our mobile page speed was low and that this was penalized by the search engine. And how did he measure it? With Google Page Speed Insights, of course. See, the point is: whether Google Page Speed Insights itself is good or bad is not important. It matters to Google, so it matters for your SEO.

It’s a bit hard to design an experiment that actually shows the impact of an increased Page Speed Insights score. What I could think of was this: write an article and publish it twice, once with a bad page speed score and once with a good one. So here’s the article, it’s about Angular Universal:

and here’s the article again, just loading more slowly, since I load five unnecessary versions of jQuery in the head:

The first article scores 99/100 (mobile) and 100/100 (desktop), the second article 61/100 (mobile) and 75/100 (desktop). Let’s see what happens; I’ll post the result here in a few weeks.

RESULT

Google currently only lists the slower article. No idea why. Maybe it thinks the article must be really awesome if five versions of jQuery are required?

By the way, the article is also related to this topic, since Angular Universal would be the answer to page-speed problems. Well, you know, if it actually worked (read the article).

Typescript Mongo Express Angular.io Node (MEAN) Boilerplate

UPDATE: A cool starter kit has grown out of my initial TypeScript MEAN seed / boilerplate! It currently uses Angular4 and dates from mid-2017.

 

I was a bit shocked when I searched for TypeScript MEAN (Mongo-Express-Angular-Node) tutorials and boilerplate code (by Angular.io I mean Angular2 or above). There is some material, but it’s pretty outdated. Of course it’s a bit unfortunate that you have to count 2015 as outdated in 2017, but that’s how it is in the frontend world. So I set out to build my own boilerplate code for MEAN (Mongo-Express-Angular.io-Node) apps that use TypeScript, to serve many apps to come. I also wanted the boilerplate to include unit tests, so this is not something you’ll have to add later on, but can start with right away.

So basically the requirements that I had for this boilerplate were:

  • 100% typescript.
  • Good coverage with unit tests.
  • Support simple REST calls of the form api/v1/:resourceName/:resourceId
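For illustration, parsing calls of that form can be sketched with a small helper (hypothetical, not taken from the actual seed):

```typescript
// Result of parsing a path like /api/v1/heroes/42.
interface RestCall {
  version: string;
  resourceName: string;
  resourceId?: string;
}

// Parses paths of the form /api/v1/:resourceName/:resourceId, where
// the resource id is optional (collection request vs single resource).
function parseRestPath(path: string): RestCall | null {
  const match = path.match(/^\/api\/(v\d+)\/([^\/]+)(?:\/([^\/]+))?$/);
  if (!match) return null;
  return { version: match[1], resourceName: match[2], resourceId: match[3] };
}
```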

For the backend, the Mongo-Express-Node part, a lot of setup had to be done. For the frontend, thanks to the AngularCli, I could dive into development more directly, and I also had more experience developing a frontend with Angular than a backend in TypeScript.

In order to be able to develop independently on the frontend and the backend, for example if you have backend devs and frontend devs on your team, I set up the basic structure of the boilerplate like this:

typescript-mongo-express-angular-node-seed
├── .git
├── backend/
│ ├── .git
│ ├── db-module
│ └── ...
└── frontend/
  ├── .git
  ├── user-module
  ├── main-module
  └── ...

At first, I only had one git repo for the backend, one for the frontend and one that strapped both together. Much has happened since then. Now each module has its own repo, runs its tests independently and can be developed without needing all the other modules. For me this change in workflow was hugely satisfying. Instead of a monolithic app, I now have many small maintainable modules that together make a solid, well-tested application. Each module also has its own npm package (all scoped under @tsmean), and if one module requires another, it is loaded as an npm module! This has upsides and downsides; for example, if you have a small bug fix in one module, you need to publish and pull again in another module. But this cost is small, since what you get is a well-designed, scalable architecture. I could just hire someone with no knowledge of the rest of the application to develop the user module. For this workflow it is essential to understand how to build & publish libraries, so I’ve summarised this here:

Since the starter kit / boilerplate is in constant development, I won’t cover too many specifics here. Rather, check out the code directly at github.com/tsmean/tsmean. It might strike you as complex at first, but after getting into full-stack TypeScript development with feature modules, you will never want to go back.

In the following, I also append a short “Why the MEAN stack?”, because knowing the why is even more important than knowing the how.

Why the MEAN stack?

Developer experience

As opposed to writing backend and frontend in different languages, you’ll just need to write TypeScript most of the time. This makes you a bit faster as a full-stack dev, since you don’t have to switch context that much and only need to be fluent in one language. Also, if you’ve implemented some routine in the backend but decide it would be better to run it in the frontend (or vice versa), it’s much easier to migrate the code. You can share code across backend and frontend, and you can use the same package managers. Overall, one language brings efficiency through consistency.

On the downside, a backend in Node & Mongo can also bring a lot of problems. Node uses the non-blocking asynchronous nature of JavaScript (more on that in the next section), which requires a lot of callbacks. This makes programming harder and less linear. If you have Java devs on your team, they might have a hard time adjusting to this and grumble about it quite a bit. With the newly introduced async / await syntax, however, a more synchronous writing style is emerging, while maintaining the benefits of non-blocking code.
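To make that concrete, here’s a minimal sketch contrasting the two styles. `findUser` is a made-up stand-in for any asynchronous call, such as a MongoDB query:

```javascript
// Callback style: every further async step nests one level deeper.
function findUser(id, callback) {
  setTimeout(() => callback(null, { id: id, name: 'Ada' }), 10);
}

findUser(1, (err, user) => {
  if (err) throw err;
  console.log(user.name); // → Ada
});

// async/await style: reads top to bottom, but is still non-blocking.
function findUserAsync(id) {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ id: id, name: 'Ada' }), 10)
  );
}

async function main() {
  const user = await findUserAsync(1);
  console.log(user.name); // → Ada
}
main();
```

The second version looks almost like the linear code a Java dev is used to, yet the thread is never blocked while waiting for the result.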

Non-Blocking I/O

Node runs on a single thread and all its calls, for example to MongoDB, are asynchronous. This means no resources are spent waiting for answers to return, which is often where most of a web server’s capacity is lost. Your classic Java server could also be written in a non-blocking style, but hardly anyone does, so the advantage of a Node server is that all your packages, modules, tutorials, Stack Overflow answers etc. will be non-blocking out of the box.
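A small sketch of why this matters: on a single thread, two simulated database round-trips can be in flight at the same time, so the total time is roughly that of the longest one, not the sum. (`delay` and `handleRequest` are made-up names for this illustration.)

```javascript
// delay simulates a database round-trip; while awaiting it, the single
// Node thread is free to start handling other requests.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function handleRequest(name, ms) {
  await delay(ms);
  return name + ' done';
}

async function main() {
  // Both "requests" run concurrently on one thread:
  // total time is roughly 30ms, not 30ms + 20ms.
  const results = await Promise.all([
    handleRequest('A', 30),
    handleRequest('B', 20),
  ]);
  console.log(results); // → [ 'A done', 'B done' ]
}
main();
```

A blocking server would need a second thread to get the same overlap.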

Conclusion

The TypeScript MEAN (Mongo-Express-Angular-Node) stack can be the stack of choice for frontend devs going full stack. However, there doesn’t seem to be much good, up-to-date and typed (TypeScript) boilerplate code out there for the backend part, so I set out to create & maintain this boilerplate code. After developing with it for half a year, I’m more hooked than ever. It doesn’t make everything magically simple, and you still need to learn a lot. For example, if you don’t know Angular yet, that’ll be quite the journey. But you have to learn and master fewer concepts than if you had a frontend in some JS framework and a backend written in yet another language. Here you can find the current seed. To get the latest news, follow tsmean on Twitter. Additionally, there’s now a small homepage: tsmean.com!

Jsfiddle vs Codepen vs Plunker vs JSBin for Embedding

JSBin

JSBin needs a pro account for many features (such as embedding), so I’d refrain from using it, since the others are just as good or better.

CodePen

It’s got a beautiful design, but some things about it are annoying, for example embedding the code into another page. Apparently the option can be found in the export menu:

But what I get from this menu is this:

So I guess it’s a pro feature? Anyway, they could at least show those items but gray them out and let me know why they’re not available.

JSFiddle

There are also some bugs. For example, if I accidentally put some CSS in the HTML panel, I can’t remove it anymore because the error notifications block my view:

Apart from this, the view seemed to load a bit faster than Plunker’s, and it’s better structured with the js / html / css / result tabs in case you have something simple to demonstrate.

(okay, the scrollbar wouldn’t need to be there, but that’s a detail)

Here’s an actual example:

Plunker

Since Plunker supports many files, it needs to be laid out a bit differently than JSFiddle. Here’s what it looks like:

It’s also quite neat, and so far I’ve had the best editing experience with Plunker. Also, the Plunker preview shows the result first, while JSFiddle shows the js/html first. Which you prefer depends on your use case.

Conclusion

I’d recommend JSFiddle for simple things and Plunker for entire apps. However, if you have many, many frames to embed, none of the options might be ideal. As you can see at https://www.bersling.com/2017/03/22/flexbox-tutorial/, a page can slow down quite a bit if there are too many frames.

How to write a library for Angular 2 and publish it to NPM

Using modular code is a good idea, and nothing is more modular than writing a library. Like this you can just publish your library to npm and then use it in all your projects. This can be useful if you have a component you want to reuse or share, a service, or just any piece of code that you think can be reused between projects. But with TypeScript and Angular 2, writing your own library can be a bit daunting at first. Do you publish the TypeScript or the compiled JavaScript? Where does the result go, in a dist folder? What are the best practices? What should I declare as dependencies of my library in my package.json?

Fortunately, you are not the first one to come across those questions. Unfortunately, as things are changing rapidly, there are a lot of out-of-date answers when you Google this problem. And there’s even more confusion because it seems immensely complicated to get a working Angular 2 library. So this article aims at giving an up-to-date answer on how to create your ng2lib.

The currently best way to scaffold your Angular 2 library

So without further ado, this is the best way I’ve found so far to scaffold your Angular 2 library:

I think this is currently the best approach, since everything else will simply make you pull out your hair.

I last checked that this is still the best solution in June 2017.

Can I use angular-cli to create a standalone library?

As of March 2017, the angular-cli, which is used to create Angular projects and to generate components, services etc. for your project, doesn’t offer a clear way to create a standalone library.


On a side note, if you just need to create a TypeScript library, with no Angular involved, check out how-to-write-a-typescript-library.com!

Contenteditable vs Input

Contenteditable and input can look quite similar: you can click both, then change the text inside. So where’s the difference, and when should you use which one?

Contenteditable

With contenteditable, you can modify an HTML snippet. Usually the HTML on a website is “view-only”, meaning you can’t just click somewhere and edit it. But as soon as you add the contenteditable attribute to an HTML tag, all of the inner HTML becomes editable by clicking on it and hitting keys on the keyboard. So for example in:

<body contenteditable="true">
  <h1>Contenteditable</h1>
  <div style="border: 1px solid red; padding: 5px">
    Everything here is editable...
  </div>
</body>

you can edit all the html inside, which would yield:

When is this useful?

  • If you don’t want to separate the editor-view from the display-view. See example below.
  • If you want a multiline input that’s wrappable (see example below).
  • If you want to let a user mess around with the html, but don’t want to provide a WYSIWYG editor.

This example illustrates a use case of contenteditable where it achieves something that’s not possible with an input. In this case the contenteditable is the bold title. See how it wraps around the buttons? Good luck achieving that with an input.

Many words of caution

However, you also should be cautious with contenteditable as there are a few pitfalls.

Take special care if you want to store the edited string in a database as a simple string, not HTML.

In that case you should either resort to an input field or take special precautions. In case you still want to head down this road, here are some things to bear in mind:

  • Different browsers will handle contenteditable differently. For example, Firefox sometimes adds a <br> tag to the edited content, whereas Chrome doesn’t.
  • The user could copy/paste some html into the contenteditable area. For example try to select and copy this into the contenteditable above.

Due to those reasons, you should definitely process the HTML before storing it in your database. One possibility to process / sanitize the HTML would be:

var sanitized = htmlElt.textContent || htmlElt.innerText;

This ensures you only get the text of the content, with all markup stripped.
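Wrapped into a small helper (the name `extractPlainText` is made up here), and with a plain object standing in for the DOM element so the sketch is runnable outside a browser too:

```javascript
// Extracts only the text of a contenteditable element before persisting it,
// falling back to innerText for older browsers that lack textContent.
function extractPlainText(element) {
  return element.textContent || element.innerText || '';
}

// In the browser, `element` would come from e.g. document.getElementById;
// here a plain object stands in for it.
const fakeElement = { textContent: 'Everything here is editable...' };
console.log(extractPlainText(fakeElement)); // → Everything here is editable...
```

Note that this throws away all formatting on purpose: if the user pasted bold text or a link, only the raw characters survive, which is exactly what you want for a plain-string database column.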

What about security?

Well, you can try it for yourself: Copy/Paste this

<script>alert('hi')</script>

into the contenteditable above. As you can see, it’s escaped properly.

Input

The input field on the other hand has its use in the following cases:

  • Forms
  • One-line editing of a string, overflow is hidden.
  • One-line editing of a number.
  • Restricted input.

Conclusion

It’s not always easy to choose which one to use. The basic rule of thumb would be

  • Contenteditable to edit html
  • Input to edit strings or numbers

You should only break this rule if you have very special needs, as in the use case above, where contenteditable was used to minimize the difference between the editor and display views and to wrap the content around the buttons.