Canonical Voices

Posts tagged with 'javascript'

Max

CSS Modules: the correct code split

There is a lot of talk going on in the JavaScript frontend community about using CSS-in-JS. One example of this is styled-components.
Their use is explained over at https://www.toptal.com/javascript/styled-components-library .
In this article I want to present an alternative and talk about some of its benefits.

What are CSS modules?

According to the css-modules GitHub repo:

A CSS Module is a CSS file in which all class names and animation names are scoped locally by default.

For use in React or Vue, for example, this means:
none of the styles inside a CSS Module will affect anything other than the component into which they are imported.

Using CSS modules

Let's try this out with a bit of React.
First we write a Box Component like this:

import React from 'react';
import styles from './Box.css';

const Box = () => <div />;

export default Box;

And apply some styles to it:

import React from 'react';
import styles from './Box.css';

const Box = () => <div className={styles.box} />;

export default Box;

Notice how we just import some CSS file into our JavaScript. This is where the magic happens.
In this case the CSS modularization happens through the webpack css-loader.
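
As a rough sketch of what that means in a webpack configuration (assuming webpack 4 with css-loader and style-loader; option names vary between versions), enabling CSS Modules looks something like this:

// webpack.config.js (sketch)
module.exports = {
  module: {
    rules: [
      {
        test: /\.css$/,
        use: [
          "style-loader",
          // modules: true turns on the local scoping described above
          { loader: "css-loader", options: { modules: true } },
        ],
      },
    ],
  },
};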

This is the Box.css file that gets imported.

.box {
    width: 400px;
    height: 300px;
    border-bottom: solid brown 4px;
    border-right: solid brown 4px;
    border-left: solid brown 4px;
}

We can check out the browser to see what the result looks like.

[Screenshot: the rendered Box with its generated class name]

Notice the class name? This is how CSS Modules achieve the local scoping. The imported classes are applied to the correct components, and in the process their names are rewritten and a unique hash is appended.
This prevents any collisions with the rules of other components.
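
As an illustration (the exact class name format and hash depend on your setup), the rendered markup ends up looking something like this:

<div class="Box__box___1x9Qp"></div>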

No collision

Let's try this out in the browser.
Here are two components that use the same class names in their CSS:

Banana Component

import React from 'react';
import styles from './Banana.css';

const Banana = () => (
  <div className={styles.container}>
    <p className={styles.text}>Banana</p>
  </div>
);

export default Banana;

/* Banana.css */
.container {
    display: flex;
    justify-content: center;
    align-items: center;
    border: solid 1px yellow;
}

.text {
    color: yellow;
}

Apple Component

import React from 'react';
import styles from './Apple.css';

const Apple = () => (
  <div className={styles.container}>
    <p className={styles.text}>Apple</p>
  </div>
);

export default Apple;

/* Apple.css */
.container {
    display: flex;
    justify-content: center;
    align-items: center;
    border: solid 1px green;
}

.text {
    color: green;
}

Both components use the same class names internally. If we wrote normal CSS, this would lead to collisions, meaning that one set of rules would overwrite the other.
But checking the website again, we can see that everything renders as expected.
Take note of the hashed class names again.

[Screenshot: both components rendered correctly, each with its own hashed class names]

All collisions are avoided by making the class names unique, similar to techniques such as BEM, just in an automated way.
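
For comparison, this is roughly the difference (the generated names below are made up; the real ones depend on your build setup):

/* BEM: uniqueness by naming convention, written by hand */
.banana__container { /* ... */ }
.apple__container { /* ... */ }

/* CSS Modules: uniqueness generated at build time */
.Banana__container___2aX9k { /* ... */ }
.Apple__container___1fQz3 { /* ... */ }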

Conclusion

This was a very brief introduction to CSS Modules but should give a good understanding of what they do and how they work.
Some of the features are similar to techniques such as styled-components, but by being just CSS files we can benefit from some advantages.
Lets dive into the Pros and Cons to wrap it up:

Pros

  • Designers do not need to learn new ways of writing CSS in JS. Instead they can simply make changes to the right CSS files. This way of styling, combined with mental models such as atomic design, can improve communication between developers and designers and increase development speed.
  • Separation of concerns: we hear this all the time in web development and it was regarded as best practice for a long time. Given JSX's popularity one might argue that this is not true anymore, but we should still aim for it.
  • Tools and preprocessors such as PostCSS can easily be integrated to allow CSS imports, custom properties etc.

Cons

  • The build task might be complicated to set up at first
  • Global styles have to be wrapped in a special :global block (see the sketch below). Where would such a block belong?
  • The power of JavaScript is not available inside the CSS files
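
For reference, a minimal sketch of that :global escape hatch; selectors wrapped this way keep their original, un-hashed names (the class name here is just an example):

:global(.legacy-widget) {
    color: red;
}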

Cover Image is from https://unsplash.com/photos/8EzNkvLQosk

Max

When setting up a new theme for this blog, I ran into the issue of quickly setting up a development environment on my local machine to test out ideas.

So I went to duckduckgo.com and quickly found this blog article.
Taking it as inspiration, I wrote a small script and published it on npm to make this easier for future themes and for other people as well. You can find it at https://github.com/b-m-f/gotede.

This blog post will summarize how to get started with it.

Installing gotede

The first thing you will need to do is to install the script with npm install -g gotede.

You also have to make sure to install both docker and docker-compose.

Now all you need to do is to switch to a folder in which the theme folder should be created. For example your home directory with cd ~.

Running gotede will ask you for the name of your new theme and which port the Ghost development instance should use on your localhost (e.g. 4000).

Once you've entered your answers, a new folder with the name of the theme you entered will appear in that directory. Go into that folder with cd THEME_NAME_YOU_ENTERED and start up the instance with docker-compose up -d.
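
Put together, the setup so far looks roughly like this (the theme folder name is whatever you entered at the prompt):

npm install -g gotede       # install the scaffolding script
cd ~                        # any folder in which the theme folder should be created
gotede                      # prompts for a theme name and a port, e.g. 4000
cd THEME_NAME_YOU_ENTERED
docker-compose up -d        # start the local Ghost development instance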

For this example I am going to continue with a supplied port number of 4000.

To make sure that everything worked correctly, open your browser and go to http://localhost:PORT_YOU_PROVIDED, in my case http://localhost:4000.

You should be greeted with the familiar skeleton instance of Ghost, looking like this:

[Screenshot: the default Ghost start page]

Setting up the Ghost instance

If everything worked, you should head over to http://localhost:4000/ghost to set up your admin account. Just follow the steps shown on the website. You can just use a test account here, since it will only be running locally.

Creating the theme and activating it

For this step, make sure that you are still in the folder that gotede created for you, where all the files for your theme are located.
This is where you will do the actual work.

To get everything running you will first need to install all the required npm dependencies. Do this with npm install.

Once that is complete, we can start the local development server, which takes care of compiling our CSS and supplying the Ghost instance with our theme, with npm run start.

Since the theme is newly added to the instance, we will have to restart it with either docker restart THEME_NAME_YOU_PROVIDED or docker-compose restart so that it becomes available.
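
In short, the commands for this step (run from the theme folder) are roughly:

npm install               # install the theme's npm dependencies
npm run start             # compile the CSS and supply the theme to Ghost
docker-compose restart    # restart Ghost so the new theme becomes available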

As a last step head over to http://localhost:4000/ghost/#/settings/design and activate your theme at the bottom of the page:

[Screenshot: the Design settings page in the Ghost admin, with theme activation at the bottom]

Developing the theme

With npm run start running in the terminal, you can now start editing your theme according to the official docs.

Have fun, and feel free to improve the script and the skeleton theme from gotede by submitting a pull request to the repo.

Max

A lot of webpack introductions focus on how to set up your first webpack configuration.
What I want to do in this article is take a step back and look at what webpack is actually doing, before going into the setup and all its complications.

The problem

In the frontend we have a variety of targets that we want our applications to run in, and they are all browsers. Some old, some new.

While the JavaScript ecosystem is evolving at an extreme pace and SPAs and more complex web applications become the norm, browsers evolve slowly.
In addition, a lot of people still use outdated ones, which might be due to workplace restrictions or simply the long-term use of existing hardware.

Now we have a dilemma. We want to write complex applications in JavaScript, but in the browser we usually load an app via a single file.
React is a good example.
You would usually load it with something like:

<script crossorigin src="https://unpkg.com/react@16/umd/react.development.js"></script>

But the developers definitely did not write this complex piece of software in just one file! Doing that with a team of developers, or even alone, would have too many downsides.

Looking at the repository, we can see a lot of different folders and files that each host their specific part of the application logic.

[Screenshot: the folder structure of the React repository]

In order for the different parts to use each other and share functionality, they use the import syntax in JavaScript. If you do not know how this works, you can read about it here or just do a quick duckduckgo search.

The downside of this approach is that most of the aforementioned browsers do not yet, or never will, support it. In addition, all of those files would have to be loaded over the network individually. HTTP/2 might make this less of a problem, but it still is one today.

Enter webpack

This exact situation is where webpack fits in perfectly!

In order to get all your different components and JavaScript files that import each other to work in the browser, you need to bundle them together somehow, and in essence webpack does just that.
It takes multiple files, figures out how they import each other, and bundles them together into one big file that provides wrapper functions to mimic the process of importing.

Let's make this a bit clearer with the following example:

// mathHelpers.js

export function add(a, b) {
  return a + b;
}

// main.js
import { add } from "./mathHelpers.js";

function main() {
  const a = 5;
  const b = 37;
  const result = add(a, b);
  console.log(`The result of ${a} + ${b} = ${result}`);
}

We have the main file, which imports the mathHelpers to do some difficult calculations. This helps us split different parts of the application into different files, the benefit being a good structure and the reusability of, for example, the math functions.

Now we are going to have to bundle those two files together into one.

The first step is installing webpack. To use it on the command line we need to install it on the computer with npm i -g webpack-cli.

Afterwards we can just run webpack main.js --mode=none in our application folder and the bundling is done! It's really that easy.

The mode option just says that no optimizations should be done, and all we want is a bundled file to check out its contents.

After the operation has completed we can inspect the bundle in the newly generated dist folder.

The content should be similar to this:

/******/ (function(modules) { // webpackBootstrap
/******/ 	// The module cache
/******/ 	var installedModules = {};
/******/
/******/ 	// The require function
/******/ 	function __webpack_require__(moduleId) {
/******/
/******/ 		// Check if module is in cache
/******/ 		if(installedModules[moduleId]) {
/******/ 			return installedModules[moduleId].exports;
/******/ 		}
/******/ 		// Create a new module (and put it into the cache)
/******/ 		var module = installedModules[moduleId] = {
/******/ 			i: moduleId,
/******/ 			l: false,
/******/ 			exports: {}
/******/ 		};
/******/
/******/ 		// Execute the module function
/******/ 		modules[moduleId].call(module.exports, module, module.exports, __webpack_require__);
/******/
/******/ 		// Flag the module as loaded
/******/ 		module.l = true;
/******/
/******/ 		// Return the exports of the module
/******/ 		return module.exports;
/******/ 	}
/******/
/******/
/******/ 	// expose the modules object (__webpack_modules__)
/******/ 	__webpack_require__.m = modules;
/******/
/******/ 	// expose the module cache
/******/ 	__webpack_require__.c = installedModules;
/******/
/******/ 	// define getter function for harmony exports
/******/ 	__webpack_require__.d = function(exports, name, getter) {
/******/ 		if(!__webpack_require__.o(exports, name)) {
/******/ 			Object.defineProperty(exports, name, { enumerable: true, get: getter });
/******/ 		}
/******/ 	};
/******/
/******/ 	// define __esModule on exports
/******/ 	__webpack_require__.r = function(exports) {
/******/ 		if(typeof Symbol !== 'undefined' && Symbol.toStringTag) {
/******/ 			Object.defineProperty(exports, Symbol.toStringTag, { value: 'Module' });
/******/ 		}
/******/ 		Object.defineProperty(exports, '__esModule', { value: true });
/******/ 	};
/******/
/******/ 	// create a fake namespace object
/******/ 	// mode & 1: value is a module id, require it
/******/ 	// mode & 2: merge all properties of value into the ns
/******/ 	// mode & 4: return value when already ns object
/******/ 	// mode & 8|1: behave like require
/******/ 	__webpack_require__.t = function(value, mode) {
/******/ 		if(mode & 1) value = __webpack_require__(value);
/******/ 		if(mode & 8) return value;
/******/ 		if((mode & 4) && typeof value === 'object' && value && value.__esModule) return value;
/******/ 		var ns = Object.create(null);
/******/ 		__webpack_require__.r(ns);
/******/ 		Object.defineProperty(ns, 'default', { enumerable: true, value: value });
/******/ 		if(mode & 2 && typeof value != 'string') for(var key in value) __webpack_require__.d(ns, key, function(key) { return value[key]; }.bind(null, key));
/******/ 		return ns;
/******/ 	};
/******/
/******/ 	// getDefaultExport function for compatibility with non-harmony modules
/******/ 	__webpack_require__.n = function(module) {
/******/ 		var getter = module && module.__esModule ?
/******/ 			function getDefault() { return module['default']; } :
/******/ 			function getModuleExports() { return module; };
/******/ 		__webpack_require__.d(getter, 'a', getter);
/******/ 		return getter;
/******/ 	};
/******/
/******/ 	// Object.prototype.hasOwnProperty.call
/******/ 	__webpack_require__.o = function(object, property) { return Object.prototype.hasOwnProperty.call(object, property); };
/******/
/******/ 	// __webpack_public_path__
/******/ 	__webpack_require__.p = "";
/******/
/******/
/******/ 	// Load entry module and return exports
/******/ 	return __webpack_require__(__webpack_require__.s = 0);
/******/ })
/************************************************************************/
/******/ ([
/* 0 */
/***/ (function(module, __webpack_exports__, __webpack_require__) {

"use strict";
__webpack_require__.r(__webpack_exports__);
/* harmony import */ var _mathHelpers_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(1);


function main() {
  const a = 5;
  const b = 37;
  const result = Object(_mathHelpers_js__WEBPACK_IMPORTED_MODULE_0__["add"])(a, b);
  console.log(`The result of ${a} + ${b} = ${result}`);
}


/***/ }),
/* 1 */
/***/ (function(module, __webpack_exports__, __webpack_require__) {

"use strict";
__webpack_require__.r(__webpack_exports__);
/* harmony export (binding) */ __webpack_require__.d(__webpack_exports__, "add", function() { return add; });
function add(a, b) {
  return a + b;
}


/***/ })
/******/ ]);

Don't worry too much about all of this machine-generated code.

What's important to understand is that the top of the file is the functionality webpack provides to fake the importing that we use in our files.

If you inspect the bottom of the file, you can see that we still have our main and add functions.
The difference is that the main function now calls Object(_mathHelpers_js__WEBPACK_IMPORTED_MODULE_0__["add"]) instead of the add function directly.

We have now reached a basic understanding of webpack: the different modules are bundled together under an object and named in a way that lets every other module find the functions it needs. All calls into other modules are then automatically replaced with the correct lookup on that object.
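
To make the idea concrete, here is a drastically simplified, hand-written sketch of the same mechanism. This is not webpack's actual output, just the concept:

// A hypothetical, minimal module registry mimicking what the bundle does.
var modules = {
  "./mathHelpers.js": function (exports) {
    exports.add = function (a, b) {
      return a + b;
    };
  },
  "./main.js": function (exports, requireModule) {
    var mathHelpers = requireModule("./mathHelpers.js");
    var a = 5;
    var b = 37;
    console.log("The result of " + a + " + " + b + " = " + mathHelpers.add(a, b));
  },
};

// A tiny stand-in for __webpack_require__: run a module and return its exports.
function requireModule(id) {
  var exports = {};
  modules[id](exports, requireModule);
  return exports;
}

requireModule("./main.js");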

What now?

With this knowledge as a basis, you can now move on to understanding how the basic webpack main.js --mode=none call can be combined with configurations that apply transformations to your codebase.
This allows things like using the latest ECMAScript features, code splitting and much more, some of which I will cover in later posts.
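
As a rough preview of where such configuration lives (a sketch only; babel-loader is just one example of a transformation and would need to be installed and configured separately):

// webpack.config.js (sketch)
module.exports = {
  entry: "./main.js",
  mode: "none",
  module: {
    rules: [
      { test: /\.js$/, exclude: /node_modules/, use: "babel-loader" },
    ],
  },
};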

Max

Better boundaries in JS

JavaScript code structuring is quite difficult. A lot of frameworks and projects have come up with ways that work for them.

In this post I want to show a new technique that I have used on a side project.

In essence, you define one object for each domain that clearly encapsulates everything belonging to that domain. The implementation details are not written in that file though, as it would probably get huge.
Instead, the actual implementations are imported, reduced onto the object, and optionally get some dependencies injected.

I think some code will describe it best.

Code

Exposer

The top of the file just imports the helpers, or the actual implementations (should be named better).

The interesting part is the const Api.
All of the helpers' functions are reduced onto one object; a check should be implemented so that functions won't be overridden.
Additionally, some dependencies are injected, which makes the implementations easier, as they can just destructure the first parameter to get the data they need.

import Entries from "./Entries";
import Repos from "./Repos";
import Logger from "../Logger";
import config from "../../../configs/appConfig";

const helpers = [Entries, Repos];

async function get(url) {
  const response = await fetch(url);
  const json = await response.json();
  if (json.statusCode === 500) {
    const error = new Error(`
       Could not GET data from ${url}. \n
       Message from Server: ${json.message}`);

    Logger.error(error);
    throw error;
  }
  return json;
}

async function post(url, data) {
  try {
    const response = await fetch(url, {
      body: JSON.stringify(data),
      headers: {
        "content-type": "application/json"
      },
      method: "POST"
    });
    const json = await response.json();
    if (json.statusCode === 500)
      throw new Error(`
       Could not POST to ${url}. \n
       Message from Server: ${json.message}`);
  } catch (e) {
    Logger.error(e);
    return e;
  }
}

const apiConfig = {
  get,
  post,
  config
};

const Api = helpers.reduce((prevHelper, nextHelper) => {
  const keys = Object.keys(nextHelper);
  const wrappedHelper = keys.reduce((prevKey, nextKey) => {
    const nextValue = nextHelper[nextKey];
    if (typeof nextValue === "function") {
      const wrappedFunction = (...args) => nextValue(apiConfig, ...args);
      return Object.assign({}, prevKey, { [nextKey]: wrappedFunction });
    } else {
      // Non-function values are copied through under their key unchanged.
      return Object.assign({}, prevKey, { [nextKey]: nextValue });
    }
  }, {});
  return Object.assign({}, prevHelper, wrappedHelper);
}, {});

export default Object.assign(Api, { get, post });

Repos

This is a very simple function that will fetch all the repos. But you can see that the actual get functionality and the config object are injected.

async function getRepos({ get, config }) {
  try {
    const repos = await get(`${config.baseUrl}/repos`);
    return repos;
  } catch (e) {
    return [];
  }
}

export default {
  getRepos
};

Entries

Another simple function with dependency injection, but note the parameter repo. This will be the only parameter exposed on the Api object, since the dependencies are injected by the wrapper created in the Exposer.

async function getEntries({ get, config }, repo) {
  try {
    const entries = await get(`${config.baseUrl}/entries/${repo}`);
    return entries;
  } catch (e) {
    return [];
  }
}

export default {
  getEntries
};
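
To illustrate the result, here is a sketch of how a consumer might use the assembled Api object (the import path and the repo name are made up for the example):

import Api from "./Api";

async function loadDashboard() {
  // The injected get/config never show up at the call site.
  const repos = await Api.getRepos();
  const entries = await Api.getEntries("some-repo");
  return { repos, entries };
}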

Conclusion

My belief is that this can be a lot of help for more junior developers. All code is clearly structured inside its domain and only exposed through one single point.
In essence you are writing small libraries for each project. The good thing is, though, that imports are not used in many different places.

Cross-boundary imports always go through the single object for each domain.

This way we have a nicely decoupled architecture inside our repository, which makes the code more accessible and easy to switch out certain pieces when necessary.

This of course isn't revolutionary, but it is something you might want to consider for a new project.

Read more
Max

Better boundaries in JS

JavaScript code structuring is quite difficult. A lot of frameworks and projects have come up with ways that work for them.

In this post I want to show a new technique that I have used on a side project.

In essence you define one Object for each domain, that clearly encapsulates everything that belongs to its domain. The implementation details are not written in that file though, as it would probably get huge.
Instead the actual implementations will be imported, reduced onto it and optionally get some dependencies injected.

I think some code will describe it best.

Code

Exposer

The top of the file just imports the helpers, or the actual implementations (should be named better).

The interesting part is the const Api.
All of the helpers functions will be reduced onto one object, a check should be implemented so that functions wont be overriden.
Additionally some dependencies will be injected, which makes the implementatios easier, as they can just destructure the first parameter to get the needed data.

import Entries from "./Entries";
import Repos from "./Repos";
import Logger from "../Logger";
import config from "../../../configs/appConfig";

const helpers = [Entries, Repos];

async function get(url) {
  const response = await fetch(url);
  const json = await response.json();
  if (json.statusCode === 500) {
    const error = new Error(`
       Could not GET data from ${url}. \n
       Message from Server: ${json.message}`);

    Logger.error(error);
    throw error;
  }
  return json;
}

async function post(url, data) {
  try {
    const response = await fetch(url, {
      body: JSON.stringify(data),
      headers: {
        "content-type": "application/json"
      },
      method: "POST"
    });
    const json = response.json();
    if (json.statusCode === 500)
      throw new Error(`
       Could not POST to ${url}. \n
       Message from Server: ${json.message}`);
  } catch (e) {
    Logger.error(e);
    return e;
  }
}

const apiConfig = {
  get,
  post,
  config
};

const Api = helpers.reduce((prevHelper, nextHelper) => {
  const keys = Object.keys(nextHelper);
  const wrappedHelper = keys.reduce((prevKey, nextKey) => {
    const nextValue = nextHelper[nextKey];
    if (typeof nextValue === "function") {
      const wrappedFunction = (...args) => nextValue(apiConfig, ...args);
      return Object.assign({}, prevKey, { [nextKey]: wrappedFunction });
    } else {
      return Object.assign({}, prevKey, nextKey);
    }
  }, {});
  return Object.assign({}, prevHelper, wrappedHelper);
}, {});

export default Object.assign(Api, { get, post });

Repos

This is a very simple function that will fetch all the repos. But you can see that the actual get functionality and the config object are injected.

async function getRepos({ get, config }) {
  try {
    const repos = await get(`${config.baseUrl}/repos`);
    return repos;
  } catch (e) {
    return [];
  }
}

export default {
  getRepos
};

Entries

Another simple function with dependency injection, but note the parameter repo. This will be the only one exposed by the API object, since the dependencies are injected via a curried function.

async function getEntries({ get, config }, repo) {
  try {
    const entries = await get(`${config.baseUrl}/entries/${repo}`);
    return entries;
  } catch (e) {
    return [];
  }
}

export default {
  getEntries
};

Conclusion

My believe is, that this can be of alot of help for more junior developers. All code is clearly structured inside its domain and only exposed through one single point.
In essence you are writing small libraries for each project. The good thing is though, that imports are not used in many different places.

Cross boundary imports would always just be on one object for each domain.

This way we have a nicely decoupled architecture inside of our repository, which will make it more accessible and easy to switch out certain pieces when necessary.

This of course isnt revolutionary, but something you might want to consider for a new project.

Read more
Kyle Nitzsche

Noting that this week I delivered three hour-long sessions that should interest anyone writing HTML5 apps for Ubuntu:




Read more
Kyle Nitzsche

Cordova 3.3 adds Ubuntu

Upstream Cordova 3.3.0 is released just in time for the holidays with a gift we can all appreciate: built-in Ubuntu support!

Cordova: multi-platform HTML5 apps

Apache Cordova is a framework for HTML5 app development that simplifies building and distributing HTML5 apps across multiple platforms, like Android and iOS. With Cordova 3.3.0, Ubuntu is an official platform!

The cool idea Cordova starts with is a single www/ app source directory tree that is built to different platforms for distribution. Behind the scenes, the app is built as needed for each target platform. You can develop your HTML5 app once and build it for many mobile platforms, with a single command.

With Cordova 3.3.0, one simply adds the Ubuntu platform, builds the app, and runs the Ubuntu app. This is done for Ubuntu with the same Cordova commands as for other platforms. Yes, it is as simple as:

$ cordova create myapp REVERSEDOMAINNAME.myapp myapp
$ cd myapp
(Optionally modify www/*)
$ cordova build [ ubuntu ]
$ cordova run ubuntu

Plugins

Cordova is a lot more than an HTML5 cross-platform web framework though.
It provides JavaScript APIs that enable HTML5 apps to use platform-specific back-end code to access a common set of devices and capabilities. For example, you can access device Events (battery status, physical button clicks, etc.), Geolocation, and a lot more. This is the Cordova "plugin" feature.

You can add Cordova standard plugins to an app easily with commands like this:

$ cordova plugin add org.apache.cordova.battery-status
(Optionally modify www/* to listen to the batterystatus event )
$ cordova build [ ubuntu ]
$ cordova run ubuntu
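
For reference, the listener mentioned above could look something like the following minimal sketch, based on the standard battery-status plugin API (which file in www/ you put it in is up to you):

// Plugin APIs are only safe to use after Cordova fires deviceready.
document.addEventListener("deviceready", function () {
    window.addEventListener("batterystatus", onBatteryStatus, false);
}, false);

function onBatteryStatus(status) {
    // status.level is a percentage (0-100); status.isPlugged is a boolean.
    console.log("Battery: " + status.level + "%, plugged in: " + status.isPlugged);
}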

Keep an eye out for news about how Ubuntu click package cross-compilation capabilities will soon weave together with Cordova to enable deployment of plugins that are compiled for a specified target architecture, like the armhf architecture used in Ubuntu touch images (for phones, tablets, etc.).

Docs

As a side note, I'm happy to report that my documentation of initial Ubuntu platform support has landed and has been published in the Cordova 3.3.0 docs.


Read more
Kyle Nitzsche

Ubuntu HTML5 API docs

HTML5 API docs published

I'm pleased to note that the Ubuntu HTML5 API docs I wrote are now done and published on developer.ubuntu.com. These cover the complete set of JavaScript objects that are involved in the UbuntuUI framework for HTML5 apps (at this time). For each object, the docs show how the corresponding HTML is declared and, of course, all public methods are documented.

A couple notes:
  • I wrote an html5APIexerciser app that implements every available public method in the framework. This was helpful to ensure that what I wrote matched reality ;) It may be useful to folks exploring development of  Ubuntu HTML5 apps. The app can be run directly in a browser by opening its index.html, but it is also an Ubuntu SDK project, so it can be opened and run from the Ubuntu SDK, locally and on an attached device.
  • The html5APIexerciser app does not demonstrate the full set of Ubuntu CSS styles available. For example, the styles provide gorgeous toggle buttons and progress spinners, but since they have no JavaScript objects and methods they are not included in the API docs. So be sure to explore the Gallery by installing the ubuntu-html5-theme-examples package and then checking out /usr/share/ubuntu-html5-theme/0.1/examples/
  • I decided to use yuidoc as the framework for adding source code comments as the basis for auto-generated web docs (a sketch of what such a comment looks like follows the steps below). After you install yuidoc using npm you can build the docs from source as follows:
  1. Get the ubuntu-html5-theme branch: bzr branch lp:ubuntu-html5-theme
  2. Move to the JavaScript directory: cd ubuntu-html5-theme/0.1/ambiance/js/
  3. Build the docs: yuidoc -c yuidoc.json . This creates the ./build directory.
  4. Launch the docs by opening build/index.html in your browser. They should look something like this 
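
As promised above, a YUIDoc block comment looks roughly like this; the class and method names here are purely illustrative, not taken from the framework:

/**
 * A short description of the class.
 *
 * @class Toolbar
 * @constructor
 * @param {String} id Id of the element to enhance
 */

/**
 * Shows the toolbar.
 *
 * @method show
 * @return {Boolean} true if the toolbar became visible
 */
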
Thanks to +Adnane Belmadiaf for some theme work and his always helpful consultation, to +Daniel Beck for his initial writeup of the Ubuntu HTML5 framework, and of course to the developer.ubuntu.com team for their always awesome work!




Read more
Anthony Dillon

I was recently asked to attend a cloud sprint in San Francisco as a front-end developer for the new Juju GUI product. I had the pleasure of finally meeting the guys that I have collaboratively worked with and ultimately been helped by on the project.

Here is a collection of things I learnt during my week overseas.

Mocha testing

Mocha is a JavaScript test framework that supports asynchronous testing and runs in the browser. Previously I found it difficult to imagine a use case when developing a site, but I now know that any interactive element of a site could benefit from Mocha testing.

This is by no means a full tutorial or feature list for Mocha, just my findings from a week with the UI engineering team.

Break your app or website down into small elements and test their logic

If you take a system like a user’s login and registration, it is much easier to test each function of the system. For example, if the user hits the signup button you should test that the registration form is then visible to the user. Then work methodically through each step of the process, testing as many different inputs as you can think of.
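
As a rough illustration (the tiny view object below is a made-up stand-in, not code from the Juju GUI), such a check written with Mocha could look like this:

var assert = require("assert");

// A made-up stand-in for the real view code, just enough to run the test.
var signupView = {
  formVisible: false,
  clickSignup: function () { this.formVisible = true; },
  isRegistrationFormVisible: function () { return this.formVisible; }
};

describe("signup", function () {
  it("shows the registration form after the signup button is clicked", function () {
    signupView.clickSignup();
    assert.strictEqual(signupView.isRegistrationFormVisible(), true);
  });
});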

Saving your bacon

Testing undoubtedly slows down initial development but catches a lot of mistakes and flaws in the system before anything lands in the main code base. It also means you don’t have to manually check everything by hand when something changes: you simply run the test suite and see the ticks roll in.

Speeds up bug squashing

Bug fixing becomes easier for the reporter and the developer. If the reporter submits a test that fails due to a bug, the developer will get the full scope of the issue, and once the test passes the developer and reporter can be confident the problem no longer exists.

Linting

I have read a lot about linting in the past but have not needed to use it on any project I have worked on to date. So I was very happy to use, and be taught about, the linting performed by the UI engineering team.

Enforces a standard coding syntax

I was very impressed with the level of code standards it enforces. It requires all code to be written in a certain way, from indentation and commenting to unused variables. The result is that anyone using the code can pick it up and read it as if it were created by one person, when in fact many people may have contributed to it.

Code reviews

In my opinion code reviews should be performed on all front-end work to discourage sloppy code and encourage shared knowledge.

Mark up

Markup should be very semantic. This can be a matter of opinion, but shared discussion will get the team to an agreed solution, which can then be reused by others in similar situations.

CSS

CSS can be difficult as there are different ways to achieve a similar result, but with a code review the style used will become common practice within the team.

JavaScript

JavaScript is a perfect candidate, as different people have different methods of coding. A review will catch any sloppy code or shortcuts, and makes sure your code is refactored to best practice the first time.

Conclusion

Test driven development (TDD) does slow the development process down, but it enforces better output from the time you spend on the code and fewer bugs in the future.

If someone writes a failing test for your code which is expected to pass, working on the code until the test passes is a much easier way to demonstrate that the code now works, along with all the other tests for that function.

I truly believe in code reviews now. Previously I was sceptical about them. I used to think that “because my code is working” I didn’t need reviews and they would slow me down. But a good reviewer will catch things like “it works, but didn’t you take a shortcut two classes ago which you meant to go back and refactor?”. We all want our code to be perfect and to learn from others on a daily basis. That is what code reviews give us.

Read more
Michael

I’ve spent a few evenings this week implementing a derbyjs version of the Todo spec for the TodoMVC project [1] – and it was a great way to learn more about the end-to-end framework, and appreciate how neat the model-view bindings really are. Here’s a 2 minute demo showing the normal TodoMVC functionality as well as the collaborative editing which Derby brings out of the box:

 

It’s amazing how simple the code can be thanks to Derby’s model-view bindings. It’s really just two files containing the functionality:

The other files are just setup (define which queries are allowed, define the express server, and some custom style on top of the base.css from TodoMVC). Well done Nate and Brian, and the DerbyJS community (which seems to be growing quite a bit over the last few weeks)!

[1] I’ve still got a few things todo before I can submit a pull-request to get this added to TodoMVC.


Filed under: javascript

Read more
Michael

Over the last week or so I’ve spent a few hours learning a bit about DerbyJS – an all-in-one app framework for developing collaborative apps for the web [1]. You can read more about DerbyJS itself at derbyjs.com, but here are six highlights that I’m excited about (text version below the video):

 

1. The browser reflects dev changes immediately. While developing, derbyjs automatically reflects any changes you make to styles, templates (and scripts?) as soon as you save. No need to switch windows and refresh, instead they’re pushed out to your browser(s).

2. Separation of templates (views) and controllers. Derbyjs provides a model-view-controller framework that we’ve come to expect, with Handlebars-like templates and trivial binding to any event handlers defined in your controller. Derby also provides standard conventions for file locations and bundles your files for you.

3. Model data is bound to the view – derbyjs automatically updates other parts of your templates that refer to any data which the user changes, but that’s not all…

4. Model data is synced real-time (as you type/edit) – updating the data you are changing in all browsers viewing the same page. The data just synchronises (and resolves conflicts) without me caring how. (OK, well I really do care how, but I don’t *need* to care).

5. The same code runs on both the (node) server and the browser. The code that renders a page after a server request (great for initial page load and search indexing) is one and the same code that renders pages in the browser without (necessarily) hitting the server.

6. My app works offline out of the box. That’s right – as per the demo – any changes made while offline are automatically synced back to the server and pushed out to other clients as soon as a connection is re-established.

It’s still early days for DerbyJS, but it looks very promising – opening up the doors to loads of people with great ideas for collaborative apps who don’t have the time to implement their own socket.io real-time communication or conflict resolution. Hats off to the DerbyJS team and community!

[1] After doing similar experiments in other JS frameworks (see here and here), and being faced with the work of implementing all the server-sync, sockets, authentication etc. myself, I went looking and found both meteorjs and derbyjs. You can read a good comparison of meteorjs and derbyjs by the derbyjs folk (Note: both are now MIT licensed).


Filed under: javascript

Read more
Michael

After experimenting recently with the YUI 3.5.0 Application framework, I wanted to take a bit of time to see what other HTML5 app development setups were offering while answering the question: “How can I make HTML5 app development more fun on Ubuntu” – and perhaps bring some of this back to my YUI 3.5 setup.

I’m quite happy with the initial result – here’s a brief (3 minute) video overview highlighting:

  • Tests running not only in a browser but also automatically on file save without even a headless browser (with pretty Ubuntu notifications)
  • Modular code in separate files (including html templates, via requirejs and its plugins)

 

 

Things that I really like about this setup:

  • requirejs - YUI-like module definitions and dependency specification makes for very clear code. It’s an implementation of the Asynchronous Module Definition “standard” which allows me to require dependencies on my own terms, like this:
    require(["underscore", "backbone"], function(_, Backbone) {
        // In here the underscore and backbone modules are loaded and
        // assigned to _ and Backbone respectively.
    })

    There’s some indication that YUI may implement AMD in its loader too. RequireJS also has a built-in optimiser to combine and minify all your required JS during your build step. With two plugins for RequireJS I can also use CoffeeScript instead of JavaScript, and load my separate HTML templates as resources into my modules (no more stuffing them all into your index.html); a small sketch of the template loading follows this list.

  • mocha tests running on nodejs or in the browser – as shown in the above screencast. Once configured, this made it pretty trivial to add a `make watch` command to my project which runs tests automatically (using nodejs’ V8 engine) when files change, displaying the results using Ubuntu’s built-in notification system. (Mocha already has built-in growl support for Mac users; it’d be great to get similar OSD notifications built in too).
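
To illustrate the template loading mentioned in the first bullet, here is a minimal sketch using the RequireJS text plugin; the template path, element id and data are made up for this example:

require(["underscore", "text!templates/goal-item.html"], function(_, goalTemplate) {
    // goalTemplate is the raw contents of templates/goal-item.html,
    // ready to be compiled with underscore's _.template().
    var render = _.template(goalTemplate);
    document.getElementById("goals").innerHTML = render({title: "Write more tests"});
});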

The setup wasn’t without its difficulties [1], but the effort was worth it: now I have a fun environment to start building my dream app (we’ve all got one, right?) and continue learning. I think it should also be possible for me to go back and re-create this nodejs dev environment using YUI – which I’m keen to try if someone hasn’t already done something similar – or even possibly without needing nodejs. I think the challenge for YUI will be this: if and when most other modules can be loaded via AMD, why, as an app developer, would I want to commit to one monolithic framework release when I can cleanly pick and choose the specific versions of small, tightly-focused modules that I need (assuming my tests pass)? Or perhaps YUI will join in and begin versioning modules (and module dependencies) rather than the one complete framework, so that they are available via any AMD loader – that would rock!

Thanks to James Burke (author of RequireJS) and the brunch team for their help!

[1] For those interested, things that were difficult getting this setup were:

  • Many JS libraries are not yet AMD-ready (or not yet offering support), which means adding shims to load them correctly (or using the use plugin in some cases). Sometimes this gets complicated (as it did for me with expect.js). I don’t know whether AMD will get widespread adoption. A result of this is that many JS libraries are designed to work within the browser or node only (i.e. they assume that either window or module/exports will be available globally).
  • Using the coffeescript plugin is great for looking at the code, but the errors displayed when a test run hits a coffeescript parse error are often hard to decipher (although I could probably use an editor plugin to check the code on save and highlight errors).
  • A recent nodejs version isn’t yet available in the ubuntu archives for Precise. It wasn’t difficult, but I had to install Node 0.6.12 to my home directory and put its bin directory on my path before I could get started.

If you want to try it out or look at the code, just make sure NodeJS 0.6.12 is available on your path and do:

tmp$ bzr branch lp:~michael.nelson/open-goal-tracker/backbone-test/
Branched 42 revisions. 
tmp$ cd backbone-test/
backbone-test$ make
[snip]
backbone-test$ make test
...........
✔ 12 tests complete (47ms)

Filed under: javascript, open-goal-tracker, ubuntu

Read more
Michael

I had a few hours recently to try updating my Open Goal Tracker javascript client prototype to use jQuery Mobile for the UI… and wow – it is so nice, as a developer with an idea, not having to think about certain UI issues (such as a touch interface, or just basic widget design). I can see now how ugly my previous play-prototype looked. Here’s a brief demo of the jQueryMobile version (sorry for the mumbling):

 

That’s using jQuery Mobile 1.01 for the UI and YUI 3.5.0PR2 for the MVC client-side framework, although I’m tempted to move over to backbone.js (which is what the YUI application framework is based on, it seems). Backbone.js has beautifully annotated source and a book – Developing backbone applications - which so far seems like very worthwhile reading material.

The prototype can be played with at http://opengoaltracker.org/prototype_fun/


Filed under: javascript, jquery, open-goal-tracker, yui

Read more