This Calendar facilitates moving between months using the chevron icons in the top right corner. You can even jump to a particular month of your choice by clicking on the month name in the header, which pops open a month selector. Sweet 🥳! The demo is at the end of this article.
This article describes how you can build a simple calendar using React.js, without resorting to any third-party plugin, using CSS Grid to place the month's dates at their proper positions.
The simplest thing you can do is iterate over the dates of a month and render an HTML element, e.g., a span, for each date.
The code that does this trick is below:
class Calendar extends Component {
render() {
return (
<div className="calendar">
<DaysOfMonth />
</div>
);
}
}
class DaysOfMonth extends Component {
  render() {
    const days = daysInMonth(4); // Get the days in the month of May. Defined inside the date.js file.
    return days.map((day, i) => (
      <span key={i}>
        {day}
      </span>
    ));
  }
}
By using this code you'll get nothing except a long list of span elements. To convert them into a proper calendar-like grid, we can give the calendar rule the display: grid; and grid-template-columns: repeat(7, 1fr); properties.
.calendar {
width: 400px;
display: grid;
grid-template-columns: repeat(7, 1fr);
}
Tada!! This will get you something like what's shown below. I'm using the Firefox DevTools here to inspect the grid.
This is not a perfect calendar, but it does look a bit calendar-ish! If you look closely, you'll notice that the month of May 2020 starts on a Friday, and if we consider Sunday the first day of the week, then we have to move 1 May 2020 to column 6 (1-based index). How can we do this? Well, CSS Grid provides a property, grid-column-start, which takes a 1-based index specifying the column on which a certain grid item should be placed. We now just need to calculate that value.
We have to create a function that calculates the First Day of a Given Month. Updated code is after the following image.
class DaysOfMonth extends Component {
  render() {
    const days = daysInMonth(4); // Get the days in the month of May. Defined inside the date.js file.
    const dayToBeginTheMonthFrom = firstDayOfMonth(this.props.month); // Get the first day of a given month. Defined inside the date.js file.
    const style = { gridColumnStart: dayToBeginTheMonthFrom + 1 }; // Adjust for the 1-based index that CSS Grid expects.
    return days.map((day, i) => (
      // Only the first date needs the explicit column start; the rest flow after it.
      <span key={i} style={i === 0 ? style : {}}>
        {day}
      </span>
    ));
  }
}
The result you can see in the image below:
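For completeness, here is a minimal sketch of what the date.js helpers used above might look like. The helper names and the 0-based month convention come from the article; the function bodies are my own assumption:

```javascript
// date.js (sketch) -- helpers assumed by the DaysOfMonth component.
// Month indices are 0-based, matching JavaScript's Date API (4 = May).

// Returns an array [1, 2, ..., n] of the date numbers in the given month.
function daysInMonth(month, year = new Date().getFullYear()) {
  // Day 0 of the next month is the last day of this month.
  const count = new Date(year, month + 1, 0).getDate();
  return Array.from({ length: count }, (_, i) => i + 1);
}

// Returns the weekday the month starts on (0 = Sunday ... 6 = Saturday).
function firstDayOfMonth(month, year = new Date().getFullYear()) {
  return new Date(year, month, 1).getDay();
}

module.exports = { daysInMonth, firstDayOfMonth };
```

With these, firstDayOfMonth(4, 2020) gives 5 (Friday), so the gridColumnStart computed in the component becomes 6, exactly the column we wanted.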
We have finished the core of displaying the Calendar. Now all that remains is to style the Grid and Grid items, and the Calendar header, week row and its behaviour, which you can explore on the CodeSandbox demo below this article 😎.
Oh yes, I used CSS gradients a lot as I was reading about them a lot recently. Please don’t shout at me if I have offended you! 😇.
Thanks for stopping by. See you next time.
A SW implementation depends on your build system.
Either you can go with a manual setup (instantiating the SW, handling its lifecycle events, handling cache invalidation; the list goes on), where all these tasks come with their own overheads, and at the end you'll have a long SW JavaScript file that is a little hard to manage in the long run, like all the other JavaScript code we write.
Or, better, use a third-party solution. Specifically, Workbox, again from Google.
I went with the second option for the following reasons:
Utilizing a well-tested approach instead of inventing my own (one of the software design principles), and,
We employ Webpack as our build tool, and Workbox provides a Webpack plugin that integrates well with Webpack's asset-generation pipeline.
Let's see how we integrated this Workbox plugin into our project; later I'll discuss the problems we ran into that put the whole project in jeopardy, owing to our limited knowledge of topics like how the browser's Cache Storage works and how to handle headers on CloudFront. Head on to Part 2 if you want to skip this implementation article altogether.
The first task is to install the Workbox Webpack plugin, using the command npm install workbox-webpack-plugin --save-dev.
Create a new file serviceWorker.js and add it to your application's client entry point (there will be a server entry point also, in case you have an isomorphic application).
// serviceWorker.js
if ('serviceWorker' in navigator) {
window.addEventListener('load', () => registerSW());
}
function registerSW() {
navigator.serviceWorker.register('/sw.js')
.then((registration) => {
console.info('ServiceWorker registration successful: ', registration, ' ', '😍');
}, (err) => {
console.error('ServiceWorker registration failed: 😠', err);
});
}
Now import this file in your client entry point (in my case client.js) using import '<path>/serviceWorker';. This will cause the SW to install whenever your application is loaded in a browser. If you look closely, you can see we are registering an HTTP path that points to an sw.js file. That sw.js file will actually contain your SW code, so you also need a way to serve it from your server. Let's create this file next.
Create a new file sw.js and put some Workbox-related configuration inside it. We'll talk about it in just a bit.
// webpack/sw.js
workbox.skipWaiting();
workbox.clientsClaim();
workbox.core.setCacheNameDetails({
prefix: 'myappname',
suffix: 'msiv1'
});
workbox.precaching.precacheAndRoute(self.__precacheManifest);
To understand what those skipWaiting
and clientsClaim
method calls are, we need to understand a bit of the SW lifecycle12.
This is not an exhaustive introduction, just a brief overview; please check the footnote links for more information.
Every SW has some lifecycle events, and out of those, Install and Activate are the ones we are interested in. Whenever a new page is requested for the very first time, the SW's Install event is fired; as soon as it has finished installing, its Activate event is fired, and the SW activates and starts to intercept network calls.
So far so good.
Now if you refresh the page, the new SW will install again, but this time, after installing, it will go to the Waiting state instead of Activating. The reason is that we already have an existing SW from last time. Only when the existing SW gives up its client (the page) will the new SW Activate. So, there is a bit of a delay before new functionality is available to our page.
This is essentially what the above two statements do: skipWaiting() skips the SW's waiting phase, and clientsClaim() claims all the clients as soon as the SW Activates.
Let's come back from that little detour. The method setCacheNameDetails merely lets Workbox know the name by which it should name the cache.
Let's talk a bit about precaching3, which is what our last line basically does. Precaching essentially means caching our assets (JavaScript and CSS) into the browser's cache store, in the background, as soon as our SW Installs and Activates. That mysterious variable self.__precacheManifest is an array, usually generated by the Workbox Webpack plugin in a separate file named precache-manifest.<revision>.js in your dist or build directory, that contains all our assets along with their hash/revision.
// precache-manifest.<revision>.js
self.__precacheManifest = [
{
"url": "/mobile_assets/home-182321.js"
},
{
"url": "/mobile_assets/icons.svg",
"revision": "932723"
}
];
Time to take a step back and think about which things we would like to cache in the user's browser. There are a few items to consider: images, fonts, API calls, HTML, CSS and JavaScript. Anything else? I think this list will do for now.
If you remember, we have already cached our CSS and JavaScript using Workbox precaching. What about the other items? Let's take them one by one.
a. Images: You might not want to cache these, as they will quickly fill the cache quota the browser has allotted you. So, my advice is to ignore these.
b. HTML: This will also quickly fill the cache quota if your pages are rendered from the server, in case your application is isomorphic. So, ignore this one too. You can, however, always cache the Home page. Put this inside your sw.js file.
// webpack/sw.js
workbox.routing.registerRoute(/(\/$|\/\?.*$)/, workbox.strategies.networkFirst({
cacheName: 'pages-cache',
plugins: [
new workbox.expiration.Plugin({
maxAgeSeconds: 1 * 24 * 60 * 60 // 1 day
})
]
}));
c. API calls: You can always cache the API calls. They don’t take up much quota space. Put this inside your sw.js file.
// webpack/sw.js
workbox.routing.registerRoute(/.*\/my_api\/v1.*/, workbox.strategies.staleWhileRevalidate({
cacheName: 'apis-cache',
plugins: [
new workbox.expiration.Plugin({
maxAgeSeconds: 1 * 24 * 60 * 60 // 1 day
})
]
}));
d. Font cache: You can cache your fonts also. Put this inside your sw.js file.
// webpack/sw.js
workbox.routing.registerRoute(/.*woff/, workbox.strategies.cacheFirst({
cacheName: 'fonts-cache',
plugins: [
new workbox.expiration.Plugin({
maxAgeSeconds: 1 * 24 * 60 * 60 // 1 day
})
]
}));
Let's see what we actually did. We are telling Workbox to intercept some URLs based on the first parameter to the registerRoute method. E.g., in our apis-cache case we are using a RegEx to intercept our API calls to the server. The second parameter to each of the registerRoute calls is a Workbox Strategy. A Workbox Strategy is simply a caching pattern that determines how the SW handles the fetch request and then responds to the client (the browser).
We have used three types of strategies networkFirst
, staleWhileRevalidate
and cacheFirst
. Let's define in brief what these three strategies do: networkFirst tries the network first and falls back to the cache if the network request fails; staleWhileRevalidate responds from the cache immediately (when available) while fetching a fresh copy in the background to update the cache for next time; cacheFirst responds from the cache and goes to the network only on a cache miss.
You can read in detail (with diagrams) about these strategies here4.
Our sw.js implementation is now complete, but it won't work on its own. Our next step is to configure the Workbox Webpack plugin5, which will utilize our sw.js file.
The Workbox Webpack plugin provides two classes, GenerateSW and InjectManifest. GenerateSW generates a complete service worker for you, whereas InjectManifest injects the precache manifest into a service worker file that you author yourself, which is what we want here.
Here is the Webpack configuration snapshot on how to include this plugin.
// webpack/client.dev.js and webpack/client.prod.js
module.exports = {
// ...
plugins: [
new WorkboxPlugin.InjectManifest({
swSrc: path.join(__dirname, 'sw.js'),
swDest: 'sw.js',
})
]
};
The InjectManifest plugin expects an object of properties. Here we are passing two: swSrc, the path to our source sw.js file, and swDest, the name of the generated SW file, which is emitted relative to the output.path property of Webpack.
Well, we have come a long way, but there are a couple of things to take care of. Remember, in the section Implementing SW Loader/Instantiator, I talked about serving the sw.js file from your server? It's essential that you serve this file at the root of your application, so that visiting http://<your-app>/sw.js opens the sw.js file without any redirect, be it a temporary or a permanent one.
// server.js
const app = new express();
app.get('/sw.js', (req, res) => {
res.setHeader('Cache-Control', 'max-age=0, no-cache, no-store, must-revalidate');
res.sendFile('sw.js', { root: path.join(__dirname, 'dist') });
});
Everything looks good to go now. Congrats, that's it! We have implemented our SW, which you can see in the Application tab of the DevTools. Here I'm referencing TravelTriangle's website. (Yep, I work here 😉)
But wait, there is more to it. PWA apps are known to be accessible offline, so users don't have to open the browser and visit our website; we can let them add our app to their home screen. We can achieve this too, by introducing another file, manifest.json6, and adding it to the HTML we initially send to the client, using the code <link rel="manifest" href="/dist/manifest.json" />.
{
"//": "webpack/manifest.json",
"short_name": "TravelTriangle",
"name": "TravelTriangle",
"icons": [
{
"src": "http://www.cdn-site.com/192/logo.png",
"sizes": "192x192",
"type": "image/png"
},
{
"src": "http://www.cdn-site.com/512/logo.png",
"sizes": "512x512",
"type": "image/png"
}
],
"start_url": "/",
"display": "standalone",
"theme_color": "#2f847d",
"background_color": "#ffffff"
}
Now your application will display an Add to Home Screen link at the bottom of the browser window.
The application will have its own splash window.
And, it will be running in full screen. Sweet!
So far so good, and everyone is happy!!
But developing a feature is not that straightforward a process. You run into multiple issues and try different things to make the feature work. This was the case here too. We ran into some issues and took various steps before eventually making the feature fully functional.
This is the backstory. Follow to Part 2 for this adventure.
Thanks for stopping by. See you next time.
https://developers.google.com/web/fundamentals/primers/service-workers/lifecycle. ↩︎
https://developers.google.com/web/fundamentals/primers/service-workers/. ↩︎
https://developers.google.com/web/tools/workbox/guides/precache-files/ ↩︎
https://developers.google.com/web/tools/workbox/modules/workbox-strategies ↩︎
https://developers.google.com/web/tools/workbox/modules/workbox-webpack-plugin ↩︎
https://developers.google.com/web/fundamentals/web-app-manifest/ ↩︎
This article is the backstory of how we accomplished the SW: the issues we ran into and how we approached them. Quite a task it was.
There were basically two main problems we encountered: the browser's Cache Storage quota being exceeded, and missing CORS headers on the CloudFront distribution.
Let's talk about these issues.
Our SW implementation was working fine while we were serving assets from our project's dist or build directory. But it had problems when we served them from CloudFront on our Staging and Production environments. The main culprit: the Cache Storage quota the browser had allotted to the application was being exceeded, which resulted in the SW not processing/caching the remaining assets.
What was the problem? Simple: we were getting lots of errors about CORS (Cross-Origin Resource Sharing). It's a mechanism that uses some additional HTTP headers to allow an application running on one server to access resources served from a separate server. In our case we were serving our assets from CloudFront, which was different from our application server, and we had little idea how to allow CORS requests on it. We knew that somewhere on CloudFront we needed to add the Access-Control-Allow-Origin header, but WHERE?
Solution? We approached the Infrastructure team for help. Sadly, they had little idea about it either 😐. Now what? Fortunately, the error also suggested one workaround: request the resources from our SW file in no-cors mode.
But this workaround has a limitation: when we request a resource in no-cors mode, the resulting response is considered an Opaque Response. More on this next. We quickly tried it out and updated our sw.js file as follows:
// sw.js
const customPreCacheName = 'traveltriangle-precache-msiv1';
self.addEventListener('install', (e) => {
  e.waitUntil(
    caches.open(customPreCacheName)
      .then(cache => Promise.all(
        // Fetch each asset in no-cors mode and put it in the cache;
        // Promise.all keeps the install event waiting on every fetch.
        self.__precacheManifest.map(a => {
          const request = new Request(a.url, { mode: 'no-cors' });
          return fetch(request).then(response => cache.put(request, response));
        })
      ))
  );
});
// Commented default precaching supplied by the Workbox
// workbox.precaching.precacheAndRoute(self.__precacheManifest);
Here, we are intercepting the SW install event, opening a custom cache, manually creating a Request object for each asset and putting the response into the cache. BUT, it didn't work. We were still exceeding the allotted quota! 😐.
We were struggling to understand the characteristics of this issue properly. But one thing was clear: the problem was with the Opaque Response. We researched more and stumbled upon a Stack Overflow link, where we learned that the Cache Storage API has trouble dealing with Opaque Responses.
The Cache Storage API adds about 7 megabytes of padding around each opaque response, because an Opaque Response has its Response.status set to 0 and not 2XX, even when the request was successful.
So we concluded that the two problems were related. The quota was exceeding because our CORS headers were not correct on the CloudFront. We were back to the original problem on how to set the CORS headers on the CloudFront.
We went through the CloudFront documentation and found a link that mentioned where to add the required headers. It was now time to fix the problem. We altered one of the Behaviors and whitelisted the Access-Control-Allow-Origin
header.
So, we changed the headers setting from this:
to this:
And all was working for good. Right? NO! Now, what on earth could possibly go wrong? We had whitelisted the header already, 😐.
There was one thing we had overlooked: the browser always sends an encoding header. We verified this by copying the request from the Network tab as cURL. The curl command had a --compressed switch. Voila, this was missing from CloudFront. Thus we also whitelisted the Accept-Encoding header, as shown below:
Peace! Everything is sorted now. The SW was happy, fetch was happy, Cache was happy, and so were we!
From start to finish we were sure of one thing: we needed to set Access-Control-Allow-Origin on CloudFront. But it was a miss from our Infrastructure team, and the cost was a delay in rolling out the feature. Sometimes coding alone doesn't solve the problem; we also need to pay attention to the environment configuration.
Altering the CloudFront configuration was really not our task, but we had to stretch out a bit. I think this is what we all mean when we write in our resumes, Willing to go the extra mile, right?
I would like to pay special thanks to one of my colleagues Rahul Jain for his valuable efforts in debugging and solving this issue. And yes, we reverted our custom caching code you just witnessed above.
Thanks for stopping by. See you next time.
Although browser consoles have improved a lot, they don't provide complete flexibility for writing code in them; code assistance, syntax highlighting and indentation are a few features to name, and they are crucial.
This is where Scratch files come into the picture. Scratch files are a feature of JetBrains IDEs, and, as we work on WebStorm, let's see how we can employ them for our own use and ease our life a little when testing complex code, without relying on the difficult-to-tame browser console windows.
Scratch files can be created by right-clicking on your project name in Project window and select New==>Scratch File, or alternatively just focus the Project window and press Alt+Ins keyboard shortcut to bring the context menu. A much better option is to just use the Ctrl+Alt+Shift+Ins keyboard shortcut. Then just select the JavaScript option and press Enter to generate the new scratch file.
Now, let's pretend we have a list of items with a price and a quantity, and we want to calculate the total order price, with the requirement that we filter out items that have a zero (0) price on them.
const items = [
{ name: 'Cookies', price: 35, quantity: 5 },
{ name: 'Chocolates', price: 65, quantity: 2 },
{ name: 'Juice', price: 20, quantity: 1 },
{ name: 'KissMe Toffee', price: 0, quantity: 5 }
];
const totalPrice = items.filter(item => item.price)
  .reduce((acc, item) => acc + (item.price * item.quantity), 0);
console.info(`Total order price is : Rs. ${totalPrice}`);
I'm certain you can't manage this much code in a browser console window. Let's put this code in the scratch file we created. After writing/pasting the code we need to execute it. We can do this in two ways: either right-click the scratch file's tab and select Run, or just use the Ctrl+Shift+F10 keyboard shortcut. Subsequently, you can press Shift+F10 to re-run your scratch file.
Then your scratch file executes and the results are shown in the output window.
You can even debug your code right in this scratch file. Just put some debug points and select Debug from the context menu instead of Run.
This is just a little feature most of us don't know about. I'll come up with one more WebStorm feature you can use to simplify your life and your code management.
Thanks for stopping by. See you next time.
However, history shows that every technology has promised something fabulous, and in this glitter we sometimes forget how the basic things work. E.g., when .NET
was introduced people quickly started building amazing UIs for desktop applications, but they overlooked one very important detail: Delegates
! The underlying mechanism that facilitates Events
and P/Invoke
(calling native Windows C APIs).
Similarly, when we work with this awesome React.js library we forget to pay attention to one of the basic things, FORMS
1!
So, let's get started with it.
Every element in a form, be it, input[radio]
, input[text]
, input[checkbox]
or select
, uses two or three basic props to communicate its current state to us.
value
: This prop can be used by radio
, checkbox
, text
and select
.checked
: This prop can be used by radio
and checkbox
.onChange
: This prop is used by all of the elements, and is an event handler. This handler is fired whenever the element changes its state.
Moreover, these props are necessary so that your elements remain controlled components. Being Controlled here means React.js knows the component's/element's state at all times, so that you can save that state inside your own state variable.
Let’s talk about every prop one by one.
value
propThis prop is used to set the default value or the new value (whenever it’s available from some source) of a component.
<input type="text" className="form-control" id="name" placeholder="Your name"
value={this.state.name} onChange={this.nameHandler} />
Here, we are setting the value of the input[text] element from our state; that state changes whenever nameHandler gets executed in response to the onChange event.
nameHandler = (e) => this.setState({ name: e.target.value });
This value
prop can also be used with radio
and checkbox
apart from checked
prop. The difference is that value is used to identify which element changed, rather than to hold the state itself. For example, suppose we want to use two checkboxes for the languages English and Hindi; in this case a single checked prop is insufficient, so the value prop is used instead. E.g.,
<input className="custom-control-input" type="checkbox" name="language"
id="hindi" value="hindi" onChange={this.languageHandler} />
<input className="custom-control-input" type="checkbox" name="language"
id="english" value="english" onChange={this.languageHandler} />
languageHandler = (e) => {
const languageIndex = this.state.languages.findIndex(l => l === e.target.value);
if (languageIndex === -1) {
this.setState({ languages: [...this.state.languages, e.target.value] });
} else {
this.setState({
languages: [
...this.state.languages.slice(0, languageIndex),
...this.state.languages.slice(languageIndex + 1)
]
});
}
};
See how we are utilizing the value
prop. This prop also works similarly with the select
.
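The add-or-remove logic inside languageHandler can be distilled into a pure function, which makes it easy to test in isolation. A small sketch (the name toggleLanguage is mine, not from the component above):

```javascript
// Given the current languages array and a checkbox's value, return a new
// array with the value added if absent, removed if present.
// The original array is never mutated, matching the setState pattern above.
function toggleLanguage(languages, value) {
  const index = languages.findIndex(l => l === value);
  if (index === -1) {
    return [...languages, value];
  }
  return [
    ...languages.slice(0, index),
    ...languages.slice(index + 1)
  ];
}
```

The handler then reduces to this.setState({ languages: toggleLanguage(this.state.languages, e.target.value) }).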
checked
propThis prop is used to select either radio
or checkbox
.
<input className="custom-control-input" type="checkbox" name="married"
id="marriedYes" checked={this.state.married} onChange={this.marriedHandler} />
marriedHandler = (e) => this.setState({ married: e.target.checked });
onChange
propI think by now, you must have guessed, what it does. It let us bind a function that is executed whenever the value of the component/element gets change.
You can find the sample application on the Github2. Here is the screenshot of running application.
I thought of showing a custom Select Box, but I guess, that’s for later. Stay tuned!
]]>Thanks for stopping by. See you next time.
+
signs in between the variables and the user strings. Many server-side languages have a much more flexible string-concatenation system built in; Scala and Groovy are the ones I currently know of on the JVM (Java Virtual Machine). Although CoffeeScript has supported this style of string interpolation for a long time, ES6 has recently got it too, known as Template Literals.
What does a Template Literal look like? Consider an example where you need to create a combo box of people from a provided data set.
const people = [
{
name: 'Manvendra Singh',
id: 'manvendrask'
},
{
name: 'Kirti Nandwani',
id: 'knandwani'
},
{
name: 'Brij Kishor',
id: 'hackishor'
}
];
const options = people.map(person => '<option value="' + person.id + '">' + person.name + '</option>').join('');
const html = '<select name="people" id="people">' + options + '</select>';
document.querySelector('#people_div').innerHTML = html; // Given we have some div with id of people_div on it.
Do you see how we are creating the option elements and finally the select element? Isn't that difficult? Can you understand it? Can you write it? Neither can I! I myself faced many difficulties in creating those option elements; I was constantly confused between those "double quotes" and 'single quotes'.
Let's see now how the new Template Literals feature can rescue us here:
const options = people.map(person => `<option value="${person.id}">${person.name}</option>`).join('');
const html = `<select name="people" id="people">${options}</select>`;
document.querySelector('#people_div').innerHTML = html; // Given we have some div with id of people_div on it.
Can you get it now? Let me explain. Template Literals are strings surrounded by a pair of backticks ` instead of double or single quotes. Between these backticks, we write our user string, putting the variables inside a pair of curly braces preceded by a dollar sign.
Can you get it now? Of course, you can! By the way, if you noticed, I'm using the join method on the return value of the map function. That's because, by default, array elements are converted to a comma-separated string when used inside a string operation.
Here is the output by the way:
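As a quick aside, that default comma separation (and why join('') is needed) is easy to see in isolation:

```javascript
// Array elements are joined with commas when an array is coerced to a string.
const parts = ['<b>a</b>', '<b>b</b>'];
console.log(`${parts}`);     // <b>a</b>,<b>b</b>
console.log(parts.join('')); // <b>a</b><b>b</b>
```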
Now if you ask, what are Tagged Templates
? It's a way of passing a Template Literal through a user-defined function that returns the final string. Let's take an example:
const string = `This string contains a \t tab and\n one new line character.`;
console.log(string);
// Output
// This string contains a 	 tab and
// one new line character.
As the output shows, the JavaScript engine processes the special characters. What if we didn't want those special characters to be expanded? Consider the following:
const string = String.raw`This string contains a \t tab and\n one new line character.`;
console.log(string);
// Output
// This string contains a \t tab and\n one new line character.
What happened? What is that String.raw
doing there before our Template Literal?
My friend, that String.raw
is called a Tag Function
. A Tag Function processes the Template Literal and returns a new string. Getting it now? What is happening here is that there is a static raw
function defined on the String
object. The code snippet below shows its signature and how it's internally called by the JavaScript runtime while processing the Template Literal.
// String.raw is a static method with this signature:
String.raw = (strings, ...values) => {
  // strings is an array of the literal string parts.
  // ...values is an array that contains all of the rest parameters passed to this function.
};

// How the runtime calls it for our Template Literal:
String.raw(["This string contains a \t tab and\n one new line character."]);
To understand this function signature better, let's consider the following example, where we create our own tag function that returns the same string the JavaScript runtime would otherwise produce (we are just mimicking the built-in behavior):
const taggingFunc = (strings, ...values) => strings.reduce((sentence, string, index) => `${sentence}${string}${values[index] || ''}`, '');
const name = 'Manvendra Singh';
const id = 'manvendrask';
const introduction = taggingFunc`Hello I'm ${name}, and my id is ${id}`;
console.log(introduction); // Hello I'm Manvendra Singh, and my id is manvendrask
// This is how the JavaScript runtime has called this taggingFunc:
// taggingFunc(["Hello I'm ", ", and my id is ", ""], name, id);
Let's see what's happening here. The taggingFunc receives two kinds of parameters: the first is an array containing all of the user-defined string literals, as can be seen from the last comment, and the rest are the substituted values, collected into the values array at runtime, because a Template Literal can contain any number of substitutions.
Here we are just building the whole string using the Array reduce method, interleaving the values from the values array.
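To see that reduce in action, we can call a copy of taggingFunc by hand with the exact arrays the runtime would pass (it is re-declared here so the snippet is self-contained):

```javascript
// Same taggingFunc as above, re-declared for a self-contained run.
const taggingFunc = (strings, ...values) =>
  strings.reduce((sentence, string, index) =>
    `${sentence}${string}${values[index] || ''}`, '');

// Calling it by hand, exactly as the runtime would for
// taggingFunc`Hello I'm ${name}, and my id is ${id}`:
const result = taggingFunc(
  ["Hello I'm ", ', and my id is ', ''],
  'Manvendra Singh',
  'manvendrask'
);
console.log(result); // Hello I'm Manvendra Singh, and my id is manvendrask
```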
There are two things to notice, though: the strings array is always one element larger than values; and if a variable substitution ends the Template Literal, the strings array contains an empty string at the end, while if a substitution starts it, the strings array contains an empty string at the start.
Still confused? Let's take an example:
function tagFunc(strings, ...values) {
  // Inspect strings and values here.
}

// tagFunc(["Hello this is a variable at the ", ""], "end");
const string = tagFunc`Hello this is a variable at the ${'end'}`;

// tagFunc(["", " this line with a variable"], "Starting");
const string2 = tagFunc`${'Starting'} this line with a variable`;
You can see from the comments how the strings
and values
parameters are populated. It's simple and straightforward.
To add to our knowledge of this ES6 feature, Tagged Templates, let's go through one more example. Suppose we want to build a custom introductory string for a web developer, expanding the acronyms using the <abbr>
HTML tag.
const dictionary = {
JS: "JavaScript",
HTML: "Hyper Text Markup Language",
CSS: "Cascading Style Sheets"
};
function abbreviate(strings, ...values) {
const abbreviations = values.map(value => {
if (dictionary[value]) {
return `<abbr title="${dictionary[value]}">${value}</abbr>`
}
return value;
});
return strings.reduce((sentence, string, i) => `${sentence}${string}${abbreviations[i] || ''}`, '');
}
const name = 'Manvendra Singh';
const introduction = abbreviate`Hello, I'm ${name}, and I blog about ${'HTML'}, ${'JS'} and ${'CSS'}!`;
document.querySelector('.bio').innerHTML = introduction; // Given we have a div with the bio class on it.
Here is the output:
I think this last example might have shed some light on how important this concept can be. We can use it to build custom DSLs (Domain-Specific Languages); e.g., we can wrap the DOMPurify library in a tag function and sanitize our strings, much like String.raw does.
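To give a flavour of such a DSL, here is a sketch of a tag function that HTML-escapes every interpolated value. It works in the same spirit as wrapping a sanitizer like DOMPurify, though escapeHTML and escapeValue are illustrative names of my own, not a real library API:

```javascript
// Escape the characters that matter in HTML text and attribute contexts.
const escapeValue = value =>
  String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');

// A tag function: user-written string parts pass through untouched,
// while every interpolated value is escaped before being stitched in.
const escapeHTML = (strings, ...values) =>
  strings.reduce((out, str, i) =>
    `${out}${str}${i < values.length ? escapeValue(values[i]) : ''}`, '');

const userInput = '<script>alert(1)</script>';
const html = escapeHTML`<p>Hello ${userInput}!</p>`;
console.log(html); // <p>Hello &lt;script&gt;alert(1)&lt;/script&gt;!</p>
```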
Last but not least, ES6 Template Literals and Tagged Templates are as cool as the other features of the language, like Destructuring, Generators and Iterators, and the list goes on!
Thanks for being here till now. See you next time.
In essence, Node.js debugging has come a long way since its inception: from console.log statements, to the command-line client, to VS Code, to Chrome DevTools integration.
Let's get started with each of these, in the reverse order of their appearance.
Let’s say you have some script in file index.js
const os = require('os');
const cpus = os.cpus();
console.log(`${cpus.length} cpus found. Details follows.`);
cpus.forEach((cpu, i) => {
console.log(`Model: ${cpu.model}`);
});
This simple script just finds out how many CPUs the host machine has and prints their models. To debug this script in DevTools you can execute the command node --inspect --debug-brk index.js (on newer Node.js versions, the combined flag node --inspect-brk index.js does the same). This command spins up a V8 inspector on the default port 9229 (changeable with an option like --inspect=8888) and lets Chrome DevTools attach to the Node.js instance, allowing us to debug and profile. The command results in something like what's shown in the figure below:
Copy the URL starting with chrome-devtools:// and paste it into a new Chrome tab. You'll be welcomed by the DevTools with your script loaded in the Sources tab, paused on the first line. The script has a default breakpoint on the first line due to the presence of the --debug-brk switch in the previous command. If you forget to specify this switch, then nothing will get loaded in the inspector and the script execution will come to an end instantly (you can verify in the previous image that we didn't get our prompt back). Here is how it looks:
Now you can just add a few breakpoints by clicking on the gutter on the left and then step into and step over the code using the toolbar on the right pane. You can add watch expressions in the right pane, as can be seen in the image below:
Verify that the watch pane is showing the value of cpus.length. We are inspecting the cpus constant by hovering the mouse over it.
To debug scripts in VisualStudio Code we again need to do some homework. Let's see what that is.
The first step is to open your code folder in VS Code. Manvendra, that's obvious, please don't tell us.
From the left side panel select the fourth icon, the one with a bug on it. You'll see the following screen.
On the top part of the Debug panel, we can see some icons and a drop-down menu. The drop-down menu says No configuration. Select the gear icon next to it and a pop-up will open; select the Node.js menu item in it.
VS Code will create a new file at .vscode/launch.json
. This file contains various settings, but a minimal configuration is shown in the image below:
The most important one is the program key; it points to your main file. In our case, it holds the index.js filename. workspaceRoot is a special variable that VS Code uses internally to point to the currently loaded project directory. Now all we need to do is add some breakpoints in the editor gutter and hit the play button in the Debug panel. A new toolbar will then be visible on top, which contains all the usual debugging buttons. See in the image below, where we have our watch set up and mouse hovering in action.
More information on this configuration file is available at https://code.visualstudio.com/docs/nodejs/nodejs-debugging.
For those who love the command prompt or terminal, there is a separate workflow for debugging. It's similar to the GNU Debugger, popularly known as GDB. To start the script execution in debug mode you need to execute the command node debug index.js. The debugger by default listens on port 5858. You can type help for the available commands.
By default, the debugger launches the program in the break state, so to execute the program type cont and you will see the script result instantly. But that's not what we are using the node debugger for, right? So let's start the script execution again using the restart debugger command and then set breakpoints on line numbers 5 and 8 using sb(5) and sb(8). You can add watches too, e.g., to watch the variables cpus and i, type watch('cpus') and watch('i').
Now type cont to begin the execution of the program, and note how the execution stops at line 5, denoted by the break in statement. See how the debugger is showing us our watchers. If you look carefully, the second watcher i is showing "<error>". Why is that? Simple: i is not in the scope of the current debugging context.
You can evaluate expressions in the debugging context by typing the repl command. This drops you into the debugger REPL (read-eval-print loop). You can inspect variables using the exec command, e.g., in the image below I'm inspecting cpu using exec cpu. Note how the watchers list is now showing the current value of i while cpus is in an error state. To go to the next line of execution, type the next command. You can list all the available watchers with the watchers command.
A full list of commands and their usages is available at https://nodejs.org/dist/latest-v6.x/docs/api/debugger.html.
Technology has always made human life simpler, and this debugging technology is very useful for people writing software.
Debugging support has traveled a great distance. In my opinion, I would always use VisualStudio Code, as I won't have to leave my development environment where I have already opened the files I'm working on. If I relied on Chrome DevTools I would have to open each file again and then set the breakpoints, which is neither productive nor logical.
I recommend staying far away from Node.js' command line debugging client. It's neither easy nor intuitive to use. I highly advocate debugging your code while writing it instead of after deploying.
Thanks for being here till now. See you next time.
NPM, the Node Package Manager, is a program to find and install Node packages. It is not really a part of Node itself; it just comes bundled with the Node binaries, as it is the most popular package manager out there. But truth be told, it is not the only package manager available for Node. Recently Facebook released its in-house package manager called Yarn. Facebook claims Yarn is faster than NPM, which to a certain extent is true.
Most people get confused when talking about NPM, because there are essentially two different things: first, the NPM registry, available at https://www.npmjs.com, and second, the NPM CLI (Command Line Interface), available at https://github.com/npm/npm. They both work together out of the box.
The NPM CLI can be configured to work with different registries, e.g., your own private registry. The NPM CLI can also be used with Git repositories directly, e.g., if you want to install Express.js then you can point to the GitHub repo while installing it with the command npm install expressjs/express. Here expressjs is the organization that hosts the express project on GitHub. This command installs Express.js from the latest commit of the master branch of the GitHub repo. You can verify this by executing the command npm ls --depth=0, which lists all the packages currently installed inside the node_modules directory of the current directory.
As you can see it is showing the HEAD commit.
We can also install a GitHub repo package from a specific commit, tag, or branch, e.g., if we want to install Express.js version 4.15.3 directly we can execute the command npm i expressjs/express#4.15.3 (note, npm i is an alias of npm install).
Normally, when we execute the command npm i, it goes over the network, resolves and installs packages. But there is an option, --dry-run, that lets you know what packages are going to be installed, instead of actually installing them.
We normally install some packages globally, like grunt-cli, gulp-cli, express-generator and so on. To list all of them just execute the command npm ls -g, but that prints the whole dependency tree. To print only the first level of the tree instead, we can provide the additional argument --depth=0 to the command.
You probably might have guessed, this command would also work for your project directory that contains a package.json file, e.g. npm ls --depth=0
.
By the way, you can use the command npm ll -g --depth=0
to get more details about the installed packages.
You can even get the result in JSON format with the command npm ls -g --depth=0 --json
, in case you need to access programmatically what modules are installed either locally or globally.
This particular ls command has various options; check them all out with the command npm help ls.
If you remember from the beginning, we installed Express.js. That command created a node_modules directory and installed Express.js into it without documenting anywhere what we installed. This is a bad practice: whenever we install anything, we should document it. The best place to document it is the package.json file. A package.json file should at least contain two keys, name and version, as shown below:
{
"name": "npmify",
"version": "1.0.0"
}
The name field should be all lowercase, and the version should follow the semantic versioning (semver) scheme, which consists of major.minor.patch.
We can generate a package.json file with the command npm init, answering all of the questions it asks. If you are too lazy to answer them all, just use the command npm init -y, which will generate a package.json file with all the default values.
To save something inside this package.json use --save
or --save-dev
parameters (-S
and -D
are short aliases, respectively). This would save the dependency either under the dependencies key or under the devDependencies key.
E.g., executing npm i -S jquery
and npm i -D qunitjs
would result in the following package.json.
{
"name": "npmify",
"version": "1.0.0",
"dependencies": {
"jquery": "^3.2.1"
},
"devDependencies": {
"qunitjs": "^2.3.2"
}
}
There is one more option to npm i
command, --save-optional
or -O
, which would install the optional dependencies under the optionalDependencies key.
To update all of the installed packages we can use the command npm update
or npm update jquery
(to update only one particular package). This command would update the packages according to the version range specified in the package.json file, which we will talk about in a new post.
Sometimes we need to update the npm itself. We can update it with the command npm i -g npm
.
To check if installed packages are outdated we can use the command npm outdated
or npm outdated -g
(to list global outdated packages). E.g.,
NPM itself provides many configuration options, which we can list with the command npm config list -l. We can modify these configurations as per our needs, e.g., if you use the command npm init a lot, then it makes sense to provide a default author name, which we can do with the command npm config set init-author-name "Manvendra Singh".
To delete any particular config you can use the delete subcommand, e.g., to delete the default author name, we just added, use the command npm config delete init-author-name
.
I highly recommend that you set the save configuration to true, as it will automatically document any package you install inside package.json without you having to specify the --save or -S option. Just use the command npm config set save true.
Another useful command is npm search. This searches the NPM registry right from your command line, instead of you going to the NPM website and searching there, e.g., to search for babel, issue the command npm search babel.
If you look carefully at the following image you can see the packages installed locally, but pay special attention to the express@4.15.3 package and the error at the bottom. What is an extraneous package anyway? It is a package that is installed inside the node_modules directory but does not actually exist inside the package.json file. To remove any extraneous packages just use the command npm prune.
Many times we execute npm i and find out, to our darkest surprise, that our project breaks and does not build anymore. The main reason behind this is how npm i works. By default, NPM tries to upgrade our packages to the latest minor version, as they are listed in the package.json file with a caret character. To remedy this issue we can use the npm shrinkwrap command. This command generates an npm-shrinkwrap.json file containing all of your dependencies locked down to the exact versions which should be installed upon the next npm i. This file should be committed to the VCS (Version Control System).
Be sure to include the --dev
argument, because by default devDependencies will not be added to the npm-shrinkwrap.json file. However, starting with version 4.0.1, npm shrinkwrap will also include the devDependencies.
Keep in mind that the npm shrinkwrap command will fail if you have any extraneous packages installed, in other words, if a package is installed in the node_modules directory but not listed inside package.json.
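For reference, a shrinkwrap file generated from the package.json shown earlier would look roughly like the sketch below. This is illustrative only; the exact fields (from, resolved, and so on) depend on your npm version.

```json
{
  "name": "npmify",
  "version": "1.0.0",
  "dependencies": {
    "jquery": {
      "version": "3.2.1",
      "from": "jquery@^3.2.1",
      "resolved": "https://registry.npmjs.org/jquery/-/jquery-3.2.1.tgz"
    }
  }
}
```

Because the version recorded here is exact, the next npm i reproduces the same tree instead of floating up to a newer minor release.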
Apart from these helpful commands we learned, there are also some fun commands waiting for you!
E.g., the command npm home jquery
would open the website of the jQuery library and npm repo jquery
would open the Github repo right from the command prompt.
How cool is that? PRETTY MUCH, eh!
There are some easter eggs which npm has up its sleeve. E.g., try executing npm visnup and you will get the following in the terminal:
And another one is the Christmas easter egg, which you can get by executing the command npm xmas.
I hope you really did enjoy this post. Please let me know in the comments 🙂.
Thanks for reading till here. See you next time.
But writing JavaScript is a pain in the hands and a strain on the mind too.
The major cause that contributes to the side effects of writing JavaScript is bad editors. YES, bad editors. If you have ever worked in Java or .NET, then you probably have an idea of what it feels like when the editor supports you by suggesting the members of a particular object as you type.
There are very fine editors available, e.g., Sublime Text 3 and VIM. This site is itself written in VIM, but let's not discuss them here. Welcome VisualStudio Code. Its tagline, Code editing. Redefined., is true, believe me. It's a cross-platform open source editor available for Windows, Linux, and Mac.
Let’s come to the point.
VisualStudio Code has an amazing and impressive JavaScript code editing experience up its sleeve. For example, see in the screenshot below how it displays console in the IntelliSense list. You can open IntelliSense by pressing the Ctrl+Space key combination.
If we go further and try to type . (a single dot/period), the editor will display all the available members, and as you type it will filter them and display a brief description of the selected member:
It can even display the members of our own variables. For example, we have created a manvendra variable that holds a JavaScript object. If we try to type manvendra. it will display its members:
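The exact object from the screenshot isn't reproduced in the text; a hypothetical manvendra object like the one below is enough for IntelliSense to offer its members:

```javascript
// Hypothetical object standing in for the one in the screenshot.
const manvendra = {
  name: 'Manvendra Singh',
  blogsAbout: ['HTML', 'JS', 'CSS'],
  greet() {
    return `Hello, I'm ${this.name}`;
  },
};

// Typing `manvendra.` in VS Code would list name, blogsAbout and greet.
console.log(manvendra.greet()); // Hello, I'm Manvendra Singh
```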
Isn’t it awesome? Yes, surely is. VisualStudio can provide IntelliSense for Browser objects, as well as Node.js objects. For example, let’s say we have Electron project setup in our editor and we would like to use the ES6 destructuring feature and want to import app, BrowserWindow and Menu items from the electron object. Here is how VisualStudio Code can help us:
As you can see, when we put the cursor between the curly braces and press Ctrl+space, IntelliSense will pop up and display all the available members which we can destruct from the electron variable. Isn’t it cool? Yes, surely is, folks.
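In an Electron main process that import would read const { app, BrowserWindow, Menu } = require('electron'). The destructuring syntax itself works on any object, as this self-contained sketch shows (electronLike is a stand-in for the real electron module object):

```javascript
// Destructuring pulls named members out of an object in one statement.
// electronLike stands in for the object returned by require('electron').
const electronLike = {
  app: 'app-module',
  BrowserWindow: 'window-class',
  Menu: 'menu-class',
  ipcMain: 'ipc-module',
};

// Only the names listed between the braces are bound; the rest are ignored.
const { app, BrowserWindow, Menu } = electronLike;
console.log(app, BrowserWindow, Menu); // app-module window-class menu-class
```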
As you might have guessed by now, VisualStudio Code can even list the events which an object can listen to. For example, the app object has a ready event, which we can listen to and create a BrowserWindow, thus creating a new application window:
Okay, that was some awesome showing off by the program. But let's face the truth: how can VisualStudio Code display IntelliSense for such a typeless language? How is it possible in the first place? The answer lies in TypeScript. VisualStudio Code installs what are called Type Definition Files (*.d.ts) and a little TypeScript server called tsserver. You can read about them in the reference section at the end of this blog post.
BUT, this will not work by default unless you are on Windows 10 or Windows 7. To get IntelliSense working on other operating systems you have to install TypeScript globally using the command npm i -g typescript, and configure a property in the VisualStudio Code settings file (which you can open via File -> Preferences -> Settings) to point to the installed location of the typescript package's lib folder. For example, on my Linux machine I have the following setting:
{
"typescript.tsdk": "/opt/node6/lib/node_modules/typescript/lib"
}
Here is the screenshot of the IntelliSense in action on the Ubuntu 16.04 LTS:
You can also install typescript locally in your app, but that's kind of overkill for a package that you are not actually using directly.
You need at least Node version 6.x to get it working.
Thanks for reading till here. See you next time.
https://code.visualstudio.com/docs/editor/intellisense#_intellisense-features https://code.visualstudio.com/docs/languages/javascript#_intellisense
And that is what they call "the first impression is the last impression"! So the language gained a bad reputation.
The language itself is very flexible in programming style. You can use it to do structural programming, object-oriented programming or pure functional programming. Consider the following example, which capitalizes the words of a given sentence. This approach is purely structural.
function capitalizeText() {
const text = prompt('Enter some string to capitalize.', 'lorem ipsum dolar');
const result = capitalize(text);
alert(result);
}
function capitalize(text) {
const words = text.split(' ');
const transformedSentence = [];
for (let inc = 0; inc < words.length; inc++) {
let word = words[inc];
let capitalizedWord = [word.substring(0, 1).toUpperCase(), word.substring(1)].join('');
transformedSentence.push(capitalizedWord);
}
return transformedSentence.join(' ');
}
window.addEventListener('DOMContentLoaded', capitalizeText);
This is a very cool program, right? Maybe yes, but NO. There are various moving parts in this little program. We have many variable declarations and two functions (one of them poorly named: capitalizeText). The worst thing could be that it's not modular, and we saved ourselves from that only by moving our result variable inside the capitalizeText function.
Functions in themselves are good, but how we are using them is not! Well, this is how structural programming is done, and it's called imperative code (you are explicitly telling the program to loop over the words. Is this what we are supposed to do? NO, we are supposed to capitalize the sentence. We'll see how to fix this later.).
You see it’s just an ordinary code you may write in other languages. JavaScript is not that bad either!
Here is the same program in an object-oriented fashion. But wait, there are no classes in JavaScript, so how can we do object-oriented programming? Well, this is called modular programming, and we are really dealing with Objects and not their blueprints, AKA classes. This technique is achieved by wrapping the whole code in an IIFE (Immediately Invoked Function Expression).
(function() {
'use strict';
const Text = function (text) {
this.text = text;
};
Text.prototype.capitalizeWord = function (word) {
return [word.substring(0, 1).toUpperCase(), word.substring(1)].join('');
};
Text.prototype.capitalizeSentence = function () {
const words = this.text.split(' ');
const transformedSentence = [];
for (let inc = 0; inc < words.length; inc++) {
transformedSentence.push(this.capitalizeWord(words[inc]));
}
return transformedSentence.join(' ');
};
window.addEventListener('DOMContentLoaded', function () {
const text = prompt('Enter some string to capitalize.', 'lorem ipsum dolar');
let textObj = new Text(text);
alert(textObj.capitalizeSentence());
});
})();
The real benefit that comes from this technique is that we are not contaminating the global scope with our random function names. What if somebody else already defined capitalizeText? Who knows?
Here you can see we are defining an Object that can contain the state and the behavior that acts upon that state (see, no more global function names, and thanks to the IIFE our Text object is also not available in the global scope, hence not in the browser console).
Though we were able to remove some of the caveats of our program, in essence we are still creating those useless variables while capitalizing the words in the for loop! Moreover, our program is now a little more verbose! Remember Java? Let's try to write that feature again in the functional approach.
(function() {
'use strict';
function capitalizeWord(word) {
return [word.substring(0, 1).toUpperCase(), word.substring(1)].join('');
}
function capitalize(wordsCapitalizer, text) {
return text.split(' ').map(wordsCapitalizer).join(' ');
}
window.addEventListener('DOMContentLoaded', function () {
const text = prompt('Enter some string to capitalize.', 'lorem ipsum dolar');
alert(capitalize(capitalizeWord, text));
});
})();
Well, the program has become quite short, which is obvious thanks to the functional approach. But the real benefit here is that we are no longer declaring intermediate variables, and each computation is kept to its own respective function.
Functions that do not modify any global value or create any side effect (like opening a DB connection or writing to a file), and that always give us the same result for the provided input, are called pure functions.
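To make the distinction concrete, here is a minimal sketch (the callCount variable is ours, not from the code above) contrasting a pure function with an impure one:

```javascript
// Pure: the result depends only on the input; nothing outside is touched.
function capitalizeWord(word) {
  return word.charAt(0).toUpperCase() + word.slice(1);
}

// Impure: it mutates state outside itself (a side effect), so calling it
// twice with the same input changes the outside world differently each time.
let callCount = 0;
function capitalizeWordCounted(word) {
  callCount++; // side effect: external state is modified
  return word.charAt(0).toUpperCase() + word.slice(1);
}

console.log(capitalizeWord('lorem')); // Lorem, every single time
```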
In this program you can see we are utilizing higher-order functions: capitalize accepts another function, wordsCapitalizer, as an argument. This is possible because functions are treated as first-class citizens in the language, meaning you can assign them to variables and pass them to other functions as arguments just like any ordinary data type.
One thing you might have overlooked in the code is that we no longer have a for loop! We are actually utilizing the Array.prototype.map() function. map calls the specified function wordsCapitalizer with each individual element of the array we get from the split call. We can think of it this way: map loops internally and calls the wordsCapitalizer function for each array element.
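As a mental model (this is a sketch, not the actual Array.prototype.map implementation), map can be pictured like this:

```javascript
// A sketch of what map does internally: loop over the array, call the
// supplied function once per element, and collect the results.
function myMap(array, fn) {
  const result = [];
  for (let i = 0; i < array.length; i++) {
    result.push(fn(array[i], i, array));
  }
  return result;
}

function capitalizeWord(word) {
  return word.charAt(0).toUpperCase() + word.slice(1);
}

console.log(myMap('lorem ipsum dolar'.split(' '), capitalizeWord).join(' ')); // Lorem Ipsum Dolar
```

The loop still exists, it's just hidden behind the abstraction, which is exactly why the functional version reads as "capitalize the sentence" instead of "iterate over the words".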
Let’s talk about a little bit of Java now.
You might think that Java is only an object oriented language, but folks, let me introduce you to the functional aspects of the Java 8 language. Java 8 introduced functional constructs, thus facilitating functional programming approach along with object oriented programming approach.
package com.manvendrask;
import java.util.Arrays;
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.stream.Collectors;
public class Main {
public static void main(String[] args) {
String text = "lorem ipsum dolar";
Function<String, String> capitalizeWord = word ->
Arrays.stream(new String[]{word.substring(0, 1).toUpperCase(), word.substring(1)})
.collect(Collectors.joining(""));
BiFunction<Function<String, String>, String, String> capitalize = (capitalizeWordFn, textToSplit) ->
Arrays.stream(textToSplit.split(" "))
.map(capitalizeWordFn)
.collect(Collectors.joining(" "));
System.out.println(capitalize.apply(capitalizeWord, text));
}
}
In this Java code, you can see how we assign blocks of code (methods) to the variables capitalizeWord and capitalize. These are called Lambda Expressions, and they represent the underlying methods of the target FunctionalInterface. The important part to notice is that we are passing these methods around much like JavaScript's functions. Moreover, the map method is available on a separate API layer called Streams in Java 8.
In a nutshell, Java 8 now provides functional capabilities much like other functional languages, though they are not yet fully fledged. The real caveat is that it's still verbose, because it's type safe: as you can see, we are using the BiFunction<Function<String, String>, String, String> functional interface to define the capitalize method, which internally is the implementation of the BiFunction.apply method.
The conclusion is that the functional approach is a much more flexible approach to programming, but only certain languages let you do it efficiently. We have many choices here, but for me, it's either JavaScript or Java/Scala/Groovy. I would choose JavaScript because it's less cluttered and less verbose, and Scala over Java (on the JVM) because Java is more verbose.
Follow the functional approach and be a better programmer. Long live JavaScript!!!
Thanks for reading till here. See you next time.