JavaScript Modules Unpacked: The Journey of Code Sharing

Introduction: From a Messy Workshop to Clean Engineering
Imagine a giant, messy workshop. Hundreds of tools—hammers, screwdrivers, saws—are scattered all over the floor. You need a specific screwdriver, but there are three of the same kind lying around. Which one is yours? Which one works best? As you reach for a hammer, you accidentally knock over a saw, making an even bigger mess.
This was the world of JavaScript before module systems. Every piece of code you wrote was thrown onto the “global scope”—that messy workshop floor. Variable names clashed, dependencies were a nightmare to track, and chaos was the norm.
To solve this, the community developed a brilliant solution: toolboxes, or as we call them, modules. Each JavaScript file becomes its own clean, organized toolbox. The tools (variables, functions) inside belong only to that box. If you need a tool from another box, you explicitly ask for it (import), and if you want to share a tool, you label it for others to use (export).
This article follows a Q&A format based on a real conversation, starting with the basics and diving deep into the “why” behind modern JavaScript modules. Let’s organize that workshop.
Q: I see files with .mjs and .cjs extensions. I know they’re related to modules, but what are they really for?
A: That’s the perfect place to start! These extensions are all about telling JavaScript which “rulebook” or module system a file follows. There are two main systems you’ll encounter.
1. The Traditional Master: CommonJS (CJS)
This is the original, battle-tested system that Node.js was built on. It’s reliable and straightforward.
- Sharing a Tool:
module.exports = myTool;
- Requesting a Tool:
const myTool = require('./tool-box');
Think of require() as sending an assistant to get a tool. Your work stops, and you wait until the assistant returns with the toolbox. This is called synchronous execution. On a server where files are on a fast local disk, this wait is negligible and works perfectly fine.
The .cjs extension is a way to explicitly tell Node.js: “Hey, this file is written in the traditional CommonJS style. Expect to see require and module.exports.”
2. The Modern Engineer: ES Modules (ESM)
This is the official, standardized module system built into the JavaScript language itself. It’s designed to work everywhere, including browsers, and is much more flexible and efficient.
- Sharing a Tool:
export default myTool;
- Requesting a Tool:
import myTool from './tool-box.js';
Think of import as creating a list of all the tools you’ll need at the start of your project. The system then gathers all these tools for you in the most efficient way possible, often in parallel. You don’t wait for them one by one. This is asynchronous by design, which is a massive advantage, especially in browsers.
The .mjs extension explicitly says: “Attention! This is a modern ES Module. You’ll find import and export keywords here.”
Q: What’s the purpose of "type": "module" in package.json? Is it just so I can use import?
A: You’re touching on a crucial point. While .cjs and .mjs are explicit, what about the most common file extension, .js? Node.js needs to know how to interpret it. Is it a traditional CJS file or a modern ESM file?
This is where your project’s “ID card,” package.json, comes in.
When you add "type": "module" to your package.json, you’re giving a clear instruction to Node.js: “For this entire project, treat every file ending in .js as a modern ES Module by default. Assume I’ll be using import and export.”
If you don’t specify this, Node.js defaults to the traditional CommonJS system for backward compatibility.
So, it’s not just “to use import.” It fundamentally changes how Node.js sees your project, setting the modern, standard module system as the default.
When should you use it? If you’re starting any new project today, you should absolutely start with "type": "module". It’s the present and future standard of the JavaScript ecosystem.
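A minimal package.json showing the switch. Only the "type" field matters here; the name, version, and main values are placeholders:

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "type": "module",
  "main": "src/index.js"
}
```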
Q: I’m a bit confused about require being synchronous and import being asynchronous. Can you explain how that works?
A: This is the “aha!” moment for many developers. Let’s use a librarian analogy.
The Synchronous Librarian (require)
You go to the library and ask the librarian for “Book A”. The librarian goes to the shelves, finds it, and brings it back. You cannot do anything else while you wait. Once you have Book A, you ask for “Book B”. Again, the librarian goes, and you wait. It’s a one-by-one, blocking process.
// 1. Program HALTS here.
const bookA = require('./book-a')
// 2. Only after book-a.js is fully loaded and run does the program continue.
// 3. Program HALTS again here.
const bookB = require('./book-b')
// 4. And so on...
The Asynchronous System (import)
You go to a modern library and submit a list of all the books you need at a terminal. The system dispatches multiple robotic assistants that fetch all your books in parallel. You are notified only when your entire order is ready.
The JavaScript engine does exactly this with ESM. The process happens in distinct phases:
- Construction: Before running any code, the engine reads all import statements to build a “dependency map.” It knows exactly which module needs which other module.
- Fetching: It then fetches all the required module files from the disk or network, often simultaneously.
- Execution: Once all modules in the map are loaded and ready, the engine executes the code from top to bottom.
This initial, parallel-loading step is what makes ESM so much more performant, especially in browsers where network latency is a major factor.
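Dynamic import() returns a promise, which makes the parallel fetch phase easy to picture in code. The sketch below is self-contained: instead of real files, it imports two tiny modules encoded as data: URLs, a feature Node.js supports, but the Promise.all shape is exactly what “fetch everything at once” looks like:

```javascript
// Two tiny "modules" inlined as data: URLs so no extra files are needed
const moduleA = 'data:text/javascript,export default "A"'
const moduleB = 'data:text/javascript,export default "B"'

// Fetch both modules in parallel and continue only when both are ready,
// mirroring the engine's fetch phase for static imports
async function loadAll() {
  const [a, b] = await Promise.all([import(moduleA), import(moduleB)])
  return a.default + b.default
}

loadAll().then(console.log) // logs "AB"
```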
Q: This parallel loading seems incredibly useful for web apps. Is there a connection to modern protocols like HTTP/2?
A: You’ve just uncovered the masterstroke of modern web development! The synergy between ESM and HTTP/2 is the reason today’s web applications can be so fast and efficient.
The Old World: Bundling for HTTP/1.1
The old HTTP/1.1 protocol was like a single-lane road. You could only handle one request and response at a time. Asking for 100 separate .js files would create a massive traffic jam. To get around this, we invented bundlers (like Webpack). We’d take all our small JavaScript files and bundle them into one giant bundle.js file—like loading all our goods onto a single, massive truck. This worked, but it was inefficient. A user might only need a tiny piece of your site but would have to download the entire truckload of code first.
The New World: ESM + HTTP/2
HTTP/2 introduced multiplexing, turning that single-lane road into a multi-lane superhighway. You can now send dozens of requests over a single connection, all at the same time.
This is a perfect match for ESM!
- The browser gets the initial HTML file.
- It sees a <script type="module" src="main.js">.
- It fetches main.js, and before executing it, it sees import a from './a.js' and import b from './b.js'.
- Using the HTTP/2 superhighway, it immediately fires off parallel requests for a.js and b.js.
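In the page itself, that whole chain starts from a single tag. A minimal sketch (main.js and the files it imports are hypothetical):

```html
<!-- The browser fetches main.js, reads its import statements, and then
     requests a.js and b.js in parallel over the same HTTP/2 connection -->
<script type="module" src="main.js"></script>
```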
The era of the giant “bundle-truck” is fading. We can now ship smaller, more targeted code pieces, and the browser can assemble them with incredible speed. Modern tools like Vite are built entirely on this principle.
Q: What if my ESM project needs an old NPM package written in CommonJS? I’ve noticed import pkg from 'some-cjs-pkg' sometimes fails, but await import('some-cjs-pkg') works. Why?
A: The difference comes down to Static vs. Dynamic, which gets to the core philosophies of the two systems.
- ESM is Static: An import statement is like a clear label on a toolbox. Before any code runs, the JavaScript engine can look at the file and know exactly what tools (exports) it provides. This static nature is what allows for powerful optimizations like Tree Shaking.
- CommonJS is Dynamic: module.exports is a plain object that can be changed at any time while the code is running. It’s like a surprise box. You don’t know what’s inside until you open it (run the code).
When you use a top-level import, the static ESM engine tries to read the “label” on the CommonJS package, but there isn’t one—only a surprise box. This mismatch can cause errors.
However, await import() is a dynamic import. It’s a runtime command that tells the engine: “Forget the static analysis. Right now, at this very moment, go run that CommonJS file, open the surprise box, and give me whatever module.exports contains.”
This works perfectly because you’re treating the CommonJS module on its own dynamic terms. Node.js helps by neatly wrapping the contents of module.exports into the default export of the dynamically imported module.
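You can watch that wrapping happen with any CommonJS module. Node’s built-in node:path module is implemented in CJS, which makes for a self-contained demo:

```javascript
// Dynamically importing a CommonJS module: Node places the entire
// module.exports object on the namespace's `default` property
async function loadCjsModule() {
  const mod = await import('node:path')
  return mod.default // the same object require('node:path') returns
}

loadCjsModule().then((path) => {
  console.log(typeof path.join) // "function"
})
```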
Q: Was code sharing impossible before module.exports existed? Was all code written in a single file?
A: That’s a fair question! This takes us back to the “wild west” era of JavaScript, before standards like module.exports (CommonJS) and import (ESM) existed.
Code sharing was certainly possible; otherwise, massive libraries like jQuery or large-scale websites could never have been written. However, this sharing wasn’t as safe, organized, or standardized as it is today. Projects were typically split into multiple .js files; the crucial difference was in how these files talked to each other.
Method 1: The Global Scope and <script> Tags
The most basic method was that every JavaScript file had access to a single, massive, shared space called the window object in browsers—the messy workshop floor we talked about in the introduction. A function or variable defined in one file would automatically be added to this global space and become accessible to other files.
Contents of helpers.js:
function sayHello(name) {
  console.log('Hello, ' + name)
}
var version = '1.0'
Contents of main.js:
// We are calling the sayHello function from helpers.js
sayHello('World')
console.log('Helper version:', version)
To run these two files, you would order them in your HTML like this:
<!DOCTYPE html>
<html>
  <head>
    <title>Old-school JS</title>
    <script src="helpers.js"></script>
    <script src="main.js"></script>
  </head>
  <body>
    ...
  </body>
</html>
The Huge Problems with This Method:
- Order Dependency: If you loaded main.js before helpers.js in the HTML above, the application would crash because the sayHello function would not have been defined yet when main.js tried to call it. Manually managing this load order became a nightmare as projects grew.
- Name Collisions (Global Scope Pollution): What if main.js also had a variable named version? Or what if a third-party library you used defined a function with the same name as your sayHello function? One would overwrite the other, and which one “won” would depend entirely on the load order, leading to unpredictable and hard-to-detect bugs.
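Here is that collision in miniature. Since every <script> file shared one global scope, we can simulate two files in a single snippet; the second var silently overwrites the first:

```javascript
// From "helpers.js": defines a version
var version = '1.0'

// From a later "analytics.js": redeclares it. No error, no warning.
var version = '2.0'

// Whichever file loaded last wins
console.log(version) // "2.0"
```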
The Search for Solutions: Clever Coding Patterns
The community developed intelligently designed coding patterns to solve these problems. These are considered the ancestors of modern module systems.
1. The Namespace Pattern
Instead of throwing everything into the global space, you would create a single global object for your project and put everything inside it. This greatly reduced the risk of name collisions.
// Create the MY_AWESOME_APP object if it doesn't exist, otherwise use the existing one.
var MY_AWESOME_APP = MY_AWESOME_APP || {}
MY_AWESOME_APP.sayHello = function (name) {
  /*...*/
}
MY_AWESOME_APP.version = '1.0'
2. The Module Pattern and the IIFE
This was the pattern that came closest to what module.exports does. It used a technique called an IIFE (Immediately Invoked Function Expression) to create a “private room” for code. Variables inside the function were invisible from the outside. You would then return an object containing only the functions you wanted to make public.
var CounterModule = (function () {
  // --- THIS AREA IS PRIVATE ---
  var _count = 0 // It was a tradition to prefix private variables with an underscore

  // --- THIS AREA IS PUBLIC ---
  // We are exposing the public functions and variables
  return {
    increment: function () {
      _count++
    },
    getCount: function () {
      return _count
    }
  }
})()
In main.js:
CounterModule.increment()
console.log(CounterModule.getCount()) // 1
console.log(CounterModule._count) // undefined (Cannot be accessed!)
In summary, code sharing existed, but it was fragile, unsafe, and manual. The developer had to know which file depended on which and manually order them correctly in the HTML.
require and import were invented to solve these exact problems. They allowed us to explicitly declare dependencies at the top of our files and automated the loading order for us. This was a revolution for the JavaScript ecosystem.
Q: Are there any other ‘finer details’ I should know about?
A: Now that you have the foundation, here are a few advanced concepts that build directly on these principles:
- Tree Shaking: Because ESM is static, when you bundle your code for production, your bundler can intelligently analyze which exports you actually import and use. Any unused code from a library is “shaken off” like dead leaves from a tree, resulting in a much smaller final application size.
- Code Splitting: The dynamic await import() we just discussed is the key to code splitting. You can intentionally leave parts of your application out of the initial download and load them on demand. For instance, loading the code for an admin panel only when a user clicks the “Admin Login” button. This dramatically improves initial page load times.
- Top-Level await: Another ESM-only superpower. You can now use the await keyword at the top level of a module, outside of an async function. This is incredibly useful for modules that need to perform an async operation on startup, like fetching configuration data or connecting to a database.
Conclusion
Understanding JavaScript modules is about more than just syntax. It’s about grasping the evolution of the language from a chaotic workshop to a highly organized, efficient, and performant engineering discipline. By understanding the “why”—the problems that CJS, ESM, and tools like bundlers were created to solve—you’re no longer just using the tools; you’re on your way to mastering them.