Can I use jQuery with Node.js?

Is it possible to use jQuery selectors/DOM manipulation on the server-side using Node.js?


Solution 1:

Update (27-Jun-18): It looks like there was a major update to jsdom that causes the original answer to no longer work. I found this answer that explains how to use jsdom now. I’ve copied the relevant code below.

var jsdom = require("jsdom");
const { JSDOM } = jsdom;
const { window } = new JSDOM('');
global.document = window.document;

var $ = require('jquery')(window);

Note: The original answer fails to mention that you will also need to install jsdom, using npm install jsdom

Update (late 2013): The official jQuery team finally took over the management of the jquery package on npm:

npm install jquery


require("jsdom").env("", function (err, window) {
    if (err) { console.error(err); return; }
    var $ = require("jquery")(window);
});

Solution 2:

Yes, you can, using a library I created called nodeQuery

var Express = require('express')
    , dnode = require('dnode')
    , nQuery = require('nodeQuery')
    , express = Express.createServer();

var app = function ($) {
    $.on('ready', function () {
        // do some stuff to the dom in real-time
        $('body').append('Hello World');
        $('body').append('<input type="text" />');
        $('input').live('click', function () {
            console.log('input clicked');
            // ...
        });
    });
};

nQuery
    .use(app);

express
    .use(nQuery.middleware)
    .use(Express.static(__dirname + '/public'))
    .listen(3000);


Solution 3:

At the time of writing there is also Cheerio, which is actively maintained:

Fast, flexible, and lean implementation of core jQuery designed
specifically for the server.

Solution 4:

Using jsdom you now can. Just look at the jquery example in their examples directory.

Solution 5:

A simple crawler using Cheerio

This is my formula for making a simple crawler in Node.js. DOM manipulation on the server side is the main reason for wanting this, and it's probably the reason you got here.

First, use request to download the page to be parsed. When the download is complete, hand it to cheerio and begin DOM manipulation just like with jQuery.

Working example:

var request = require('request'),
    cheerio = require('cheerio');

function parse(url) {
    request(url, function (error, response, body) {
        var $ = cheerio.load(body);

        $('.question-summary .question-hyperlink').each(function () {
            console.log($(this).text());
        });
    });
}

parse('http://stackoverflow.com/');

This example will print to the console all the top questions shown on the SO home page. This is why I love Node.js and its community. It couldn't get easier than that 🙂

Install dependencies:

npm install request cheerio

And run (assuming the script above is in file crawler.js):

node crawler.js


Some pages will have non-English content in a certain encoding, and you will need to decode it to UTF-8. For instance, a page in Brazilian Portuguese (or any other language of Latin origin) will likely be encoded in ISO-8859-1 (a.k.a. "latin1"). When decoding is needed, I tell request not to interpret the content in any way, and instead use iconv-lite to do the job.

Working example:

var request = require('request'),
    iconv = require('iconv-lite'),
    cheerio = require('cheerio');

var PAGE_ENCODING = 'utf-8'; // change to match page encoding

function parse(url) {
    request({
        url: url,
        encoding: null  // do not interpret content yet
    }, function (error, response, body) {
        var $ = cheerio.load(iconv.decode(body, PAGE_ENCODING));

        $('.question-summary .question-hyperlink').each(function () {
            console.log($(this).text());
        });
    });
}

parse('http://stackoverflow.com/');


Before running, install dependencies:

npm install request iconv-lite cheerio

And then finally:

node crawler.js

Following links

The next step would be to follow links. Say you want to list all posters from each top question on SO. You first have to list all the top questions (the example above) and then follow each link, parsing each question's page to get the list of involved users.

When you start following links, callback hell can set in. To avoid it, you should use some kind of promises, futures, or similar. I always keep async in my toolbelt. So, here is a full example of a crawler using async:

var url = require('url'),
    request = require('request'),
    async = require('async'),
    cheerio = require('cheerio');

var baseUrl = 'http://stackoverflow.com/';

// Gets a page and returns a callback with a $ object
function getPage(url, parseFn) {
    request({
        url: url
    }, function (error, response, body) {
        parseFn(cheerio.load(body));
    });
}

getPage(baseUrl, function ($) {

    // Get list of questions
    var questions = $('.question-summary .question-hyperlink').map(function () {
        return {
            title: $(this).text(),
            url: url.resolve(baseUrl, $(this).attr('href'))
        };
    }).get().slice(0, 5); // limit to the top 5 questions

    // For each question, fetch its page and collect its users
    async.map(questions, function (question, questionDone) {

        getPage(question.url, function ($$) {

            // Get list of users
            question.users = $$('.post-signature .user-details a').map(function () {
                return $$(this).text();
            }).get();

            questionDone(null, question);
        });

    }, function (err, questionsWithPosters) {

        // This function is called by async when all questions have been parsed

        questionsWithPosters.forEach(function (question) {

            // Prints each question along with its user list
            console.log(question.title);
            question.users.forEach(function (user) {
                console.log('\t%s', user);
            });
        });
    });
});

Before running:

npm install request async cheerio

Run a test:

node crawler.js

Sample output:

Is it possible to pause a Docker image build?
PHP Image Crop Issue
    Houston Molinar
Add two object in rails
Asymmetric encryption discrepancy - Android vs Java
    Cookie Monster
    Wand Maker
Objective-C: Adding 10 seconds to timer in SpriteKit
    Christian K Rider

And that’s the basics you need to know to start making your own crawlers 🙂

Libraries used: request, cheerio, async and iconv-lite.

Solution 6:

In 2016 things are way easier. Install jQuery for Node.js from your console:

npm install jquery

Bind it to the variable $ (for example; it's just what I'm used to) in your Node.js code:

var $ = require("jquery");

do stuff:

$.ajax({
    url: 'gimme_json.php',
    dataType: 'json',
    method: 'GET',
    data: { "now" : true }
});
This also works with gulp, as it is based on Node.js.