How do I convert an integer to binary in JavaScript?

I’d like to see integers, positive or negative, in binary.
Rather like this question, but for JavaScript.

Solutions/Answers:

Solution 1:

This answer handles integers with absolute values between 2**31 and Number.MAX_SAFE_INTEGER (2**53-1). The other solutions here only address signed integers within 32 bits, but this one outputs 64-bit two's complement form using float64ToInt64Binary():

// IIFE to scope internal variables
var float64ToInt64Binary = (function () {
  // create union
  var flt64 = new Float64Array(1)
  var uint16 = new Uint16Array(flt64.buffer)
  // 2**53-1
  var MAX_SAFE = 9007199254740991
  // 2**31
  var MAX_INT32 = 2147483648

  function uint16ToBinary() {
    var bin64 = ''

    // generate padded binary string a word at a time
    for (var word = 0; word < 4; word++) {
      bin64 = uint16[word].toString(2).padStart(16, '0') + bin64
    }

    return bin64
  }

  return function float64ToInt64Binary(number) {
    // NaN would pass through Math.abs(number) > MAX_SAFE
    if (!(Math.abs(number) <= MAX_SAFE)) {
      throw new RangeError('Absolute value must be less than 2**53')
    }

    var sign = number < 0 ? 1 : 0

    // shortcut using other answer for sufficiently small range
    if (Math.abs(number) <= MAX_INT32) {
      return (number >>> 0).toString(2).padStart(64, String(sign))
    }

    // little endian byte ordering
    flt64[0] = number

    // subtract bias from exponent bits
    var exponent = ((uint16[3] & 0x7FF0) >> 4) - 1022

    // encode implicit leading bit of mantissa
    uint16[3] |= 0x10
    // clear exponent and sign bit
    uint16[3] &= 0x1F

    // check sign bit
    if (sign === 1) {
      // apply two's complement
      uint16[0] ^= 0xFFFF
      uint16[1] ^= 0xFFFF
      uint16[2] ^= 0xFFFF
      uint16[3] ^= 0xFFFF
      // propagate carry bit
      for (var word = 0; word < 3 && uint16[word] === 0xFFFF; word++) {
        // apply integer overflow
        uint16[word] = 0
      }

      // complete increment
      uint16[word]++
    }

    // only keep integer part of mantissa
    var bin64 = uint16ToBinary().substr(11, Math.max(exponent, 0))
    // sign-extend binary string
    return bin64.padStart(64, sign)
  }
})()

console.log('8')
console.log(float64ToInt64Binary(8))
console.log('-8')
console.log(float64ToInt64Binary(-8))
console.log('2**33-1')
console.log(float64ToInt64Binary(2**33-1))
console.log('-(2**33-1)')
console.log(float64ToInt64Binary(-(2**33-1)))
console.log('2**53-1')
console.log(float64ToInt64Binary(2**53-1))
console.log('-(2**53-1)')
console.log(float64ToInt64Binary(-(2**53-1)))
console.log('2**52')
console.log(float64ToInt64Binary(2**52))
console.log('-(2**52)')
console.log(float64ToInt64Binary(-(2**52)))
console.log('2**52+1')
console.log(float64ToInt64Binary(2**52+1))
console.log('-(2**52+1)')
console.log(float64ToInt64Binary(-(2**52+1)))

This answer relies heavily on the IEEE-754 double-precision floating-point format, illustrated here:

IEEE-754 Double-precision floating-point format

   seee eeee eeee ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff
   ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
   [    uint16[3]    ] [    uint16[2]    ] [    uint16[1]    ] [    uint16[0]    ]
   [                                   flt64[0]                                  ]

   little endian byte ordering

   s = sign = uint16[3] >> 15
   e = exponent = (uint16[3] & 0x7FF0) >> 4
   f = fraction

The solution creates a union between a 64-bit floating-point number and an unsigned 16-bit integer array in little endian byte ordering. After validating the input range, it writes the input to the buffer as a double-precision float, then uses the union to gain bit-level access to the value and computes the binary string from the unbiased binary exponent and the fraction bits.

The solution is implemented in pure ECMAScript 5 except for the use of String#padStart(), which has an available polyfill here.
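To see the union technique in isolation, here is a small sketch (my own helper name, not part of the answer above) that decodes the sign and unbiased exponent the same way, assuming little-endian byte ordering:

```javascript
// Sketch: decode the sign bit and unbiased exponent of a double via the
// same Float64Array/Uint16Array union used in the answer.
var flt64 = new Float64Array(1)
var uint16 = new Uint16Array(flt64.buffer)

function decodeDouble (number) {
  flt64[0] = number
  return {
    // uint16[3] holds the sign bit and the 11 exponent bits (little endian)
    sign: uint16[3] >> 15,
    // the standard IEEE-754 bias is 1023 (the answer subtracts 1022 instead
    // because it treats the significand as a fraction rather than 1.fff...)
    exponent: ((uint16[3] & 0x7FF0) >> 4) - 1023
  }
}

decodeDouble(8)   // { sign: 0, exponent: 3 }, since 8 = 1.0 * 2**3
decodeDouble(-8)  // { sign: 1, exponent: 3 }
```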

Solution 2:

function dec2bin(dec){
    return (dec >>> 0).toString(2);
}

dec2bin(1);    // 1
dec2bin(-1);   // 11111111111111111111111111111111
dec2bin(256);  // 100000000
dec2bin(-256); // 11111111111111111111111100000000

You can use the Number#toString(2) method, but it has some problems when representing negative numbers. For example, (-1).toString(2) outputs "-1".

To fix this issue, you can use the unsigned right shift bitwise operator (>>>) to coerce your number to an unsigned integer.

If you run (-1 >>> 0).toString(2), you shift the number 0 bits to the right, which doesn’t change its value but causes it to be interpreted as an unsigned integer. The code above then correctly outputs "11111111111111111111111111111111".

This question has further explanation.

-3 >>> 0 (logical right shift) coerces its operands to unsigned integers, which is why you get the 32-bit two’s complement representation of -3.


Note 1: this answer expects a Number as argument, so convert it accordingly.

Note 2: the result is a string without leading zeros, so apply padding as needed.
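For example, a padded 32-bit variant (my own sketch building on the answer's dec2bin, using ES2017's String#padStart) could look like:

```javascript
// dec2bin32 always returns the full 32-bit two's complement string,
// padding positive numbers with leading zeros.
function dec2bin32 (dec) {
  return (dec >>> 0).toString(2).padStart(32, '0')
}

dec2bin32(1)    // "00000000000000000000000000000001"
dec2bin32(-1)   // "11111111111111111111111111111111"
dec2bin32(-256) // "11111111111111111111111100000000"
```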

Solution 3:

Try

num.toString(2);

The 2 is the radix and can be any base between 2 and 36.

source here

UPDATE:

This only works for positive numbers; JavaScript represents negative binary integers in two’s complement notation. I made this little function which should do the trick, though I haven’t tested it thoroughly:

function dec2Bin(dec)
{
    if(dec >= 0) {
        return dec.toString(2);
    }
    else {
        /* Here you could represent the number in two's complement, but this is not
           what JS uses, as it's not clear how many bits are in your number range.
           There are some suggestions:
           https://stackoverflow.com/questions/10936600/javascript-decimal-to-binary-64-bit 
        */
        return (~dec).toString(2);
    }
}

I had some help from here

Solution 4:

The binary in ‘convert to binary’ can refer to three main things: the positional number system, the binary representation in memory, or 32-bit bit strings. (For 64-bit bit strings, see Patrick Roberts’ answer in Solution 1.)

1. Number System

(123456).toString(2) will convert numbers to the base 2 positional numeral system. In this system negative numbers are written with minus signs just like in decimal.
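A quick round-trip sketch:

```javascript
// toString(2) and parseInt(..., 2) are inverses for integers in base-2
// positional notation; negatives simply carry a minus sign.
var bin = (123456).toString(2)   // "11110001001000000"
var neg = (-123456).toString(2)  // "-11110001001000000"
parseInt(bin, 2)                 // 123456
```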

2. Internal Representation

The internal representation of numbers is 64-bit floating point, and some limitations are discussed in this answer. There is no simple built-in way to create a bit-string representation of this in JavaScript or to access specific bits.
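That said, the typed-array technique Solution 1 builds on does expose the raw bits; here is a minimal sketch (my own helper name, not from the answer) using a DataView:

```javascript
// Sketch: read the raw 64 bits of a double via a DataView over an
// ArrayBuffer, returning the IEEE-754 bit pattern as a 64-character string.
function doubleToBits (number) {
  var buffer = new ArrayBuffer(8)
  var view = new DataView(buffer)
  view.setFloat64(0, number)  // DataView defaults to big-endian byte order
  var bits = ''
  for (var i = 0; i < 8; i++) {
    bits += view.getUint8(i).toString(2).padStart(8, '0')
  }
  return bits
}

// 1.0 has sign 0, biased exponent 1023 (01111111111), and a zero fraction:
doubleToBits(1)  // "001111111111" followed by 52 zeros
```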

3. Masks & Bitwise Operators

MDN has a good overview of how bitwise operators work. Importantly:

Bitwise operators treat their operands as a sequence of 32 bits (zeros and ones)

Before the operations are applied, the 64-bit floating-point numbers are converted to 32-bit signed integers; afterwards they are converted back.
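A few quick examples of this implicit ToInt32 conversion (using ES2016's ** for brevity):

```javascript
// Out-of-range values wrap modulo 2**32 and fractions are truncated:
(2 ** 32) | 0;  // 0           (all 32 low bits are zero)
(2 ** 31) | 0;  // -2147483648 (wraps into the sign bit)
1.9 | 0;        // 1           (fraction truncated, not rounded)
```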

Here is the MDN example code for converting numbers into 32-bit strings.

function createBinaryString (nMask) {
  // nMask must be between -2147483648 and 2147483647
  for (var nFlag = 0, nShifted = nMask, sMask = ""; nFlag < 32;
       nFlag++, sMask += String(nShifted >>> 31), nShifted <<= 1);
  return sMask;
}

createBinaryString(0) //-> "00000000000000000000000000000000"
createBinaryString(123) //-> "00000000000000000000000001111011"
createBinaryString(-1) //-> "11111111111111111111111111111111"
createBinaryString(-1123456) //-> "11111111111011101101101110000000"
createBinaryString(0x7fffffff) //-> "01111111111111111111111111111111"
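If ES2017’s String#padStart is available, an equivalent one-liner (my own sketch, not the MDN code) is much shorter:

```javascript
// Same result as the loop above: coerce to unsigned 32-bit with >>> 0,
// then pad the binary string to a full 32 characters.
function createBinaryString (nMask) {
  return (nMask >>> 0).toString(2).padStart(32, '0')
}

createBinaryString(0)   // "00000000000000000000000000000000"
createBinaryString(123) // "00000000000000000000000001111011"
createBinaryString(-1)  // "11111111111111111111111111111111"
```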

Solution 5:

A simple way is just…

Number(42).toString(2);

// "101010"

Solution 6:

Note: the basic (x>>>0).toString(2); has a slight issue when x is positive. The example code at the end of my answer corrects that problem while still using the >>> method.

(-3>>>0).toString(2);

prints -3 in two’s complement:

11111111111111111111111111111101

A working example

C:\>type n1.js
console.log(   (-3 >>> 0).toString(2)    );
C:\>
C:\>node n1.js
11111111111111111111111111111101

C:\>

Pasting this in the URL bar is another quick check:

javascript:alert((-3>>>0).toString(2))

Note: the result is slightly flawed in that it always starts with a 1, which is fine for negative numbers. For positive numbers you should prepend a 0 so that the result is true two’s complement: (8>>>0).toString(2) produces 1000, which isn’t really 8 in two’s complement, but prepending a 0 to make it 01000 is. In proper two’s complement, any bit string starting with 0 is >= 0 and any bit string starting with 1 is negative.

e.g. this gets around that problem:

// or x = -5, or whatever number you want to view in binary
var x = 5;
var prepend = x > 0 ? "0" : "";
alert(prepend + (x >>> 0).toString(2));

The other working solutions are the one from Annan (though Annan’s explanations and definitions contain errors, his code produces the right output) and the one from Patrick.

Anybody who doesn’t understand why positive numbers start with 0 and negative numbers with 1 in two’s complement can check this SO Q&A on the topic: What is “2’s Complement”?