Why is extending native objects a bad practice?

Every JS opinion leader says that extending the native objects is a bad practice. But why? Do we get a performance hit? Do they fear that somebody does it “the wrong way” and adds enumerable properties to Object.prototype, practically breaking every for…in loop over any object?
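To make that fear concrete, here is a minimal sketch (mine, not from the question) of how a single enumerable addition to Object.prototype leaks into every for…in loop:

// A plain assignment creates an *enumerable* property on the prototype.
Object.prototype.describe = function () {
  return JSON.stringify(this);
};

var point = { x: 1, y: 2 };
for (var key in point) {
  console.log(key); // logs "x", "y" -- and "describe"
}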
Take TJ Holowaychuk’s should.js for example. He adds a simple getter to Object.prototype and everything works fine:
Object.defineProperty(Object.prototype, 'should', {
  set: function(){},
  get: function(){
    return new Assertion(Object(this).valueOf());
  },
  configurable: true
});
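For context, that getter is what makes assertion chains like the following read naturally (usage shape taken from should.js; the Assertion class comes from the library):

var user = { name: 'tj' };
user.name.should.equal('tj');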

This really makes sense. For instance, one could extend Array:

Object.defineProperty(Array.prototype, 'remove', {
  set: function(){},
  get: function(){
    return removeArrayElement.bind(this);
  }
});

var arr = [0, 1, 2, 3, 4];
arr.remove(3);
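(The snippet assumes a removeArrayElement helper that the post never shows; a minimal hypothetical version, written to match the .bind(this) call above, could be:)

function removeArrayElement(element) {
  var index = this.indexOf(element);
  if (index !== -1) this.splice(index, 1);
  return this;
}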

Are there any arguments against extending native types?

Solutions/Answers:

Solution 1:

When you extend an object, you change its behaviour.

Changing the behaviour of an object that will only be used by your own code is fine. But when you change the behaviour of something that is also used by other code there is a risk you will break that other code.

When it comes to adding methods to the Object and Array classes in JavaScript, the risk of breaking something is very high, due to how JavaScript works. Years of experience have taught me that this kind of thing causes all kinds of terrible bugs in JavaScript.

If you need custom behaviour, it is far better to define your own class (perhaps a subclass) instead of changing a native one. That way you will not break anything at all.
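As a sketch of that alternative (the class name is made up for illustration), an ES2015 subclass keeps the new behaviour off the shared prototype:

class RemovableArray extends Array {
  remove(element) {
    const index = this.indexOf(element);
    if (index !== -1) this.splice(index, 1);
    return this;
  }
}

const arr = RemovableArray.from([0, 1, 2, 3, 4]);
arr.remove(3); // only RemovableArray instances get this method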

The ability to change how a class works without subclassing it is an important feature of any good programming language, but it is one that must be used rarely and with caution.

Solution 2:

There is no measurable drawback such as a performance hit; at least, nobody has mentioned one. So this is a question of personal preference and experience.

The main pro argument: it looks better and is more intuitive: syntax sugar. It is a type/instance-specific function, so it should be bound specifically to that type/instance.

The main contra argument: Code can interfere. If lib A adds a function, it could overwrite lib B’s function. This can break code very easily.

Both have a point. When you rely on two libraries that directly change your types, you will most likely end up with broken code as the expected functionality is probably not the same. I totally agree on that. Macro-libraries must not manipulate the native types. Otherwise you as a developer won’t ever know what is really going on behind the scenes.
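A contrived sketch of that interference (both library snippets are invented): each one claims the same prototype slot, and whichever loads last silently wins:

// lib A: removes the element at a given *index*
Array.prototype.remove = function(index) {
  this.splice(index, 1);
  return this;
};

// lib B, loaded later: removes the first matching *value*
Array.prototype.remove = function(value) {
  var i = this.indexOf(value);
  if (i !== -1) this.splice(i, 1);
  return this;
};

[10, 20, 30].remove(1); // code written against lib A expects [10, 30],
                        // but with lib B loaded last it gets [10, 20, 30]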

And that is the reason I dislike libs like jQuery, underscore, etc. Don’t get me wrong; they are absolutely well-programmed and they work like a charm, but they are big. You use only 10% of them, and understand about 1%.

That’s why I prefer an atomistic approach, where you only require what you really need. This way, you always know what happens. The micro-libraries only do what you want them to do, so they won’t interfere. When the end user knows which features are added, extending native types can be considered safe.

TL;DR When in doubt, don’t extend native types. Only extend a native type if you’re 100% sure that the end user will know about and want that behavior. In no case should you manipulate a native type’s existing functions, as that would break the existing interface.

If you decide to extend the type, use Object.defineProperty(obj, prop, desc); if you can’t, use the type’s prototype.
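The practical difference: Object.defineProperty creates a non-enumerable property by default, so it at least stays out of for…in loops. A quick sketch (method names invented):

// Plain assignment: enumerable, shows up in every for...in loop.
Array.prototype.first = function(){ return this[0]; };

// defineProperty: non-enumerable by default.
Object.defineProperty(Array.prototype, 'last', {
  value: function(){ return this[this.length - 1]; },
  configurable: true
});

for (var key in []) console.log(key); // logs "first", never "last"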


I originally came up with this question because I wanted Errors to be sendable via JSON. So, I needed a way to stringify them. error.stringify() felt way better than errorlib.stringify(error); as the second construct suggests, I’m operating on errorlib and not on error itself.
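For background on why stringifying needs special handling at all: an Error’s own properties (message, stack) are non-enumerable, so JSON.stringify(new Error('boom')) yields '{}'. A minimal standalone helper (a sketch, not the author’s actual code) could look like:

function stringifyError(error) {
  return JSON.stringify({
    name: error.name,
    message: error.message,
    stack: error.stack
  });
}

stringifyError(new RangeError('boom'));
// => '{"name":"RangeError","message":"boom","stack":"..."}'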

Solution 3:

In my opinion, it’s a bad practice. The major reason is integration. Quoting should.js docs:

OMG IT EXTENDS OBJECT???!?!@ Yes, yes it does, with a single getter should, and no it won’t break your code

Well, how can the author know? What if my mocking framework does the same? What if my promises lib does the same?

If you’re doing it in your own project, that’s fine. But for a library, it’s bad design. Underscore.js is an example of doing things the right way:

var arr = [];
_(arr).flatten()
// or: _.flatten(arr)
// NOT: arr.flatten()

Solution 4:

If you look at it on a case by case basis, perhaps some implementations are acceptable.

String.prototype.slice = function slice( me ){
  return me;
}; // Definite risk.

Overwriting existing methods creates more issues than it solves, which is why this practice is commonly discouraged in many programming languages. How are devs to know the function has been changed?

String.prototype.capitalize = function capitalize(){
  return this.charAt(0).toUpperCase() + this.slice(1);
}; // A little less risk.

In this case we are not overwriting any known core JS method, but we are extending String. One argument in this post asked how a new dev is supposed to know whether this method is part of core JS, and where to find its docs. What would happen if the core JS String object were to get a method named capitalize?
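That risk is not hypothetical: the proposed Array.prototype.flatten had to be renamed to flat because a MooTools extension already occupied the name. Even a feature-detecting guard does not fully protect you, as this sketch (my illustration) shows:

// Shim written before the platform ships its own capitalize.
if (!String.prototype.capitalize) {
  String.prototype.capitalize = function(){
    return this.charAt(0).toUpperCase() + this.slice(1);
  };
}

// If a native capitalize later ships with different semantics (say,
// locale-aware, or capitalizing every word), old browsers keep the shim
// while new ones use the native method: the same call, two behaviors.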

What if instead of adding names that may collide with other libraries, you used a company/app specific modifier that all the devs could understand?

String.prototype.weCapitalize = function weCapitalize(){
  return this.charAt(0).toUpperCase() + this.slice(1);
}; // marginal risk.

var myString = "hello to you.";
myString.weCapitalize();
// => Hello to you.

If you continued to extend other objects this way, devs would encounter the prefix (in this case we) in the wild, which would tell them it was a company/app-specific extension.

This does not eliminate name collisions, but it does reduce their likelihood. If you determine that extending core JS objects is right for you and/or your team, perhaps this approach is too.

Solution 5:

Extending prototypes of built-ins is indeed a bad idea. However, ES2015 introduced a new technique that can be utilized to obtain the desired behavior:

Utilizing WeakMaps to associate types with built-in prototypes

The following implementation extends the Number and Array prototypes without touching them at all:

// new types

const AddMonoid = {
  empty: () => 0,
  concat: (x, y) => x + y,
};

const ArrayMonoid = {
  empty: () => [],
  concat: (acc, x) => acc.concat(x),
};

const ArrayFold = {
  reduce: xs => xs.reduce(
    type(xs[0]).monoid.concat,
    type(xs[0]).monoid.empty()
  )
};


// the WeakMap that associates types with prototypes

const types = new WeakMap();

types.set(Number.prototype, {
  monoid: AddMonoid
});

types.set(Array.prototype, {
  monoid: ArrayMonoid,
  fold: ArrayFold
});


// auxiliary helpers to apply functions of the extended prototypes

const genericType = map => o => map.get(o.constructor.prototype);
const type = genericType(types);


// mock data

const xs = [1,2,3,4,5];
const ys = [[1],[2],[3],[4],[5]];


// and run

console.log("reducing an Array of Numbers:", ArrayFold.reduce(xs) );
console.log("reducing an Array of Arrays:", ArrayFold.reduce(ys) );
console.log("built-ins are unmodified:", Array.prototype.empty);

As you can see, even primitive prototypes can be extended by this technique. It uses a map structure and object identity to associate types with built-in prototypes.

My example enables a reduce function that expects only an Array as its single argument, because it can extract from the Array’s elements the information on how to create an empty accumulator and how to concatenate elements with that accumulator.

Please note that I could have used the normal Map type, since weak references don’t make sense when the keys are merely built-in prototypes, which are never garbage collected. However, a WeakMap isn’t iterable and can’t be inspected unless you have the right key. This is a desired feature, since I want to avoid any form of type reflection.
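To illustrate that last point (a minimal sketch, not part of the original code):

const m = new Map([[Array.prototype, "array"]]);
console.log([...m.keys()]); // a normal Map can be enumerated

const wm = new WeakMap([[Array.prototype, "array"]]);
// spreading wm or calling wm.keys() fails: a WeakMap cannot be enumerated
console.log(wm.get(Array.prototype)); // accessible only with the exact key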

Solution 6:

One more reason why you should not extend native Objects:

We use Magento, which uses prototype.js and extends a lot of things on native objects.
This works fine until you decide to bring in new features, and that’s where the big trouble starts.

We introduced Web Components on one of our pages, and webcomponents-lite.js decided to replace the whole (native) Event object in IE (why?).
This of course breaks prototype.js, which in turn breaks Magento.
(Until you find the problem, you may invest a lot of hours tracing it back.)

If you like trouble, keep doing it!