Frequently Asked Questions

- General
  - How is Jasmine versioned?
  - How can I use scripts from external URLs in jasmine-browser-runner?
  - Can Jasmine test code that's in ES modules?
  - Why does Jasmine allow multiple expectation failures in a spec? How can I disable that?
  - How can I get Jasmine to fail specs that don't have any assertions?
  - How can I use Jasmine on my TypeScript project?
  - Other software that works with Jasmine
- Writing Specs
- Async Testing
  - Which async style should I use, and why?
  - Why are some asynchronous spec failures reported as suite errors or as failures of a different spec?
  - How can I stop Jasmine from running my specs in parallel?
  - Why can't I write a spec that both takes a callback and returns a promise (or is an async function)? What should I do instead?
  - But I really have to test code that signals success and failure through different channels. I can't (or don't want to) change it. What can I do?
  - Why can't my asynchronous function call `done` more than once? What should I do instead?
  - Why can't I pass an async function to `describe`? How can I generate specs from asynchronously loaded data?
  - How do I test async behavior that I don't have a promise or callback for, like a UI component that renders something after fetching data asynchronously?
  - I need to assert something about the arguments passed to an async callback that happens before the code under test is finished. What's the best way to do that?
  - Why doesn't Jasmine always display a stack trace when a spec fails due to a rejected promise?
  - I'm getting an unhandled promise rejection error but I think it's a false positive.
- Spies
  - How can I mock AJAX/fetch/XMLHTTPRequest calls?
  - Why can't I spy on localStorage methods in some browsers? What can I do instead?
  - How can I spy on a property of a module? I'm getting an error like "aProperty does not have access type get", "is not declared writable or has no setter", or "is not declared configurable".
  - How can I configure a spy to return a rejected promise without triggering an unhandled promise rejection error?
- Contributing to Jasmine
General
How is Jasmine versioned?
Jasmine attempts as best as possible to follow semantic versioning. This means we reserve major versions (1.0, 2.0, etc.) for breaking changes or other significant work. Most Jasmine releases end up being minor releases (2.3, 2.4, etc.). Major releases are infrequent.
Many people use Jasmine via either the `jasmine` package, which runs specs in Node, or the `jasmine-browser-runner` package. For historical reasons, those packages have different versioning strategies:

- `jasmine` major and minor versions match `jasmine-core`, so that when you update your `jasmine` dependency you’ll also get the latest `jasmine-core`. Patch versions are handled separately: a patch release of `jasmine-core` does not require a corresponding patch release of `jasmine`, or vice versa.
- `jasmine-browser-runner` version numbers are not related to `jasmine-core` version numbers. It declares `jasmine-core` as a peer dependency. `yarn` and `npm` will automatically install a compatible version of `jasmine-core` for you, or you can specify a version by adding it as a direct dependency of your package.
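For example, a package that uses jasmine-browser-runner and pins a specific `jasmine-core` might declare both in `package.json` (a sketch; the version numbers are illustrative):

```json
{
  "devDependencies": {
    "jasmine-browser-runner": "^2.0.0",
    "jasmine-core": "^5.1.0"
  }
}
```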
Jasmine generally avoids dropping support for browser or Node versions except in major releases. The exceptions to this are Node versions that are past end of life, browsers that we can no longer install locally and/or test against in our CI builds, browsers that no longer receive security updates, and browsers that only run on operating systems that no longer receive security updates. We’ll make reasonable efforts to keep Jasmine working in those environments but won’t necessarily do a major release if they break.
How can I use scripts from external URLs in jasmine-browser-runner?
You can add the script’s URL to `srcFiles` in your `jasmine-browser.json` or `jasmine-browser.js` file:

```javascript
// ...
srcFiles: [
  "https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.0/jquery.min.js",
  "**/*.js"
],
// ...
```
Can Jasmine test code that's in ES modules?
Yes. The exact process depends on how you’re using Jasmine:
- If you’re using the standalone distribution or any other in-browser setup where you control the HTML tags, use `<script type="module">`.
- If you’re using the jasmine NPM package, your scripts will be loaded using dynamic import. This means that files will be treated as ES modules if they’re in a package that has `"type": "module"` in its `package.json` or if their names end in `.mjs`.
- jasmine-browser-runner will load scripts as ES modules if their names end in `.mjs`. You can override this with the `esmFilenameExtension` configuration property, as in the sketch below.
- If you’re using a third-party tool such as Karma to run Jasmine, check that tool’s documentation.
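For example, to have jasmine-browser-runner load plain `.js` files as ES modules, you might add this to `jasmine-browser.json` (a minimal sketch; other configuration properties omitted):

```json
{
  "esmFilenameExtension": ".js"
}
```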
Why does Jasmine allow multiple expectation failures in a spec? How can I disable that?
Sometimes it takes more than one expectation to assert a particular result. In those situations it can be helpful to see all of the expectations fail before trying to make any of them pass. This is particularly useful when a single code change will make multiple expectations pass.
If you want each spec to stop at the first expectation failure, you can set the `stopSpecOnExpectationFailure` option to `true`:

- If you’re using the standalone distribution, click “Options” and then “stop spec on expectation failure”, or edit `boot.js` to set the option permanently.
- If you’re using the `jasmine` NPM package, set `stopSpecOnExpectationFailure` to `true` in your config file (usually `spec/support/jasmine.json`).
- If you’re using a third party tool that wraps jasmine-core, check that tool’s documentation for how to pass configuration options.
- If you’re using jasmine-core directly, add it to the object that you pass to Env#configure.
Note that any afterEach or afterAll functions associated with the spec will still run.
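For example, with the `jasmine` NPM package, the relevant part of `spec/support/jasmine.json` might look like this (a sketch; the surrounding properties are illustrative):

```json
{
  "spec_dir": "spec",
  "spec_files": ["**/*[sS]pec.js"],
  "stopSpecOnExpectationFailure": true
}
```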
How can I get Jasmine to fail specs that don't have any assertions?
By default, Jasmine doesn’t require specs to contain any expectations. You can make Jasmine fail specs that have no expectations by setting the `failSpecWithNoExpectations` option to `true`:
- If you’re using the standalone distribution, add it to the `config` object in `lib/jasmine-<VERSION>/boot.js`.
- If you’re using the `jasmine` NPM package, add it to your config file (usually `spec/support/jasmine.json`).
- If you’re using a third party tool that wraps jasmine-core, check that tool’s documentation for how to pass configuration options.
- If you’re using jasmine-core directly, add it to the object that you pass to Env#configure.
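When using jasmine-core directly, the last option might look like this (a minimal sketch):

```javascript
// Wherever you bootstrap jasmine-core:
jasmine.getEnv().configure({
  failSpecWithNoExpectations: true
});
```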
We don’t recommend relying on the `failSpecWithNoExpectations` option.
All it ensures is that each spec has at least one expectation, not
that the spec will actually fail for the right reason if the behavior it’s
trying to verify doesn’t work. The only way to be sure that a spec is actually
correct is to try it both ways and see that it passes when the code under test
is working and fails in the intended way when the code under test is broken.
Very few people can consistently write good specs without doing that, just like
very few people can consistently deliver working non-test code without trying
it out.
How can I use Jasmine on my TypeScript project?
There are two common ways to use Jasmine and TypeScript together.
The first is to compile TypeScript files to JavaScript on the fly as they’re imported:
- If you’re using Vite-specific syntax such as extensionless ES module imports, use tsx.
- If you’re using standard TypeScript, you can use `@babel/register`. See Testing a React app with Jasmine NPM for an example.
The second approach is to compile your TypeScript spec files to JavaScript files on disk and configure Jasmine to run the resulting JavaScript files:
- If you’re using Vite-specific syntax such as extensionless ES module imports, use esbuild.
- If you’re using standard TypeScript, use tsc.
The compile-on-the-fly approach is usually easy to set up and provides the fastest possible edit-compile-run-specs cycle. However, it doesn’t do any type checking by default. You can add type checking by creating a separate TypeScript config file for your specs with `noEmit` set to `true`, and running `tsc` on it either before or after running your specs. Compiling to files on disk gives a slower edit-compile-run-specs cycle, but it’s a more familiar workflow for people who are used to compiled languages. It’s also the only option if you want to write specs in TypeScript and run them in a browser.
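A type-checking-only config for specs might look like this (a sketch; the file name `tsconfig.spec.json` and the include paths are assumptions about your project layout):

```json
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "noEmit": true
  },
  "include": ["src/**/*.ts", "spec/**/*.ts"]
}
```

You’d then run `npx tsc -p tsconfig.spec.json` before or after running your specs.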
Other software that works with Jasmine
Can I use Jasmine 5.x with Karma?
Probably. karma-jasmine 5.1 (the latest version as of this writing, and likely the final version) appears to be compatible with jasmine-core 5.x. You should be able to use an NPM override in `package.json` to override karma-jasmine’s dependency specification:
```json
{
  // ...
  "overrides": {
    "karma-jasmine": {
      "jasmine-core": "^5.0.0"
    }
  }
}
```
Why aren't newer Jasmine features available in Karma?
You might be using an older jasmine-core version than you think you are. karma-jasmine declares a dependency on jasmine-core 4.x. As a result, Karma will use jasmine-core 4.x even if you’ve also installed a newer version. You may be able to fix that by adding an NPM override as described in the previous question.
I ran into a problem involving zone.js. Can you help?
Please report any zone.js related issues to the Angular project.
Zone.js monkey patches Jasmine extensively, replacing a number of key internals with its own implementations. Most of the time this works fine. But any problems that it causes are by definition bugs in zone.js rather than in Jasmine.
How can I use Jasmine matchers with testing-library's waitFor function?
Use `throwUnless` instead of `expect`:

```javascript
await waitFor(function() {
  throwUnless(myDialogElement).toHaveClass('open');
});
```
Why doesn't expect() work right in webdriver.io?
`@wdio/jasmine-framework` replaces Jasmine’s `expect` with a different one that is incompatible with Jasmine’s. See the Webdriver.IO docs for information about its `expect` API.

In addition to replacing `expect`, Webdriver.IO monkey patches some Jasmine internals. Bugs that only occur when Webdriver.IO is present should be reported to Webdriver.IO, not to Jasmine.
Writing Specs
How can I run code before a containing `describe`'s `beforeEach`? Does Jasmine have an equivalent of rspec's `let`?
The short answer is that you can’t, and you should refactor your test setup so that inner `describe`s don’t need to undo or override setup that was done by an outer `describe`.
This question usually comes up when people try to write suites that look like this:
```javascript
// DOES NOT WORK
describe('When the user is logged in', function() {
  let user = MyFixtures.anyUser;

  beforeEach(function() {
    // Do something, potentially complicated, that causes the system to run
    // with `user` logged in.
  });

  it('does some things that apply to any user', function() {
    // ...
  });

  describe('as an admin', function() {
    beforeEach(function() {
      user = MyFixtures.adminUser;
    });

    it('shows the admin controls', function() {
      // ...
    });
  });

  describe('as a non-admin', function() {
    beforeEach(function() {
      user = MyFixtures.nonAdminUser;
    });

    it('does not show the admin controls', function() {
      // ...
    });
  });
});
```
That doesn’t work, in part because the inner `beforeEach` functions run after the user is already logged in. Some test frameworks provide a way to re-order the test setup so that parts of the setup in an inner `describe` can run before parts of the setup in an outer `describe`. RSpec’s `let` blocks are an example of this. Jasmine doesn’t provide such functionality. We’ve learned through experience that having the setup flow control bounce back and forth between inner and outer `describe`s leads to suites that are hard to understand and hard to modify. Instead, try refactoring the setup code so that each part happens after all of the setup that it depends on. Usually this means taking the contents of an outer `beforeEach` and inlining it into the inner specs or `beforeEach`es. If this leads to excessive code duplication, that can be handled with regular functions, just like in non-test code:
```javascript
describe('When the user is logged in', function() {
  it('does some things that apply to any user', function() {
    logIn(MyFixtures.anyUser);
    // ...
  });

  describe('as an admin', function() {
    beforeEach(function() {
      logIn(MyFixtures.adminUser);
    });

    it('shows the admin controls', function() {
      // ...
    });
  });

  describe('as a non-admin', function() {
    beforeEach(function() {
      logIn(MyFixtures.nonAdminUser);
    });

    it('does not show the admin controls', function() {
      // ...
    });
  });

  function logIn(user) {
    // Do something, potentially complicated, that causes the system to run
    // with `user` logged in.
  }
});
```
Why is Jasmine showing an exception with no stack trace?
JavaScript allows you to throw any value or reject a promise with any value.
However, only `Error` objects have stack traces. So if a non-`Error` is thrown or a promise is rejected with something other than an `Error`, Jasmine can’t show the stack trace because there is no stack trace to show.
This behavior is controlled by the JavaScript runtime and isn’t something that Jasmine can change.
```javascript
// NOT RECOMMENDED
describe('Failures that will not have stack traces', function() {
  it('throws a non-Error', function() {
    throw 'nope';
  });

  it('rejects with a non-Error', function() {
    return Promise.reject('nope');
  });
});

// RECOMMENDED
describe('Failures that will have stack traces', function() {
  it('throws an Error', function() {
    throw new Error('nope');
  });

  it('rejects with an Error', function() {
    return Promise.reject(new Error('nope'));
  });
});
```
Does Jasmine support parameterized testing?
Not directly. But test suites are just JavaScript, so you can do it anyway.
```javascript
function add(a, b) {
  return a + b;
}

describe('add', function() {
  const cases = [
    {first: 3, second: 3, sum: 6},
    {first: 10, second: 4, sum: 14},
    {first: 7, second: 1, sum: 8}
  ];

  for (const {first, second, sum} of cases) {
    it(`returns ${sum} for ${first} and ${second}`, function() {
      expect(add(first, second)).toEqual(sum);
    });
  }
});
```
How can I add more information to matcher failure messages?
When a spec has multiple, similar expectations, it can be hard to tell which failure corresponds to which expectation:
```javascript
it('has multiple expectations', function() {
  expect(munge()).toEqual(1);
  expect(spindle()).toEqual(2);
  expect(frobnicate()).toEqual(3);
});
```
```
Failures:
1) has multiple expectations
  Message:
    Expected 0 to equal 1.
  Stack:
    Error: Expected 0 to equal 1.
        at <Jasmine>
        at UserContext.<anonymous> (withContextSpec.js:2:19)
        at <Jasmine>
  Message:
    Expected 0 to equal 2.
  Stack:
    Error: Expected 0 to equal 2.
        at <Jasmine>
        at UserContext.<anonymous> (withContextSpec.js:3:21)
        at <Jasmine>
```
There are three ways to make the output of a spec like that more clear:
- Put each expectation in its own spec. (This is sometimes a good idea, but not always.)
- Write a custom matcher. (This is sometimes worth the effort, but not always.)
- Use withContext to add extra text to the matcher failure messages.
Here’s the same spec as above, but modified to use `withContext`:
```javascript
it('has multiple expectations with some context', function() {
  expect(munge()).withContext('munge').toEqual(1);
  expect(spindle()).withContext('spindle').toEqual(2);
  expect(frobnicate()).withContext('frobnicate').toEqual(3);
});
```

```
Failures:
1) has multiple expectations with some context
  Message:
    munge: Expected 0 to equal 1.
  Stack:
    Error: munge: Expected 0 to equal 1.
        at <Jasmine>
        at UserContext.<anonymous> (withContextSpec.js:8:40)
        at <Jasmine>
  Message:
    spindle: Expected 0 to equal 2.
  Stack:
    Error: spindle: Expected 0 to equal 2.
        at <Jasmine>
        at UserContext.<anonymous> (withContextSpec.js:9:44)
        at <Jasmine>
```
Async Testing
Which async style should I use, and why?
The `async`/`await` style should be your first choice. Most developers have a much easier time writing error-free specs in that style. Promise-returning specs are a bit harder to write, but they can be useful in more complex scenarios.
Callback style specs are very error-prone and should be avoided if possible.
There are two major drawbacks to callback style specs. The first is that the flow of execution is harder to visualize. That makes it easy to write a spec that calls its `done` callback before it’s actually finished. The second is that it’s difficult to handle errors correctly. Consider this spec:
```javascript
it('sometimes fails to finish', function(done) {
  doSomethingAsync(function(result) {
    expect(result.things.length).toEqual(2);
    done();
  });
});
```
If `result.things` is undefined, the access to `result.things.length` will throw an error, preventing `done` from being called. The spec will eventually time out, but only after a significant delay. The error will be reported, but because of the way browsers and Node expose information about unhandled exceptions, it won’t include a stack trace or any other information that indicates the source of the error.
Fixing that requires wrapping each callback in a try-catch:
```javascript
it('finishes and reports errors reliably', function(done) {
  doSomethingAsync(function(result) {
    try {
      expect(result.things.length).toEqual(2);
    } catch (err) {
      // Calling done with an Error argument fails the spec.
      done(err);
      return;
    }

    done();
  });
});
```
That’s tedious, error-prone, and likely to be forgotten. It’s often better to convert the callback to a promise:
```javascript
it('finishes and reports errors reliably', async function() {
  const result = await new Promise(function(resolve, reject) {
    // If an exception is thrown from here, it will be caught by the Promise
    // constructor and turned into a rejection, which will fail the spec.
    doSomethingAsync(resolve);
  });

  expect(result.things.length).toEqual(2);
});
```
Callback-style specs are still useful in some situations. Some callback-based
interfaces are difficult to promisify or don’t benefit much from being
promisified. But in most cases it’s easier to write a reliable spec using
`async`/`await` or at least promises.
Why are some asynchronous spec failures reported as suite errors or as failures of a different spec?
When an exception is thrown from async code or an unhandled promise rejection occurs, the spec that caused it is no longer on the call stack. So Jasmine has no reliable way to determine where the error came from. The best Jasmine can do is associate the error with the spec or suite that was running when it happened. This is usually the right answer, since correctly-written specs don’t trigger errors (or do anything else) after they signal completion.
It becomes a problem when a spec signals completion before it’s actually done.
Consider these two examples, which both test a `doSomethingAsync` function that calls a callback when it’s finished:
```javascript
// WARNING: does not work correctly
it('tries to be both sync and async', function() {
  // 1. doSomethingAsync() is called
  doSomethingAsync(function() {
    // 3. The callback is called
    doSomethingThatMightThrow();
  });
  // 2. Spec returns, which tells Jasmine that it's done
});

// WARNING: does not work correctly
it('is async but signals completion too early', function(done) {
  // 1. doSomethingAsync() is called
  doSomethingAsync(function() {
    // 3. The callback is called
    doSomethingThatThrows();
  });
  // 2. Spec calls done(), which tells Jasmine that it's done
  done();
});
```
In both cases the spec signals that it’s done but continues executing, later causing an error. By the time the error occurs, Jasmine has already reported that the spec passed and started executing the next spec. Jasmine might even have exited before the error occurs. If that happens, it won’t be reported at all.
The fix is to make sure that the spec doesn’t signal completion until it’s really done. This can be done with callbacks:
```javascript
it('signals completion at the right time', function(done) {
  // 1. doSomethingAsync() is called
  doSomethingAsync(function() {
    // 2. The callback is called
    doSomethingThatThrows();
    // 3. If we get this far without an error being thrown, the spec calls
    //    done(), which tells Jasmine that it's done
    done();
  });
});
```
But it’s easier to write reliable async specs using `async`/`await` or promises, so we recommend that in most cases:
```javascript
it('signals completion at the right time', async function() {
  await doSomethingAsync();
  doSomethingThatThrows();
});
```
How can I stop Jasmine from running my specs in parallel?
Jasmine only runs specs in parallel if you use at least version 5.0 of the `jasmine` NPM package and pass the `--parallel` command line argument. In all other configurations it runs one spec (or before/after function) at a time. Even the parallel configuration runs specs and before/after functions within each suite sequentially.
However, Jasmine depends on those user-provided functions to indicate when they’re done. If a function signals completion before it’s actually done, then the execution of the next spec will interleave with it. To fix this, make sure each asynchronous function calls its callback or resolves or rejects the returned promise only when it’s completely finished. See the async tutorial for more information.
Why can't I write a spec that both takes a callback and returns a promise (or is an async function)? What should I do instead?
Jasmine needs to know when each asynchronous spec is done so that it can move on to the next one at the right time. If a spec takes a `done` callback, that means “I’m done when I call the callback”. If a spec returns a promise, either explicitly or by using the `async` keyword, it means “I’m done when the returned promise is resolved or rejected”. Those two things can’t both be true, and Jasmine has no way of resolving the ambiguity. Future readers are also likely to have trouble understanding the intent of the spec.
Usually people who ask this question are dealing with one of two situations.
Either they’re using `async` just to be able to `await` and not to signal completion to Jasmine, or they’re trying to test code that mixes multiple async styles.
The first scenario: when a spec is `async` just so it can `await`
```javascript
// WARNING: does not work correctly
it('does something', async function(done) {
  const something = await doSomethingAsync();

  doSomethingElseAsync(something, function(result) {
    expect(result).toBe(/*...*/);
    done();
  });
});
```
In this case the intent is for the spec to be done when the callback is called, and the promise that’s implicitly returned from the spec is meaningless. The best fix is to change the callback-based function so that it returns a promise and then `await` the promise:
```javascript
it('does something', async function(/* Note: no done param */) {
  const something = await doSomethingAsync();
  const result = await new Promise(function(resolve, reject) {
    doSomethingElseAsync(something, function(r) {
      resolve(r);
    });
  });

  expect(result).toBe(/*...*/);
});
```
If you want to stick with callbacks, you can wrap the `async` function in an IIFE:
```javascript
it('does something', function(done) {
  (async function() {
    const something = await doSomethingAsync();

    doSomethingElseAsync(something, function(result) {
      expect(result).toBe(/*...*/);
      done();
    });
  })();
});
```
or replace `await` with `then`:
```javascript
it('does something', function(done) {
  doSomethingAsync().then(function(something) {
    doSomethingElseAsync(something, function(result) {
      expect(result).toBe(170);
      done();
    });
  });
});
```
The second scenario: Code that signals completion in multiple ways
```javascript
// in DataLoader.js
class DataLoader {
  constructor(fetch) {
    // ...
  }

  subscribe(subscriber) {
    // ...
  }

  async load() {
    // ...
  }
}

// in DataLoaderSpec.js
// WARNING: does not work correctly
it('provides the fetched data to observers', async function(done) {
  const fetch = function() {
    return Promise.resolve(/*...*/);
  };
  const subscriber = function(result) {
    expect(result).toEqual(/*...*/);
    done();
  };

  const subject = new DataLoader(fetch);
  subject.subscribe(subscriber);
  await subject.load(/*...*/);
});
```
Just like in the first scenario, the problem with this spec is that it signals completion in two different ways: by settling (resolving or rejecting) the implicitly returned promise, and by calling the `done` callback. This mirrors a potential design problem with the `DataLoader` class. Usually people write specs like this because the code under test can’t be relied upon to signal completion in a consistent way. The order in which subscribers are called and the returned promise is settled might be unpredictable. Or worse, `DataLoader` might only use the returned promise to signal failure, leaving it pending in the success case. It’s difficult to write a reliable spec for code that has that problem.

The fix is to change the code under test to always signal completion in a consistent way. In this case that means making sure that the last thing `DataLoader` does, in both success and failure cases, is resolve or reject the returned promise. Then it can be reliably tested like this:
```javascript
it('provides the fetched data to observers', async function(/* Note: no done param */) {
  const fetch = function() {
    return Promise.resolve(/*...*/);
  };
  const subscriber = jasmine.createSpy('subscriber');
  const subject = new DataLoader(fetch);
  subject.subscribe(subscriber);

  // Await the returned promise. This will fail the spec if the promise
  // is rejected or isn't resolved before the spec timeout.
  await subject.load(/*...*/);

  // The subscriber should have been called by now. If not,
  // that's a bug in DataLoader, and we want the following to fail.
  expect(subscriber).toHaveBeenCalledWith(/*...*/);
});
```
But I really have to test code that signals success and failure through different channels. I can't (or don't want to) change it. What can I do?
You can convert both sides to promises, if they aren’t already promises. Then use `Promise.race` to wait for whichever one is resolved or rejected first:
```javascript
// in DataLoader.js
class DataLoader {
  constructor(fetch) {
    // ...
  }

  subscribe(subscriber) {
    // ...
  }

  onError(errorSubscriber) {
    // ...
  }

  load() {
    // ...
  }
}

// in DataLoaderSpec.js
it('provides the fetched data to observers', async function() {
  const fetch = function() {
    return Promise.resolve(/*...*/);
  };
  let resolveSubscriberPromise, rejectErrorPromise;
  const subscriberPromise = new Promise(function(resolve) {
    resolveSubscriberPromise = resolve;
  });
  const errorPromise = new Promise(function(resolve, reject) {
    rejectErrorPromise = reject;
  });

  const subject = new DataLoader(fetch);
  subject.subscribe(resolveSubscriberPromise);
  subject.onError(rejectErrorPromise);

  const result = await Promise.race([subscriberPromise, errorPromise]);
  expect(result).toEqual(/*...*/);
});
```
Note that this assumes that the code under test either signals success or signals failure, but never does both. It’s generally not possible to write a reliable spec for async code that might signal both success and failure when it fails.
Why can't my asynchronous function call `done` more than once? What should I do instead?
In Jasmine 2.x and 3.x, a callback-based async function could call its `done` callback any number of times, and only the first call did anything. This was done to prevent Jasmine from corrupting its internal state when `done` was called more than once.
We’ve learned since then that it’s important for asynchronous functions to only
signal completion when they’re actually done. When a spec keeps running after it
tells Jasmine that it’s done, it interleaves with the execution of other specs.
This can cause problems like intermittent test failures, failures not being
reported, or failures being reported on the wrong spec.
Problems like these have been a common source of user confusion and bug reports
over the years. Jasmine 4 tries to make them easier to diagnose by reporting
an error any time an asynchronous function calls `done` more than once.
If you have a spec that calls `done` multiple times, the best thing to do is to rewrite it to only call `done` once. See this related FAQ for some common scenarios where specs signal completion multiple times and suggested fixes.
If you really can’t eliminate the extra `done` calls, you can implement the Jasmine 2-3 behavior by wrapping `done` in a function that ignores all but the first call, as follows. But be aware that specs that do this are still buggy and still likely to cause the problems outlined above.
```javascript
function allowUnsafeMultipleDone(fn) {
  return function(done) {
    let doneCalled = false;

    fn(function(err) {
      if (!doneCalled) {
        done(err);
        doneCalled = true;
      }
    });
  };
}

it('calls done twice', allowUnsafeMultipleDone(function(done) {
  setTimeout(done);
  setTimeout(function() {
    // This code may interleave with subsequent specs or even run after Jasmine
    // has finished executing.
    done();
  }, 50);
}));
```
Why can't I pass an async function to `describe`? How can I generate specs from asynchronously loaded data?
Synchronous functions can’t call asynchronous functions, and `describe` has to be synchronous because it’s used in synchronous contexts like scripts loaded via script tags. Making it async would break all existing code that uses Jasmine and render Jasmine unusable in the environments where it’s most popular.
However, if you use ES modules, you can fetch data asynchronously before calling the top-level `describe`. Instead of this:
```javascript
// WARNING: does not work
describe('Something', async function() {
  const scenarios = await fetchScenarios();

  for (const scenario of scenarios) {
    it(scenario.name, function() {
      // ...
    });
  }
});
```
Do this:
```javascript
const scenarios = await fetchScenarios();

describe('Something', function() {
  for (const scenario of scenarios) {
    it(scenario.name, function() {
      // ...
    });
  }
});
```
To use top-level `await`, your spec files must be ES modules. If you are running specs in a browser, you’ll need to use jasmine-browser-runner 2.0.0 or later and add `"enableTopLevelAwait": true` to the configuration file.
How do I test async behavior that I don't have a promise or callback for, like a UI component that renders something after fetching data asynchronously?
There are two basic ways to approach this. The first is to cause the async behavior to complete immediately (or as close to immediately as possible) and then `await` in the spec. Here’s an example of that approach using the enzyme and jasmine-enzyme libraries to test a React component:
```jsx
describe('When data is fetched', () => {
  it('renders the data list with the result', async () => {
    const payload = [/*...*/];
    const apiClient = {
      getData: () => Promise.resolve(payload)
    };

    // Render the component under test
    const subject = mount(<DataLoader apiClient={apiClient} />);

    // Wait until after anything that's already queued
    await Promise.resolve();
    subject.update();

    const dataList = subject.find(DataList);
    expect(dataList).toExist();
    expect(dataList).toHaveProp('data', payload);
  });
});
```
Note that the promise that the spec awaits is unrelated to the one passed to
the code under test. People often use the same promise in both places, but that
doesn’t matter as long as the promise passed to the code under test is already
resolved. The important thing is that the `await` call in the spec happens after the one in the code under test.
This approach is simple, efficient, and fails quickly when things go wrong. But it can be tricky to get the scheduling right when the code under test does more than one `await` or `.then()`. Changes to the async operations in the code under test can easily break the spec, requiring the addition of extra `await`s.
The other approach is to poll until the desired behavior has happened:
```jsx
describe('When data is fetched', () => {
  it('renders the data list with the result', async () => {
    const payload = [/*...*/];
    const apiClient = {
      getData: () => Promise.resolve(payload)
    };

    // Render the component under test
    const subject = mount(<DataLoader apiClient={apiClient} />);

    // Wait until the DataList is rendered
    const dataList = await new Promise(resolve => {
      function poll() {
        subject.update();
        const target = subject.find(DataList);

        if (target.exists()) {
          resolve(target);
        } else {
          setTimeout(poll, 50);
        }
      }

      poll();
    });

    expect(dataList).toHaveProp('data', payload);
  });
});
```
This is a bit more complex at first and can be slightly less efficient. It will
also time out (after 5 seconds by default) rather than failing immediately if
the expected component is not rendered. But it’s more resilient in the face of
change. It will still pass if more `await`s or `.then()` calls are added to the code under test.
You might find DOM Testing Library or React Testing Library helpful when writing specs in the second style. The `findBy*` and `findAllBy*` queries in both those libraries implement the polling behavior shown above.
I need to assert something about the arguments passed to an async callback that happens before the code under test is finished. What's the best way to do that?
Consider a `DataFetcher` class that fetches data, calls any registered callbacks, does some cleanup, and then finally resolves a returned promise. The best way to write a spec that verifies the arguments to the callback is to save the arguments off in the callback and then assert that they have the right values just before signalling completion:
it("calls the onData callback with the expected args", async function() {
const subject = new DataFetcher();
let receivedData;
subject.onData(function(data) {
receivedData = data;
});
await subject.fetch();
expect(receivedData).toEqual(expectedData);
});
You can also get better failure messages by using a spy:
it("calls the onData callback with the expected args", async function() {
const subject = new DataFetcher();
const callback = jasmine.createSpy('onData callback');
subject.onData(callback);
await subject.fetch();
expect(callback).toHaveBeenCalledWith(expectedData);
});
It’s tempting to write something like this:
```javascript
// WARNING: Does not work
it("calls the onData callback with the expected args", async function() {
  const subject = new DataFetcher();

  subject.onData(function(data) {
    expect(data).toEqual(expectedData);
  });

  await subject.fetch();
});
```
But that will incorrectly pass if the `onData` callback is never called, because the expectation never runs. Here’s another common but incorrect approach:
```javascript
// WARNING: Does not work
it("calls the onData callback with the expected args", function(done) {
  const subject = new DataFetcher();

  subject.onData(function(data) {
    expect(data).toEqual(expectedData);
    done();
  });

  subject.fetch();
});
```
In that version, the spec signals completion before the code under test actually finishes running. That can cause the spec’s execution to interleave with other specs, which can lead to misrouted errors and other problems.
Why doesn't Jasmine always display a stack trace when a spec fails due to a rejected promise?
This is similar to Why is Jasmine showing an exception with no stack trace?.
If the promise was rejected with an `Error` object as the reason, e.g. `Promise.reject(new Error("out of cheese"))`, then Jasmine will display the stack trace associated with the error. If the promise was rejected with no reason or with a non-`Error` reason, then there is no stack trace for Jasmine to display.
I'm getting an unhandled promise rejection error but I think it's a false positive.
It’s important to understand that the JavaScript runtime decides which promise rejections are considered unhandled, not Jasmine. Jasmine just responds to the unhandled rejection event emitted by the JavaScript runtime.
Simply creating a rejected promise is often enough to trigger an unhandled promise rejection event if you allow control to return to the JavaScript runtime without first attaching a rejection handler. That’s true even if you don’t do anything with the promise. Jasmine turns unhandled rejections into failures because they almost always mean that something unexpectedly went wrong, and because there’s no way to distinguish “real” unhandled rejections from the ones that would eventually be handled in the future.
Consider this spec:
```javascript
it('causes an unhandled rejection', async function() {
  const rejected = Promise.reject(new Error('nope'));
  await somethingAsync();

  try {
    await rejected;
  } catch (e) {
    // Do something with the error
  }
});
```
The rejection will eventually be handled via the `try`/`catch`. But the JS runtime detects the unhandled rejection before that part of the spec runs. This happens because the `await somethingAsync()` call returns control to the JS runtime. Different JS runtimes detect unhandled rejections differently, but the common behavior is that a rejection is not considered unhandled if a catch handler is attached to it before control is returned to the runtime. In most cases this can be achieved by re-ordering the code a bit:
```javascript
it('does not cause an unhandled rejection', async function() {
  const rejected = Promise.reject(new Error('nope'));
  let rejection;

  // The catch handler is attached before control returns to the runtime.
  try {
    await rejected;
  } catch (e) {
    rejection = e;
  }

  await somethingAsync();
  // Do something with `rejection`
});
```
As a last resort, you can suppress the unhandled rejection by attaching a no-op catch handler:
```javascript
it('does not cause an unhandled rejection', async function() {
  const rejected = Promise.reject(new Error('nope'));
  rejected.catch(function() { /* do nothing */ });
  await somethingAsync();

  let rejection;

  try {
    await rejected;
  } catch (e) {
    rejection = e;
  }

  // Do something with `rejection`
});
```
See also How can I configure a spy to return a rejected promise without triggering an unhandled promise rejection error? for how to avoid unhandled rejections when configuring spies.
As mentioned above, Jasmine doesn’t determine which rejections count as unhandled. Please don’t open issues asking us to change that.
Spies
How can I mock AJAX/fetch/XMLHTTPRequest calls?
Modern HTTP client APIs such as axios or fetch are easy to mock by hand using Jasmine spies. Simply inject the HTTP client into the code under test:
```javascript
async function loadThing(thingId, thingStore, fetch) {
  const url = `http://example.com/api/things/${thingId}`;
  const response = await fetch(url);
  thingStore[thingId] = await response.json();
}

// somewhere else
await loadThing(thingId, thingStore, fetch);
```
Then, in the spec, inject a spy:
```javascript
describe('loadThing', function() {
  it('fetches the correct URL', function() {
    const fetch = jasmine.createSpy('fetch')
      .and.returnValue(new Promise(function() {}));

    loadThing(17, {}, fetch);

    expect(fetch).toHaveBeenCalledWith('http://example.com/api/things/17');
  });

  it('stores the thing', async function() {
    const payload = {
      id: 17,
      name: 'the thing you requested'
    };
    const response = {
      json: function() {
        return payload;
      }
    };
    const thingStore = {};
    const fetch = jasmine.createSpy('fetch')
      .and.returnValue(Promise.resolve(response));

    await loadThing(17, thingStore, fetch);

    expect(thingStore[17]).toEqual(payload);
  });
});
```
If you’re using the older `XMLHttpRequest`, jasmine-ajax is a good choice. It takes care of the sometimes intricate details of mocking `XMLHttpRequest` and provides a nice API for verifying requests and stubbing responses.
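A typical jasmine-ajax spec looks something like this (a sketch based on the jasmine-ajax README; the request URL and response body are illustrative):

```javascript
describe('with jasmine-ajax', function() {
  beforeEach(function() {
    // Replace the global XMLHttpRequest with jasmine-ajax's fake.
    jasmine.Ajax.install();
  });

  afterEach(function() {
    jasmine.Ajax.uninstall();
  });

  it('verifies the request and stubs the response', function() {
    let responseText;
    const xhr = new XMLHttpRequest();
    xhr.onload = function() {
      responseText = this.responseText;
    };
    xhr.open('GET', '/api/things/17');
    xhr.send();

    // Inspect the captured request and respond to it synchronously.
    const request = jasmine.Ajax.requests.mostRecent();
    expect(request.url).toBe('/api/things/17');
    request.respondWith({
      status: 200,
      responseText: '{"id": 17}'
    });

    expect(responseText).toBe('{"id": 17}');
  });
});
```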
Why can't I spy on localStorage methods in some browsers? What can I do instead?
This will pass in some browsers but fail in Firefox and Safari 17:
```javascript
it('sets foo to bar on localStorage', function() {
  spyOn(localStorage, 'setItem');
  localStorage.setItem('foo', 'bar');
  expect(localStorage.setItem).toHaveBeenCalledWith('foo', 'bar');
});
```
As a security measure, Firefox and Safari 17 don’t allow properties of `localStorage` to be overwritten. Assigning to them, which is what `spyOn` does under the hood, is a no-op. This is a limitation imposed by the browser and there is no way for Jasmine to get around it.
One alternative is to check the state of `localStorage` rather than verifying what calls were made to it:
```javascript
it('sets foo to bar on localStorage', function() {
  localStorage.setItem('foo', 'bar');
  expect(localStorage.getItem('foo')).toEqual('bar');
});
```
Another option is to create a wrapper around `localStorage` and mock the wrapper.
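For example (a sketch; the `storageWrapper` object and the `codeUnderTest` function are illustrative names, not part of Jasmine):

```javascript
// storageWrapper.js: a thin layer that the rest of the app uses instead of
// touching localStorage directly.
export const storageWrapper = {
  setItem: (key, value) => localStorage.setItem(key, value),
  getItem: key => localStorage.getItem(key)
};

// In the spec, the wrapper's own properties are writable, so spying on them
// works in every browser:
it('sets foo to bar via the wrapper', function() {
  spyOn(storageWrapper, 'setItem');
  codeUnderTest();
  expect(storageWrapper.setItem).toHaveBeenCalledWith('foo', 'bar');
});
```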
How can I spy on a property of a module? I'm getting an error like "aProperty does not have access type get", "is not declared writable or has no setter", or "is not declared configurable".
This error means that something (probably a transpiler, but possibly the JavaScript runtime) has marked the exported properties of the module as read-only. The ES module spec requires that exported module properties be read-only, and some transpilers conform to that requirement even when emitting CommonJS modules. If a property is marked read-only, Jasmine can’t replace it with a spy.
Regardless of the environment you’re in, you can avoid the problem by using dependency injection for things you’ll want to mock and injecting a spy or a mock object from the spec. This approach usually results in maintainability improvements in the specs and the code under test. Needing to mock modules is often a sign of tightly coupled code, and it can be wise to fix the coupling rather than work around it with testing tools.
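For example, instead of importing a dependency and spying on the module’s export, the code under test can receive the dependency as a parameter (a sketch; the names here are illustrative):

```javascript
// Instead of importing sendEmail and calling it directly, accept it as an
// argument so that specs can inject a test double:
export function notifyUser(user, sendEmail) {
  sendEmail(user.address, 'Hello!');
}

// In the spec, inject a spy instead of the real implementation:
it('emails the user', function() {
  const sendEmail = jasmine.createSpy('sendEmail');
  notifyUser({address: 'user@example.com'}, sendEmail);
  expect(sendEmail).toHaveBeenCalledWith('user@example.com', 'Hello!');
});
```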
Depending on the environment you’re in, it may be possible to enable module mocking. See the module mocking guide for more information.
How can I configure a spy to return a rejected promise without triggering an unhandled promise rejection error?
It’s important to understand that the JavaScript runtime decides which promise rejections are considered unhandled, not Jasmine. Jasmine just responds to the unhandled rejection event emitted by the JavaScript runtime.
Simply creating a rejected promise is enough to trigger an unhandled rejection event in Node and most browsers if you allow control to return to the JavaScript runtime without attaching a rejection handler. That’s true even if you don’t do anything with the promise. Jasmine turns unhandled rejections into failures because they almost always mean that something unexpectedly went wrong. (See also: I’m getting an unhandled promise rejection error but I think it’s a false positive.)
Consider this spec:
```javascript
it('might cause an unhandled promise rejection', async function() {
  const foo = jasmine.createSpy('foo')
    .and.returnValue(Promise.reject(new Error('nope')));

  await expectAsync(doSomething(foo)).toBeRejected();
});
```
The spec creates a rejected promise. If everything works correctly, it’ll be handled, ultimately by the async matcher. But if `doSomething` fails to call `foo` or fails to pass the rejection along, the browser or Node will trigger an unhandled promise rejection event. Jasmine will treat that as a failure of the suite or spec that’s running at the time of the event.
One fix is to create the rejected promise only when the spy is actually called:
```javascript
it('does not cause an unhandled promise rejection', async function() {
  const foo = jasmine.createSpy('foo')
    .and.callFake(() => Promise.reject(new Error('nope')));

  await expectAsync(doSomething(foo)).toBeRejected();
});
```
You can make this a bit clearer by using the rejectWith spy strategy:
```javascript
it('does not cause an unhandled promise rejection', async function() {
  const foo = jasmine.createSpy('foo')
    .and.rejectWith(new Error('nope'));

  await expectAsync(doSomething(foo)).toBeRejected();
});
```
As mentioned above, Jasmine doesn’t determine which rejections count as unhandled. Please don’t open issues asking us to change that.
Contributing
I want to help out with Jasmine. Where should I start?
Thanks for your help! The Jasmine team only has limited time to work on Jasmine so we appreciate all the help we get from the community.
Github Issues
When github issues are reported that seem like things Jasmine could support, we will label the issue with “help needed”. This label means that we believe there is enough information included in the conversation for someone to implement on their own. (We’re not always correct. If you have further questions, please ask).
New Ideas
Do you have an idea that’s not already captured in a GitHub issue? Feel free to propose it. We recommend (but don’t require) that you open an issue to discuss your idea before submitting a pull request. We don’t say yes to every proposal, so it’s best to ask before you put in a lot of work.
What does Jasmine use to test itself?
Jasmine uses Jasmine to test Jasmine.
Jasmine’s test suite loads two copies of Jasmine. The first is loaded from the built files in `lib/`. The second, called `jasmineUnderTest`, is loaded directly from the source files in `src/`. The first Jasmine is used to run the specs, and the specs call functions on `jasmineUnderTest`.
This has several advantages:
- Developers get feedback on the design of Jasmine by using it to develop Jasmine.
- Developers can choose whether to test against the last committed version of Jasmine (by doing nothing) or against the current code (by doing a build first).
- It’s not possible to get stuck in a state where Jasmine’s tests don’t run because of a newly introduced bug in Jasmine. Developers can avoid that situation by not building until the specs are green, and get out of it by simply running `git checkout lib`.
- Because no build step is required, it can take less than two seconds to go from saving a file to seeing the results of a test run.
If you’re curious about how this is set up, see requireCore.js and defineJasmineUnderTest.js.
Why does Jasmine have a funny hand-rolled module system? Why not use Babel and Webpack?
The short answer is that Jasmine predates both Babel and Webpack, and converting to those tools would be a lot of work for a fairly small payoff that largely went away when Jasmine dropped support for non-ES2017 environments like Internet Explorer. Although a lot of Jasmine is still written in ES5, newer language features can now be used.
For most of its life, Jasmine needed to run on browsers that didn’t support newer JavaScript features. That meant that the compiled code couldn’t use newer syntax and library features such as arrow functions, `async`/`await`, `Promise`, `Symbol`, `Map`, and `Set`. As a result, it was written in ES5 syntax without any use of non-portable library features except in certain narrow contexts like async matchers.
So why not adopt Babel and Webpack? Partly because Jasmine fits in an odd space that breaks some of the assumptions made by those tools: It’s both an application and a library, and even when it’s acting as an application it can’t safely modify the JavaScript runtime environment. If Jasmine added polyfills for missing library features, that could cause specs for code that depends on those features to incorrectly pass on browsers that don’t have them. We’ve yet to figure out how to configure Babel and Webpack (or any other bundler) in a way that guarantees that no polyfills will be introduced. And even if we did that, the payoff would have been relatively small. Writing ES5 syntax instead of ES6 was the easy part of supporting a wide range of browsers. The hard parts, mainly dealing with missing library features and other incompatibilities, would still have needed to be solved by hand.
Jasmine’s existing build tools have the virtues of simplicity, speed, and needing extremely low maintenance. We’re not opposed to switching to something newer if the change is a significant improvement. But so far, being conservative in this area has allowed us to skip quite a bit of front end build tooling churn and use that time to work on things that benefit users.
How do I work on a feature that depends on something that's missing from some supported environments?
We try to make all features of Jasmine available on all supported browsers and
Node versions, but sometimes that doesn’t make sense. For instance, support for
promise-returning specs was added in 2.7.0 even though Jasmine continued to run
in environments that lacked promises until 4.0.0. To write a spec for something
that won’t work in all environments, check whether the required
language/runtime features are present and mark the spec pending if they’re not.
See `spec/helpers/checkForUrl.js` and the uses of the `requireUrls` function that it defines for an example of how to do this.
See the is* methods in src/core/base.js for examples of how to safely check whether an object is an instance of a type that might not exist.
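The general pattern looks something like this (a hedged sketch; `hasUrlSupport` and the spec body are illustrative, not Jasmine's actual helpers):

```javascript
// In a helper: detect the feature once, in a way that can't throw on
// environments that lack it.
function hasUrlSupport() {
  return typeof URL === 'function';
}

// In a spec: mark the spec pending rather than failing when the feature
// is missing.
it('parses URLs', function() {
  if (!hasUrlSupport()) {
    pending('Environment does not support the URL API');
  }

  // ... exercise the URL-dependent behavior ...
});
```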