
Fun with Meteor Methods


Exploring the concurrency of method calls

This post builds on “Using Promises and async/await in Meteor” and “Using Promises on the Client in Meteor”. It compares “traditional” methods with async methods and gives some insight into why you should (or should not) use this.unblock().

Introduction

You just can't avoid asynchronous code in JavaScript. Not even in Meteor with sync-style coding on the server (where Meteor hides the asynchronous nature of much of JavaScript behind a facade of Fibers and Futures); it is, after all, sync-style, not sync.

With async and await now able to replace much of that facade and clarify where asynchronous components are used, it’s a good time to step back and look at possibly unexpected pitfalls of sync-style coding around asynchronous functions.

Meteor Methods: a Recap

Meteor methods provide a predictable way to run server code on behalf of the client. As requests arrive from each client, they are queued for execution in the order they arrive. As each method runs to completion, it returns its data to the client.

We can see this at work with the following code:

The logClient method takes the parameter value (p) and simulates the effect of calling something like a REST endpoint by adding latency to the run time. I used Meteor._sleepForMs (a Fiber-based method) to add this latency.
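
A minimal sketch of such a method is below. The exact latency formula and the shape of the returned object aren't shown in this post, so the 1000 - p * 100 calculation and the { sequence, expected, actual } fields are assumptions chosen to match the timings reported later.

// server/methods.js: a minimal sketch, not the original code
import { Meteor } from 'meteor/meteor';

Meteor.methods({
  logClient(p) {
    console.log(`Starting sequence ${p}`);
    const expected = 1000 - p * 100;   // 900ms for p = 1, down to 500ms for p = 5 (assumed)
    const t0 = Date.now();
    Meteor._sleepForMs(expected);      // Fiber-based sleep to simulate REST-call latency
    console.log(`Completed sequence ${p}`);
    return { sequence: p, expected, actual: Date.now() - t0 };
  },
});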

On the client, we’re firing off five method calls. When each completes, we push the result onto a reactive array. We also measure the time taken to complete the execution of the calls. Note that although the calls are made sequentially, we do not wait for each to complete before firing off the next. This is Asynchronous JavaScript 101 — the code keeps running while we wait for the results.
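
Here's a sketch of that client code, again with assumed details: the results ReactiveVar stands in for the reactive array mentioned above, and the console.log lines stand in for whatever the original UI rendered.

// client/main.js: a sketch of firing five independent method calls
import { Meteor } from 'meteor/meteor';
import { ReactiveVar } from 'meteor/reactive-var';

const results = new ReactiveVar([]);   // reactive array the template can render

const start = Date.now();
for (let p = 1; p <= 5; p++) {
  Meteor.call('logClient', p, (err, res) => {
    if (err) return console.error(err);
    console.log(`Sequence: ${res.sequence}, Expected time: ${res.expected}ms, Actual time: ${res.actual}ms`);
    results.set([...results.get(), res]);   // push the result onto the reactive array
  });
}
// This line runs immediately; none of the calls has completed yet
console.log(`Execution Time on Client = ${Date.now() - start}ms`);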

On the server, we see this:

I20170208-14:14:06.974(0)? Starting sequence 1
I20170208-14:14:07.875(0)? Completed sequence 1
I20170208-14:14:07.876(0)? Starting sequence 2
I20170208-14:14:08.677(0)? Completed sequence 2
I20170208-14:14:08.678(0)? Starting sequence 3
I20170208-14:14:09.381(0)? Completed sequence 3
I20170208-14:14:09.381(0)? Starting sequence 4
I20170208-14:14:09.983(0)? Completed sequence 4
I20170208-14:14:09.984(0)? Starting sequence 5
I20170208-14:14:10.485(0)? Completed sequence 5

We see each call running to completion in sequence, with gaps of approximately 900, 800, 700, 600 and 500ms: the method is queuing client requests and executing them in order. The end-to-end time on the server is ~3.5 seconds. In the browser, the execution time is 1–2ms, because we're not waiting for any call to complete before the next is queued:

Sequence: 1, Expected time: 900ms, Actual time: 902ms
Sequence: 2, Expected time: 800ms, Actual time: 801ms
Sequence: 3, Expected time: 700ms, Actual time: 704ms
Sequence: 4, Expected time: 600ms, Actual time: 603ms
Sequence: 5, Expected time: 500ms, Actual time: 502ms
Execution Time on Client = 1ms

In fact, the browser reports Execution Time on Client = 1ms long before the individual sequence report lines start to appear.

This is a useful technique for quickly kicking off actions for which you don’t need an immediate response. It may be that you are making some MongoDB calls and you have a pub/sub which will eventually ensure your client view is consistent.

However, if you need to use the result of a call in the next call, then it becomes necessary to wait until the preceding call has completed before starting the next. This example takes the returned response object and increments the value of p that was passed in (available in response.sequence) to get the next value to use:
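
Sketched with plain callbacks (error handling omitted for brevity), the dependent version nests each call inside the previous call's callback and uses response.sequence to derive the next parameter:

const start = Date.now();

Meteor.call('logClient', 1, (err1, r1) => {
  console.log(`Sequence: ${r1.sequence}, Expected time: ${r1.expected}ms, Actual time: ${r1.actual}ms`);
  Meteor.call('logClient', r1.sequence + 1, (err2, r2) => {
    console.log(`Sequence: ${r2.sequence}, Expected time: ${r2.expected}ms, Actual time: ${r2.actual}ms`);
    Meteor.call('logClient', r2.sequence + 1, (err3, r3) => {
      // ...and so on, one level of nesting per call, until sequence 5,
      // whose callback finally logs the total:
      // console.log(`Execution Time on Client = ${Date.now() - start}ms`);
    });
  });
});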

The server output is as before, but the client now reports as follows:

Sequence: 1, Expected time: 900ms, Actual time: 904ms
Sequence: 2, Expected time: 800ms, Actual time: 802ms
Sequence: 3, Expected time: 700ms, Actual time: 700ms
Sequence: 4, Expected time: 600ms, Actual time: 602ms
Sequence: 5, Expected time: 500ms, Actual time: 501ms
Execution Time on Client = 3570ms

All is good: client and server agree on what's happening, how long each step takes, and what the total time is.

this.unblock()

From the Meteor docs (linked above):

On the server, methods from a given client run one at a time. The N+1th invocation from a client won’t start until the Nth invocation returns. However, you can change this by calling this.unblock. This will allow the N+1th invocation to start running in a new fiber.

The important part of that quote is the first sentence: “On the server, methods from a given client run one at a time.”

The unstated corollary is that methods from different clients run concurrently. Even so, you will often see this.unblock() being used to allow several clients to execute a method at the same time. That is unnecessary: it's how Meteor methods work by default, without this.unblock().

The only use for this.unblock() is to allow a client to execute a method while that same client is already executing a method (the same method or another one). Unsurprisingly, this is a rare requirement.

Here’s what the server’s console shows when I call the method from two different clients at roughly the same time without using this.unblock():

I20170227-17:05:29.099(0)? Starting sequence 1
I20170227-17:05:29.910(0)? Starting sequence 1
I20170227-17:05:29.999(0)? Completed sequence 1
I20170227-17:05:30.013(0)? Starting sequence 2
I20170227-17:05:30.811(0)? Completed sequence 1
I20170227-17:05:30.812(0)? Completed sequence 2
I20170227-17:05:30.816(0)? Starting sequence 2
I20170227-17:05:30.820(0)? Starting sequence 3
I20170227-17:05:31.523(0)? Completed sequence 3
I20170227-17:05:31.527(0)? Starting sequence 4
I20170227-17:05:31.619(0)? Completed sequence 2
I20170227-17:05:31.623(0)? Starting sequence 3
I20170227-17:05:32.127(0)? Completed sequence 4
I20170227-17:05:32.131(0)? Starting sequence 5
I20170227-17:05:32.324(0)? Completed sequence 3
I20170227-17:05:32.327(0)? Starting sequence 4
I20170227-17:05:32.632(0)? Completed sequence 5
I20170227-17:05:32.928(0)? Completed sequence 4
I20170227-17:05:32.933(0)? Starting sequence 5
I20170227-17:05:33.432(0)? Completed sequence 5

You can see from the interleaved sequence numbers that the two clients' methods are already running concurrently (the end-to-end time was 4.3 seconds and the associated client timings were 3569ms and 3568ms).

Now, add in this.unblock():
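
The only change to the earlier method sketch is a this.unblock() call at the top of the method body:

Meteor.methods({
  logClient(p) {
    this.unblock();   // let the next queued call from this client start in a new fiber
    console.log(`Starting sequence ${p}`);
    const expected = 1000 - p * 100;
    const t0 = Date.now();
    Meteor._sleepForMs(expected);
    console.log(`Completed sequence ${p}`);
    return { sequence: p, expected, actual: Date.now() - t0 };
  },
});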

We’ll just run this with one client for clarity. On the server we get:

I20170227-17:20:49.267(0)? Starting sequence 1
I20170227-17:20:50.168(0)? Completed sequence 1
I20170227-17:20:50.173(0)? Starting sequence 2
I20170227-17:20:50.973(0)? Completed sequence 2
I20170227-17:20:50.979(0)? Starting sequence 3
I20170227-17:20:51.681(0)? Completed sequence 3
I20170227-17:20:51.684(0)? Starting sequence 4
I20170227-17:20:52.285(0)? Completed sequence 4
I20170227-17:20:52.288(0)? Starting sequence 5
I20170227-17:20:52.789(0)? Completed sequence 5

In the client we get:

Sequence: 1, Expected time: 900ms, Actual time: 901ms 
Sequence: 2, Expected time: 800ms, Actual time: 802ms
Sequence: 3, Expected time: 700ms, Actual time: 702ms
Sequence: 4, Expected time: 600ms, Actual time: 600ms
Sequence: 5, Expected time: 500ms, Actual time: 501ms
Execution Time on Client = 3562ms

So, no real change. Why? Well, we’re still using the (deeply nested) client code for dependent calls. Each call has to run to completion before the next is started. Let’s re-run using the original (independent client calls) code.

On the server we get:

I20170208-14:51:38.422(0)? Starting sequence 1
I20170208-14:51:38.422(0)? Starting sequence 2
I20170208-14:51:38.423(0)? Starting sequence 3
I20170208-14:51:38.423(0)? Starting sequence 4
I20170208-14:51:38.423(0)? Starting sequence 5
I20170208-14:51:38.925(0)? Completed sequence 5
I20170208-14:51:39.027(0)? Completed sequence 4
I20170208-14:51:39.124(0)? Completed sequence 3
I20170208-14:51:39.223(0)? Completed sequence 2
I20170208-14:51:39.321(0)? Completed sequence 1

An end-to-end time of 900ms is much better, and now we can clearly see how the method calls are being processed optimally. That 900ms is also entirely expected; it's the time for the longest-running method to complete, and all the others fit inside that time.

In the browser we get:

Sequence: 5, Expected time: 500ms, Actual time: 502ms
Sequence: 4, Expected time: 600ms, Actual time: 603ms
Sequence: 3, Expected time: 700ms, Actual time: 701ms
Sequence: 2, Expected time: 800ms, Actual time: 801ms
Sequence: 1, Expected time: 900ms, Actual time: 900ms
Execution Time on Client = 1ms

That is also what we expect here: method call 5 completes first, followed by 4, 3, 2 and 1. We can measure the actual time taken by rewriting the client code to use Promises and using Promise.all to collate all the returned data. For the basics of using Promises on the client, check out "Using Promises on the Client in Meteor".
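
As a sketch, we can wrap Meteor.call in a Promise (callPromise below is a hypothetical helper name; the linked post covers other ways to get a Promise-returning call) and collate the results with Promise.all:

// A Promise wrapper around Meteor.call; callPromise is a hypothetical helper name
const callPromise = (name, ...args) =>
  new Promise((resolve, reject) => {
    Meteor.call(name, ...args, (err, res) => (err ? reject(err) : resolve(res)));
  });

const start = Date.now();
const promises = [1, 2, 3, 4, 5].map((p) => callPromise('logClient', p));

Promise.all(promises).then((results) => {
  // results is ordered to match the promises array (1..5), not resolution order
  results.forEach((res) => {
    console.log(`Sequence: ${res.sequence}, Expected time: ${res.expected}ms, Actual time: ${res.actual}ms`);
  });
  console.log(`Execution Time on Client = ${Date.now() - start}ms`);
});

With that version of the client code in place, the browser reports: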

Sequence: 1, Expected time: 900ms, Actual time: 900ms
Sequence: 2, Expected time: 800ms, Actual time: 801ms
Sequence: 3, Expected time: 700ms, Actual time: 700ms
Sequence: 4, Expected time: 600ms, Actual time: 601ms
Sequence: 5, Expected time: 500ms, Actual time: 501ms
Execution Time on Client = 951ms

Awesome! But wait: the server reported that requests were completed in the order 5–1, yet the client reports them as 1–5. Here's the thing: Promise.all orders the result array to match the order of the original request array, even though the individual Promises may resolve in a different order. This is an important point: the result array implies an ordering which may not have happened. If you do need ordered resolution, enforce it at the client.
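
One way to enforce that ordering, reusing the hypothetical callPromise wrapper from the sketch above, is to await each call before making the next:

async function runInOrder() {
  const start = Date.now();
  for (let p = 1; p <= 5; p++) {
    // Each call is awaited before the next is made, so results arrive in request order
    const res = await callPromise('logClient', p);
    console.log(`Sequence: ${res.sequence}, Actual time: ${res.actual}ms`);
  }
  console.log(`Execution Time on Client = ${Date.now() - start}ms`);
}

runInOrder();

Of course, this reintroduces the sequential timing we saw earlier; ordered resolution and full concurrency pull in opposite directions.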

async methods

In Using Promises and async/await in Meteor we considered how we might use ES2017 async/await in our Meteor methods. Let's revise our method code and look at what happens when we use it in various ways from the client.

We added a small, Promise-based sleep function to replace Meteor._sleepForMs().
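
Sketched out, the async version of the method swaps the Fiber-based sleep for an awaited, Promise-based one (with the same assumed latency formula and return shape as before):

// A small Promise-based sleep to stand in for Meteor._sleepForMs()
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

Meteor.methods({
  async logClient(p) {
    console.log(`Starting sequence ${p}`);
    const expected = 1000 - p * 100;
    const t0 = Date.now();
    await sleep(expected);   // frees the event loop while we wait
    console.log(`Completed sequence ${p}`);
    return { sequence: p, expected, actual: Date.now() - t0 };
  },
});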

First, we run this with the asynchronous client code we used right at the start.

On the server we see this:

I20170216-13:12:48.476(0)? Starting sequence 1
I20170216-13:12:48.477(0)? Starting sequence 2
I20170216-13:12:48.477(0)? Starting sequence 3
I20170216-13:12:48.479(0)? Starting sequence 4
I20170216-13:12:48.479(0)? Starting sequence 5
I20170216-13:12:48.978(0)? Completed sequence 5
I20170216-13:12:49.080(0)? Completed sequence 4
I20170216-13:12:49.177(0)? Completed sequence 3
I20170216-13:12:49.278(0)? Completed sequence 2
I20170216-13:12:49.377(0)? Completed sequence 1

On the client we see this:

Sequence: 5, Expected time: 500ms, Actual time: 501ms
Sequence: 4, Expected time: 600ms, Actual time: 600ms
Sequence: 3, Expected time: 700ms, Actual time: 703ms
Sequence: 2, Expected time: 800ms, Actual time: 801ms
Sequence: 1, Expected time: 900ms, Actual time: 900ms
Execution Time on Client = 1ms

This is the same behavior we got when we used this.unblock() earlier, even though we didn’t use it this time! It turns out that an async method behaves like a normal method which uses this.unblock().

Now let’s try the nested (non-Promise) client code which waits for each method to complete before starting the next.

On the server we see this:

I20170216-13:17:38.208(0)? Starting sequence 1
I20170216-13:17:39.109(0)? Completed sequence 1
I20170216-13:17:39.114(0)? Starting sequence 2
I20170216-13:17:39.915(0)? Completed sequence 2
I20170216-13:17:39.920(0)? Starting sequence 3
I20170216-13:17:40.621(0)? Completed sequence 3
I20170216-13:17:40.626(0)? Starting sequence 4
I20170216-13:17:41.227(0)? Completed sequence 4
I20170216-13:17:41.232(0)? Starting sequence 5
I20170216-13:17:41.732(0)? Completed sequence 5

On the client we see this:

Sequence: 1, Expected time: 900ms, Actual time: 900ms
Sequence: 2, Expected time: 800ms, Actual time: 802ms
Sequence: 3, Expected time: 700ms, Actual time: 701ms
Sequence: 4, Expected time: 600ms, Actual time: 602ms
Sequence: 5, Expected time: 500ms, Actual time: 501ms
Execution Time on Client = 3569ms

In other words, the expected, predictable sequential order of execution.

Finally, we’ll use the Promise.all version of the client code.

On the server we see this:

I20170216-13:20:19.227(0)? Starting sequence 1
I20170216-13:20:19.288(0)? Starting sequence 2
I20170216-13:20:19.290(0)? Starting sequence 3
I20170216-13:20:19.290(0)? Starting sequence 4
I20170216-13:20:19.291(0)? Starting sequence 5
I20170216-13:20:19.744(0)? Completed sequence 5
I20170216-13:20:19.829(0)? Completed sequence 4
I20170216-13:20:19.931(0)? Completed sequence 3
I20170216-13:20:20.031(0)? Completed sequence 2
I20170216-13:20:20.128(0)? Completed sequence 1

On the client we see this:

Sequence: 1, Expected time: 900ms, Actual time: 907ms
Sequence: 2, Expected time: 800ms, Actual time: 801ms
Sequence: 3, Expected time: 700ms, Actual time: 701ms
Sequence: 4, Expected time: 600ms, Actual time: 602ms
Sequence: 5, Expected time: 500ms, Actual time: 500ms
Execution Time on Client = 912ms

Again, that's the same behavior we saw before with this.unblock() on the server.

In Conclusion

  1. Methods are independent as far as different clients are concerned. For example, Bob's client can call method A at the same time as Carol's client calls method A; those method invocations will run concurrently, and neither Bob nor Carol will wait for the other*.
  2. Using this.unblock() to try to improve (1) will do nothing — it’s already running as efficiently as it can.
  3. For any one client
    3.1. Methods are queued and started in the order they are called from the client: imagine a FIFO queue on the server for each client.
    3.2. If this.unblock() is not used, methods are also evaluated in order — the method runs to completion and then the queue is popped for the next method to execute.
    3.3. If this.unblock() is used (or you're using async methods), methods may be evaluated out of order: the queue is popped until empty, with each method starting up in a new fiber as soon as it's popped. Methods finish independently of the order in which they were queued, which can produce unpredictable interleaving: a recipe for hard-to-diagnose, race-induced bugs. If you want to take advantage of this approach, you need to be certain that it's safe for your methods to complete out of order.
  4. Use that async keyword as a handy reminder that you should check for interleaving safety.
  5. To avoid having to think about interleaving, enforce ordering at the client (or use neither this.unblock() nor async methods). It's easier to do this using async/await on the client; the alternative is nesting and callback hell.
  6. If you’re using async methods, you don’t need this.unblock().

* This isn't strictly true: Node.js is single-threaded, so any CPU-bound method will hold off all other work until it completes (this.unblock() won't help). However, if the method is running asynchronous code (typically some form of I/O), the event loop is made available to other clients, allowing multiple instances of the method to execute concurrently.



