What's the Yield Keyword in JavaScript

What's the yield keyword in JavaScript?

The MDN documentation is pretty good, IMO.

The function containing the yield keyword is a generator. When you call it, its formal parameters are bound to actual arguments, but its body isn't actually evaluated. Instead, a generator-iterator is returned. Each call to the generator-iterator's next() method performs another pass through the iterative algorithm. Each step's value is the value specified by the yield keyword. Think of yield as the generator-iterator version of return, indicating the boundary between each iteration of the algorithm. Each time you call next(), the generator code resumes from the statement following the yield.
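
As a quick illustration (a minimal sketch of my own, not code from the answer above), here is a small generator whose body only runs as next() is called:

function* countTo(limit) {
  // Calling countTo(3) binds `limit`, but none of this body runs yet.
  for (let i = 1; i <= limit; i++) {
    yield i; // each next() call resumes here and pauses at the next yield
  }
}

const it = countTo(3);  // returns a generator-iterator; nothing has executed yet
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: 2, done: false }
console.log(it.next()); // { value: 3, done: false }
console.log(it.next()); // { value: undefined, done: true }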

How does the yield keyword really work in JavaScript ES6 Generators?

Invoking createNames() does not execute any of the code inside of the generator. It creates an instance of an iterator, and execution begins on the first next() call.
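
The original question's createNames() generator isn't reproduced here, but based on the comments in the walkthrough below, a definition consistent with that behaviour might look like this (a reconstruction, not the question's actual code):

function* createNames() {
  const people = [];   // created on the first next() call
  people.push(yield);  // each yield pauses before push() receives its argument
  people.push(yield);
  people.push(yield);
  return people;       // handed back by the final next() call
}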

const iterator = createNames(); 
// Iterator instance created, but hasn't executed yet.

iterator.next('Brian');
// creates the people array
// attempts to push, yields
iterator.next('Paul');
// pushes 'Paul'
// attempts to push, yields
iterator.next('John');
// pushes 'John'
// attempts to push, yields
iterator.next();
// pushes undefined
// returns ["Paul", "John", undefined]

How does yield work as an argument in JavaScript?

With the next() method, you can pass data as an argument into the generator function.

function* logGenerator() {
  console.log(0);
  console.log(1, yield);
  console.log(2, yield);
  console.log(3, yield);
}

var gen = logGenerator();

// the first call of next executes from the start of the function
// until the first yield statement
gen.next(); // 0
gen.next('pretzel'); // 1 pretzel
gen.next('california'); // 2 california
gen.next('mayonnaise'); // 3 mayonnaise

What does the yield keyword do in Python?

To understand what yield does, you must understand what generators are. And before you can understand generators, you must understand iterables.

Iterables

When you create a list, you can read its items one by one. Reading its items one by one is called iteration:

>>> mylist = [1, 2, 3]
>>> for i in mylist:
...     print(i)
1
2
3

mylist is an iterable. When you use a list comprehension, you create a list, and so an iterable:

>>> mylist = [x*x for x in range(3)]
>>> for i in mylist:
...     print(i)
0
1
4

Everything you can use "for... in..." on is an iterable: lists, strings, files...

These iterables are handy because you can read them as much as you wish, but you store all the values in memory and this is not always what you want when you have a lot of values.

Generators

Generators are iterators, a kind of iterable you can only iterate over once. Generators do not store all the values in memory, they generate the values on the fly:

>>> mygenerator = (x*x for x in range(3))
>>> for i in mygenerator:
...     print(i)
0
1
4

It is just the same except you used () instead of []. BUT, you cannot perform for i in mygenerator a second time, since generators can only be used once: they calculate 0, then forget about it and calculate 1, and finish by calculating 4, one by one.

Yield

yield is a keyword that is used like return, except the function will return a generator.

>>> def create_generator():
...     mylist = range(3)
...     for i in mylist:
...         yield i*i
...
>>> mygenerator = create_generator() # create a generator
>>> print(mygenerator) # mygenerator is an object!
<generator object create_generator at 0xb7555c34>
>>> for i in mygenerator:
...     print(i)
0
1
4

This is a rather useless example here, but it's handy when you know your function will return a huge set of values that you will only need to read once.

To master yield, you must understand that when you call the function, the code you have written in the function body does not run. The function only returns the generator object; this is a bit tricky.

Then, your code will continue from where it left off each time for uses the generator.

Now the hard part:

The first time the for calls the generator object created from your function, it will run the code in your function from the beginning until it hits yield, then it'll return the first value of the loop. Then, each subsequent call will run another iteration of the loop you have written in the function and return the next value. This will continue until the generator is considered empty, which happens when the function runs without hitting yield. That can be because the loop has come to an end, or because you no longer satisfy an "if/else".



Your code explained

Generator:

# Here you create the method of the node object that will return the generator
def _get_child_candidates(self, distance, min_dist, max_dist):

    # Here is the code that will be called each time you use the generator object:

    # If there is still a child of the node object on its left
    # AND if the distance is ok, return the next child
    if self._leftchild and distance - max_dist < self._median:
        yield self._leftchild

    # If there is still a child of the node object on its right
    # AND if the distance is ok, return the next child
    if self._rightchild and distance + max_dist >= self._median:
        yield self._rightchild

    # If the function arrives here, the generator will be considered empty:
    # there are no more than two values, the left and the right children

Caller:

# Create an empty list and a list with the current object reference
result, candidates = list(), [self]

# Loop on candidates (they contain only one element at the beginning)
while candidates:

    # Get the last candidate and remove it from the list
    node = candidates.pop()

    # Get the distance between obj and the candidate
    distance = node._get_dist(obj)

    # If the distance is ok, then you can fill in the result
    if distance <= max_dist and distance >= min_dist:
        result.extend(node._values)

    # Add the children of the candidate to the candidate's list
    # so the loop will keep running until it has looked
    # at all the children of the children of the children, etc. of the candidate
    candidates.extend(node._get_child_candidates(distance, min_dist, max_dist))

return result

This code contains several smart parts:

  • The loop iterates on a list, but the list expands while the loop is being iterated. It's a concise way to go through all these nested data even if it's a bit dangerous since you can end up with an infinite loop. In this case, candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) exhausts all the values of the generator, but while keeps creating new generator objects which will produce different values from the previous ones since it's not applied on the same node.

  • The extend() method is a list object method that expects an iterable and adds its values to the list.

Usually, we pass a list to it:

>>> a = [1, 2]
>>> b = [3, 4]
>>> a.extend(b)
>>> print(a)
[1, 2, 3, 4]

But in your code, it gets a generator, which is good because:

  1. You don't need to read the values twice.
  2. You may have a lot of children and you don't want them all stored in memory.

And it works because Python does not care if the argument of a method is a list or not. Python expects iterables so it will work with strings, lists, tuples, and generators! This is called duck typing and is one of the reasons why Python is so cool. But this is another story, for another question...

You can stop here, or read a little bit more to see an advanced use of a generator:

Controlling generator exhaustion

>>> class Bank(): # Let's create a bank, building ATMs
...     crisis = False
...     def create_atm(self):
...         while not self.crisis:
...             yield "$100"
>>> hsbc = Bank() # When everything's ok the ATM gives you as much as you want
>>> corner_street_atm = hsbc.create_atm()
>>> print(corner_street_atm.next())
$100
>>> print(corner_street_atm.next())
$100
>>> print([corner_street_atm.next() for cash in range(5)])
['$100', '$100', '$100', '$100', '$100']
>>> hsbc.crisis = True # Crisis is coming, no more money!
>>> print(corner_street_atm.next())
<type 'exceptions.StopIteration'>
>>> wall_street_atm = hsbc.create_atm() # It's even true for new ATMs
>>> print(wall_street_atm.next())
<type 'exceptions.StopIteration'>
>>> hsbc.crisis = False # The trouble is, even post-crisis the ATM remains empty
>>> print(corner_street_atm.next())
<type 'exceptions.StopIteration'>
>>> brand_new_atm = hsbc.create_atm() # Build a new one to get back in business
>>> for cash in brand_new_atm:
...     print cash
$100
$100
$100
$100
$100
$100
$100
$100
$100
...

Note: for Python 3, use print(corner_street_atm.__next__()) or print(next(corner_street_atm)).

It can be useful for various things like controlling access to a resource.

Itertools, your best friend

The itertools module contains special functions to manipulate iterables. Ever wish to duplicate a generator?
Chain two generators? Group values in a nested list with a one-liner? Map / Zip without creating another list?

Then just import itertools.

An example? Let's see the possible orders of arrival for a four-horse race:

>>> import itertools
>>> horses = [1, 2, 3, 4]
>>> races = itertools.permutations(horses)
>>> print(races)
<itertools.permutations object at 0xb754f1dc>
>>> print(list(itertools.permutations(horses)))
[(1, 2, 3, 4),
(1, 2, 4, 3),
(1, 3, 2, 4),
(1, 3, 4, 2),
(1, 4, 2, 3),
(1, 4, 3, 2),
(2, 1, 3, 4),
(2, 1, 4, 3),
(2, 3, 1, 4),
(2, 3, 4, 1),
(2, 4, 1, 3),
(2, 4, 3, 1),
(3, 1, 2, 4),
(3, 1, 4, 2),
(3, 2, 1, 4),
(3, 2, 4, 1),
(3, 4, 1, 2),
(3, 4, 2, 1),
(4, 1, 2, 3),
(4, 1, 3, 2),
(4, 2, 1, 3),
(4, 2, 3, 1),
(4, 3, 1, 2),
(4, 3, 2, 1)]

Understanding the inner mechanisms of iteration

Iteration is a process involving iterables (objects implementing the __iter__() method) and iterators (objects implementing the __next__() method).
Iterables are any objects you can get an iterator from. Iterators are objects that let you iterate over iterables.

There is more about this in articles explaining how for loops work.

What does the 'yield' keyword do in Flutter?

yield adds a value to the output stream of the surrounding async* function. It's like return, but doesn't terminate the function.

See https://dart.dev/guides/language/language-tour#generators

Stream<int> asynchronousNaturalsTo(int n) async* {
  int k = 0;
  while (k < n) yield k++;
}

When the yield statement executes, it adds the result of evaluating its expression to the stream. It doesn’t necessarily suspend (though in the current implementations it does).

JS Generators: How is `return yield` different from `yield`?

First, I'll start by saying that Generators are a somewhat complicated topic, so giving a complete overview here won't be possible. For more information I'd highly recommend Kyle Simpson's You Don't Know JS series. Book 5 (Async & Performance) has an excellent discussion on the ins and outs of generators.

Onto the specific example you gave though!

First, the code you wrote in the example will show no difference, but only if it's run correctly. Here's an example:

function* foo() {
  yield 123;
}

function* bar() {
  return yield 123;
}

var f = foo();
var b = bar();

f.next(); // {value: 123, done: false}
f.next(); // {value: undefined, done: true}
b.next(); // {value: 123, done: false}
b.next(); // {value: undefined, done: true}

As you can see, I'm not calling the generator like a normal function. The generator itself returns a generator object (a form of iterator). We store that iterator in a variable and use the .next() function to advance the iterator to the next step (a yield or return keyword).

The yield keyword allows us to pass a value into the generator, and this is where your examples will run differently. Here's what that would look like:

function* foo() {
  yield 123;
}

function* bar() {
  return yield 123;
}

var f = foo();
var b = bar();

// Start the generator and advance to the first `yield`
f.next(); // {value: 123, done: false}
b.next(); // {value: 123, done: false}

/**
 * Now that I'm at a `yield` statement I can pass a value into the `yield`
 * keyword. There aren't any more `yield` statements in either function,
 * so .next() will look for a return statement or return undefined if one
 * doesn't exist. Like so:
 */
f.next(2); // {value: undefined, done: true}
b.next(2); // {value: 2, done: true}

Notice that f.next(2) will return undefined as a value, whereas b.next(2) returns the number 2.

This is because the value we're passing into the second .next() call is sent into the bar() generator and becomes the result of the yield expression, which appears after a return keyword; that means the value we send into the generator is returned back to us and finishes the generator.

foo() has no explicit return statement, so you don't get the value back: foo() already yielded its only value, so the generator is finished and the result is {value: undefined, done: true}.

Note that constructs that consume iterators (such as for...of) don't typically use this return value, even though this answer demonstrates how you can read it manually with .next().
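
For example (a small illustration of my own, not part of the original answer), a for...of loop over bar() only sees the yielded value; the value attached to the done: true result is ignored:

function* bar() {
  return yield 123;
}

for (const value of bar()) {
  console.log(value); // logs 123 only; the returned value never appears in the loop
}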

Hope that this helps!

Understanding the 'yield' keyword in JavaScript?

There is no direct equivalent without generator support. However, one can fake it by returning a "generator" object; basically, the continuation code is moved into the next() method of that object.

Consider this fib-generator example on MDN:

function* fib() {
  var i = 0, j = 1;
  while (true) {
    yield i;
    var t = i;
    i = j;
    j += t;
  }
}

var g = fib();
for (var i = 0; i < 10; i++) {
  console.log(g.next().value);
}

And re-written using a fake generator:

function fib() {
  var i = 0, j = 1;
  return {
    'next': function () {
      var yieldRet = i;
      // These haven't occurred before the `yield` in the above generator,
      // but it makes it easier to do it in the same order here.
      // Just make sure there are no OBSERVABLE side-effects.
      var t = i;
      i = j;
      j += t;
      return yieldRet;
    }
  };
}

var g = fib();
for (var i = 0; i < 10; i++) {
  console.log(g.next());
}

Now, this does become a bit trickier with the addition of observable mutable state; the given example could still be expressed as a state machine. Note that each next can "advance" the state.

var currentNode;
function yield1() {
  var y = { next: st0 };
  return y;
  function st0() {
    if (currentNode) {
      y.next = st1;
      return currentNode;
    } else {
      y.next = stZ;
    }
  }
  function st1() {
    y.next = stZ;
    currentNode = null; // observable side-effect!
  }
  function stZ() {
  }
}

var g = yield1();
currentNode = "x";
console.log(g.next()); // "x"
console.log(currentNode); // still "x"
g.next();
console.log(currentNode); // null

How to use the Yield keyword in React?

Put your generator outside of the functional component. Every time the component renders, the component function gets called, which would recreate the generator from scratch, so its current state would be lost.

import { useState } from 'react';

// A generator function that yields the numbers 0 up to someValue - 1
function* someProcess(someValue) {
  console.log("Creating someProcess");
  for (let i = 0; i < someValue; i += 1) {
    yield i;
  }
}

// Getting an instance of the generator (outside the component, so it persists across renders)
const process = someProcess(10);

const MyComponent = () => {
  // Some state for the count
  const [count, setCount] = useState(0);

  // onClick handler to get the next yield value and set the count state to that value
  const getNextYieldValue = () => {
    const nextValue = process.next().value;
    console.log(nextValue);
    setCount(nextValue);
  };

  return (
    <div>
      <button onClick={getNextYieldValue} type="button">Get Next Value</button>
      <Child count={count} />
    </div>
  );
};
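
If you would rather keep the generator scoped to the component (my own alternative sketch, not part of the original answer), you can hold the instance in a ref so it survives re-renders instead of being recreated each time:

import { useRef, useState } from 'react';

const MyOtherComponent = () => {
  // Create the generator once per component instance and keep it across renders.
  const processRef = useRef(null);
  if (processRef.current === null) {
    processRef.current = someProcess(10); // someProcess is the generator defined above
  }

  const [count, setCount] = useState(0);

  const getNextYieldValue = () => {
    setCount(processRef.current.next().value);
  };

  return (
    <button onClick={getNextYieldValue} type="button">
      Get Next Value: {count}
    </button>
  );
};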

Trying to understand generators / yield in node.js - what executes the asynchronous function?

When writing async code with generators you are dealing with two types of functions:

  • normal functions declared with function. These functions cannot yield. You cannot write async code in sync style with them because they run to completion; you can only handle asynchronous completion through callbacks (unless you invoke extra power like the node-fibers library or a code transform).
  • generator functions declared with function*. These functions can yield. You can write async code in sync style inside them because they can yield. But you need a companion function that creates the generator, handles the callbacks and resumes the generator with a next call every time a callback fires.

There are several libraries that implement companion functions. In most of these libraries, the companion function handles a single function* at a time and you have to put a wrapper around every function* in your code. The galaxy library (that I wrote) is a bit special because it can handle function* calling other function* without intermediate wrappers. The companion function is a bit tricky because it has to deal with a stack of generators.

The execution flow can be difficult to understand because of the little yield/next dance between your function* and the companion function. One way to understand the flow is to write an example with the library of your choice, add console.log statements both in your code and in the library, and run it.
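
As a concrete illustration (a minimal sketch of my own, assuming the generator yields promises; this is not the API of galaxy or any other specific library), a companion "runner" function can look like this:

// Drives a generator that yields promises: every time a yielded promise
// settles, the runner resumes the generator with the resolved value.
function run(generatorFunction) {
  const gen = generatorFunction();

  function step(previousValue) {
    const { value, done } = gen.next(previousValue); // resume the generator
    if (done) return Promise.resolve(value);
    // `value` is expected to be a promise; step again once it resolves.
    return Promise.resolve(value).then(step);
  }

  return step();
}

// Async code written in "sync style" inside a function*:
run(function* () {
  const a = yield Promise.resolve(1);      // pauses here until the promise resolves
  const b = yield Promise.resolve(a + 1);  // resumes with 1, pauses again
  console.log(b);                          // 2
});

A production-grade companion would also route promise rejections back into the generator with gen.throw(), which is where much of the trickiness lives.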


