Discriminate First and Last Element in Each

Discriminate first and last element in each?

One of the nicer approaches is:

@example.tap do |head, *body, tail|
  head.do_head_specific_task!
  tail.do_tail_specific_task!
  body.each { |segment| segment.do_body_segment_specific_task! }
end

Type discrimination based on *any* element of an array (not *all* elements)

To do this, dogBreeds needs to know what pets is, and for that we need a generic:

type Household<Pets extends ReadonlyArray<string>> = {

And of course pets is of type Pets:

  pets: Pets;
}

Then we say, if Pets includes "dog", add dogBreeds:

 & (Pets extends [] ? {} : "dog" extends Pets[number] ? { dogBreeds: string[]; } : {})

Before that, though, we check whether Pets is empty; otherwise dogBreeds would show up even when pets is empty.

We intersect the result of this check with the base type { pets: Pets; }.

Then we can do it with cats too:

  & (Pets extends [] ? {} : "cat" extends Pets[number] ? { catBreeds: string[]; } : {})

However, we can't just use this type like this:

const house: Household = { ... };

TypeScript requires us to supply a type argument for the generic here, but then we'd be duplicating the list of pets, which is not ideal.

To solve this, we need a wrapper function to do the inferring for us:

function household<Pets extends ReadonlyArray<string>>(household: Household<Pets>): Household<Pets> {
  return household;
}

And now we can use it:

const house = household({ ... });
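
Putting the pieces together, a minimal sketch of the whole thing might look like this (the concrete pets value and breed list are only illustrative; the as const keeps Pets inferred as a tuple of literal strings rather than string[]):

type Household<Pets extends ReadonlyArray<string>> = {
  pets: Pets;
} & (Pets extends [] ? {} : "dog" extends Pets[number] ? { dogBreeds: string[]; } : {})
  & (Pets extends [] ? {} : "cat" extends Pets[number] ? { catBreeds: string[]; } : {});

function household<Pets extends ReadonlyArray<string>>(household: Household<Pets>): Household<Pets> {
  return household;
}

// "dog" is in pets, so dogBreeds is required here; catBreeds is not.
const house = household({
  pets: ["dog"] as const,
  dogBreeds: ["beagle"],
});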


How to uniq an array, keeping the last duplicate of each element instead of the first?

You can accomplish this by reversing the array, uniquing it, and then reversing it again to the original order:

["a", "b", "c", "a"].reverse.uniq.reverse
#=> ["b", "c", "a"]

Iterate over n to last item in a Ruby enumerable

Ruby has a cool way of doing this with the splat operator *. Here is an example:

a = [1,2,3,4]
first, *rest = *a
p first # 1
p rest  # [2, 3, 4]
p a     # [1, 2, 3, 4]

Here is your code rewritten:

first, *rest = @model.children
puts first.full_name
rest.each do |child|
  puts child.short_name
end

I hope this helps!

How do I extend .each so elements inside the loop have a last? method to check if element is the last one?

Extending the objects being yielded is the wrong way to do it, since the object itself shouldn't be aware of its inclusion in a given collection (and what if you had the same object in multiple arrays?)

If you're wanting to just avoid operating on the last item in an array, why not something like:

arr[0..-2].each {|elem| ... }

You could also extend Enumerable with a variation on Darshan's second answer, allowing you to exclude the last element in any given enumerable:

module Enumerable
  def except_last
    each_with_index do |el, index|
      yield el unless index == count - 1
    end
  end
end

[1,2,3,4,5].each.except_last {|e| print e }
1234

(In this case, the each is actually redundant, but it's nice and readable with it in there.)

Loop through each element from third to last of active record object

Use the Array#drop method to drop the first two elements of @obj:

@obj.drop(2).each do |obj|
  # whatever...
end

how to discriminate two numbers that are very near?

Let's see if I understood correctly: you have an array of vertex points, usually just a two-element two-dimensional array, but sometimes it receives an extra vertex-point array whose value differs only slightly (by about 1*10^-14), and you want to discard the higher, extra values.

I came up with something like this:

const arr = [
  [112.02598008561951, 9.12963236661007],
  [112.02598008561952, 9.129632366610064],
  [9.751846481442218, 3.5376744911193576],
];

// Compare each point's first coordinate with the next one's and drop the
// neighbour when they differ by no more than the tolerance (2e-14).
for (let i = 0; i < arr.length - 1; i++) {
  const diff = Math.abs(arr[i][0] - arr[i + 1][0]);
  if (diff <= 2e-14) arr.splice(i + 1, 1);
}

console.log("NEW ARR", arr);

Why does the last element reflect the number of non-negative solutions?

This algorithm is pretty cool and demonstrates the power of looking for a solution from a different perspective.

Let's take an example: 3x + 2y + z = 6, where LHS is the left-hand side and RHS is the right-hand side.

dp[k] will keep track of the number of unique ways to arrive at a RHS value of k by substituting non-negative integer values for LHS variables.

The i loop iterates over the variables in the LHS. The algorithm begins with setting all the variables to zero. So, the only possible k value is zero, hence

k        0   1   2   3   4   5   6
dp[k]    1   0   0   0   0   0   0

For i = 0, we will update dp to reflect what happens if x is 1 or 2. We don't care about x > 2 because the solutions are all non-negative and 3x would be too big. The j loop is responsible for updating dp and dp[k] gets incremented by dp[k - 3] because we can arrive at RHS value k by adding one copy of the coefficient 3 to k-3. The result is

k        0   1   2   3   4   5   6
dp[k]    1   0   0   1   0   0   1

Now the algorithm continues with i = 1, updating dp to reflect all possible RHS values where x is 0, 1, or 2 and y is 0, 1, 2, or 3. This time the j loop increments dp[k] by dp[k-2] because we can arrive at RHS value k by adding one copy of the coefficient 2 to k-2, resulting in

k        0   1   2   3   4   5   6
dp[k]    1   0   1   1   1   1   2

Finally, the algorithm incorporates z = 1, 2, 3, 4, 5, or 6, resulting in

k        0   1   2   3   4   5   6
dp[k]    1   1   2   3   4   5   7

In addition to computing the answer in pseudo-polynomial time, dp encodes the answer for every RHS <= the input right hand side.
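
For reference, here is a small sketch of the DP being described, using the 3x + 2y + z = 6 example (the coefficient list and target are taken from this walkthrough, not from the original question's code):

const coefficients = [3, 2, 1]; // coefficients of x, y, z in 3x + 2y + z
const target = 6;               // the RHS value we want to reach

// dp[k] = number of ways to reach RHS value k with the variables seen so far
const dp: number[] = new Array(target + 1).fill(0);
dp[0] = 1; // one way to reach 0: set every variable to zero

for (const c of coefficients) {        // the "i loop": one pass per variable
  for (let k = c; k <= target; k++) {  // the "j loop": add one more copy of c
    dp[k] += dp[k - c];                // reach k by extending a way to reach k - c
  }
}

console.log(dp); // [1, 1, 2, 3, 4, 5, 7] -> dp[6] = 7 non-negative solutions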

TypeScript, objects and type discrimination

For both your original and extended examples, you are trying to represent something like a "correlated union type" as discussed in microsoft/TypeScript#30581.

For example, inside renderElement, the renderer value is a union of functions, while the element value is a union of function arguments. But the compiler won't let you call renderer(element) because it doesn't realize that those two unions are correlated to each other. It has forgotten that if element is of type A then renderer will accept a value of type A. And since it has forgotten the correlation, it sees any call of renderer() as calling a union of functions. The only thing the compiler will accept as an argument is something which would be safe for every function type in the union... something which is both an A and a B, or the intersection A & B, which reduces to never because it's not possible for something to be both an A and a B:

const renderElement = (element: AB) => {
  const config = Configs[element.id];
  const renderer = config.renderer;
  /* const renderer: ((data: A) => void) | ((data: B) => void) */
  return renderer(element); // error, element is not never (A & B)
}

Anyway, in both cases, the easiest thing to do is to use a type assertion to tell the compiler not to worry about its inability to verify type safety:

  return (renderer as (data: AB) => void)(element); // okay

Here you're telling the compiler that renderer will actually accept A or B, whatever the caller wants to pass in. This is a lie, but it's harmless because you know that element will turn out to be the type that renderer really wants.


Until very recently that would be the end of it. But microsoft/TypeScript#47109 was implemented to provide a way to get type-safe correlated unions. It was just merged into the main branch of the TypeScript code base, so as of now it looks like it will make it into the TypeScript 4.6 release. We can use nightly typescript@next builds to preview it.

Here's how we'd rewrite your original example code to use the fix. First, we write an object type which represents a mapping between the discriminant of A and B and their corresponding data types:

type TypeMap = { a: boolean, b: string };

Then we can define A, B, and AB in terms of TypeMap:

type AB<K extends keyof TypeMap = keyof TypeMap> = 
{ [P in K]: { id: P, value: TypeMap[P] } }[K];

This is what is being called "a distributive object type". Essentially we are taking K, a generic type parameter constrained to the discriminant values, splitting it up into its union members P, and distributing the operation {id: P, value: TypeMap[P]} over that union.

Let's make sure that works:

type A = AB<"a">; // type A = { id: "a"; value: boolean; }
type B = AB<"b"> // type B = { id: "b"; value: string; }
type ABItself = AB // { id: "a"; value: boolean; } | { id: "b"; value: string; }

(Note that when we write AB without a type parameter, it uses the default of keyof TypeMap which is just the union "a" | "b".)

Now, for configs, we need to annotate it as being of a similarly mapped type which turns TypeMap into a version where each property K has a renderer property that is a function accepting AB<K>:

const configs: { [K in keyof TypeMap]: { renderer: (data: AB<K>) => void } } = {
  a: { renderer: (data: A) => { } },
  b: { renderer: (data: B) => { } }
};

This annotation is crucial. Now the compiler can detect that AB<K> and configs are related to each other. If you make renderElement a generic function in K, then the call succeeds because a function accepting AB<K> will accept a value of type AB<K>:

const renderElement = <K extends keyof TypeMap>(element: AB<K>) => {
  const config = configs[element.id];
  const renderer = config.renderer;
  return renderer(element); // okay
}

Now there's no error. And you should be able to call renderElement and have the compiler infer K based on the value you pass in:

renderElement({ id: "a", value: true });
// const renderElement: <"a">(element: { id: "a"; value: boolean; }) => void
renderElement({ id: "b", value: "okay" });
// const renderElement: <"b">(element: { id: "b"; value: string; }) => void
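
And, under the same setup, a payload whose value does not match its id is rejected, since the compiler infers K from the argument (the "oops" string here is just an illustration):

renderElement({ id: "a", value: "oops" }); // error: value must be a boolean when id is "a"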

So that's the gist of it. For your extended example you can write your types like

type ItemMap = { header: HeaderProps, image: ImageProps, group: GroupProps };
type Item<K extends keyof ItemMap = keyof ItemMap> = { [P in K]: { kind: P, data: ItemMap[P] } }[K];
type Whatever = { a: any };
const itemsConfig: { [K in keyof ItemMap]: { onRender: (props: ItemMap[K]) => Whatever } } = {
  header: { onRender: (props: HeaderProps) => { return { a: props } } },
  image: { onRender: (props: ImageProps) => { return { a: props } } },
  group: { onRender: (props: GroupProps) => { return { a: props } } }
};

And your renderItems() function also needs a slight refactoring so that each item can be passed to a generic callback:

const renderItems = (items: Item[]) => {
  const result: Whatever[] = [];
  items.forEach(<K extends keyof ItemMap>(item: Item<K>) => {
    const { onRender } = itemsConfig[item.kind];
    result.push(onRender(item.data));
  });
  return result;
}


How to discriminate elements from database that have many occurrences

You can try joining the ProjectResources table with itself as follows:

select distinct p1.employeeID
from ProjectResources p1
join ProjectResources p2
  on p1.employeeID = p2.employeeID
 and p1.projectID <> p2.projectID;

This just checks, for each row, whether there is another row with the same employeeID value but a different projectID value. We don't care how many of them there are, as long as there is at least one, and that is why we select distinct, so that the same employeeID does not appear more than once (without the distinct keyword, we would get a row for every such pair of projects the employee worked on).

I used the table from your updated question to create an actual table at sqlfiddle.com. Next time you can (and should) do this yourself and post the link in your question.


But this sounds suspiciously like homework.

If you must have the counts:

-- The subquery selects distinct (employeeID, projectID) pairs so the window
-- count reflects the number of projects, not the number of self-join matches.
select distinct employeeID,
       count(*) over (partition by employeeID) as nbProjects
from (
  select distinct p1.employeeID, p1.projectID
  from ProjectResources p1
  join ProjectResources p2
    on p1.employeeID = p2.employeeID
   and p1.projectID <> p2.projectID
) sq;

