Get the Keys for Duplicate Values in an Array


function get_keys_for_duplicate_values($my_arr, $clean = false) {
    if ($clean) {
        return array_unique($my_arr);
    }

    $dups = $new_arr = array();
    foreach ($my_arr as $key => $val) {
        if (!isset($new_arr[$val])) {
            $new_arr[$val] = $key;
        } else {
            if (isset($dups[$val])) {
                $dups[$val][] = $key;
            } else {
                $dups[$val] = array($key);
                // Comment out the previous line, and uncomment the following line,
                // to include the initial key in the dups array.
                // $dups[$val] = array($new_arr[$val], $key);
            }
        }
    }
    return $dups;
}

Obviously the function name is a bit long. ;)

Now $dups will contain a multidimensional array keyed by each duplicate value, listing every key at which that value reappeared. If you pass true as the second argument, the function instead returns the original array with the duplicate values removed.

Alternatively, you could pass the original array by reference and have the function adjust it in place while still returning the array of duplicates.
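For comparison, here is a rough JavaScript port of the function above (a sketch of the same algorithm, not the original author's code; the function name and sample input are made up for illustration):

```javascript
// Returns an object mapping each duplicate value to the keys (indices)
// at which it reappears after its first occurrence.
function getKeysForDuplicateValues(arr) {
  const seen = new Map(); // value -> first index it appeared at
  const dups = {};        // value -> indices of later duplicates
  arr.forEach((val, key) => {
    if (!seen.has(val)) {
      seen.set(val, key);
    } else {
      (dups[val] ??= []).push(key);
    }
  });
  return dups;
}

console.log(getKeysForDuplicateValues(["a", "b", "a", "c", "b", "a"]));
// { a: [2, 5], b: [4] }
```

As in the PHP version, the first occurrence of each value is not reported; only the later duplicates are.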

Find duplicate keys in an array of objects with different keys in JavaScript

let letters = [{"a": 1, "b": 2, "c": 7}, {"d": 4, "c": 21, "f": 2}, {"g": 34, "c": 2}];

let result = letters
    .map(group => Object.keys(group))
    .reduce((arr, v) => arr ? arr.filter(key => v.includes(key)) : v, null);

console.log(result); // ["c"]

JavaScript: check if duplicate key values exist in array of objects and remove all but most recently added object having that key value

You can turn it into a Map indexed by m_id, then take the map's values:

const map = new Map(
    arr.map(obj => [obj.m_id, obj])
);
const deduplicatedArr = [...map.values()];

(You might be able to use a plain object here, but only if the relative order of non-duplicate IDs doesn't need to be preserved: since the IDs are numeric, they would be iterated in ascending numeric order if they were properties of an object.)
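As a quick illustration of the technique (the contents of arr below are hypothetical; only the m_id field comes from the question):

```javascript
// Sample data: a later entry with m_id 1 should replace the earlier one.
const arr = [
  { m_id: 1, name: "first" },
  { m_id: 2, name: "second" },
  { m_id: 1, name: "first-updated" },
];

// Later Map entries overwrite earlier ones with the same key,
// so the most recently added object per m_id wins.
const map = new Map(arr.map(obj => [obj.m_id, obj]));
const deduplicatedArr = [...map.values()];

console.log(deduplicatedArr);
// [ { m_id: 1, name: "first-updated" }, { m_id: 2, name: "second" } ]
```

Note that the Map also preserves first-insertion order of the keys, which is why m_id 1 still comes before m_id 2 in the result.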

Find duplicate keys in a single array

I assume this is a one-time job to clean out some input data in a file, and not something that needs to happen automatically.

If your data is originally in a CSV-type file and you have access to some GNU tools, I often use something like

$ cut -d, -f1 filename.csv | sort | uniq -d

This takes the first column of the CSV file and prints any duplicated keys.

You probably want to read up on the individual commands (e.g. man uniq) to find the correct parameters for your case.

Multi-dimensional array return keys with duplicate values

This answer utilizes PHP's built-in array functions to find the duplicates:

$products = [
    0 => ['product-id' => 124],
    1 => ['product-id' => 125],
    2 => ['product-id' => 124],
    3 => ['product-id' => 126],
    4 => ['product-id' => 126],
    8 => ['product-id' => 124],
];

// Find the duplicates
$product_ids = array_column($products, 'product-id');
$count = array_count_values($product_ids);
$duplicates = array_filter($count, function($var) {
    return $var > 1;
});

// List all the entries with duplicate ids
// (iterate over the ids themselves; array_flip would silently drop
// an id whenever two ids share the same count):
foreach (array_keys($duplicates) as $product_id) {
    $filter = array_filter($products, function($var) use ($product_id) {
        return ($var['product-id'] === $product_id);
    });
    print_r('Product-id: ' . $product_id . ' is duplicated in entries: ');
    print_r(array_keys($filter));
}

The output:

// Product-id: 124 is duplicated in entries: Array
// (
//     [0] => 0
//     [1] => 2
//     [2] => 8
// )
// Product-id: 126 is duplicated in entries: Array
// (
//     [0] => 3
//     [1] => 4
// )
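The same count-then-filter idea can be sketched in JavaScript (the product data below mirrors the PHP example, but as a plain array, so the entry indices differ from the PHP keys):

```javascript
const products = [
  { "product-id": 124 },
  { "product-id": 125 },
  { "product-id": 124 },
  { "product-id": 126 },
  { "product-id": 126 },
  { "product-id": 124 },
];

// Count occurrences of each product id (plays the role of array_count_values).
const counts = new Map();
for (const p of products) {
  const id = p["product-id"];
  counts.set(id, (counts.get(id) ?? 0) + 1);
}

// Keep only the ids seen more than once (plays the role of array_filter).
const duplicateIds = [...counts].filter(([, n]) => n > 1).map(([id]) => id);
console.log(duplicateIds); // [124, 126]

// List the entry indices where each duplicate id occurs.
for (const id of duplicateIds) {
  const entries = products
    .map((p, i) => (p["product-id"] === id ? i : -1))
    .filter(i => i !== -1);
  console.log(`Product-id ${id} is duplicated in entries:`, entries);
}
```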

Find duplicate values in an array which creates a new array with key as duplicate keys and values as the dupes

Try this for a potential speed boost over the other solution. It will, however, use more memory on large data sets.

<?php

$orig = array(
    1 => 10,
    2 => 11,
    3 => 12,
    4 => 12,
    5 => 12,
    6 => 13,
    7 => 13,
);

$seen = array();
$dupes = array();

foreach ($orig as $k => $v) {
    if (isset($seen[$v])) {
        $dupes[$k] = $seen[$v];
    } else {
        $seen[$v] = $k;
    }
}
unset($seen);

var_dump($dupes);
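For comparison, the same single-pass approach sketched in JavaScript (assumed data mirroring the PHP example; note that Object.entries yields the keys as strings):

```javascript
const orig = { 1: 10, 2: 11, 3: 12, 4: 12, 5: 12, 6: 13, 7: 13 };

const seen = new Map(); // value -> key where it was first seen
const dupes = {};       // duplicate key -> key of the first occurrence

for (const [k, v] of Object.entries(orig)) {
  if (seen.has(v)) {
    dupes[k] = seen.get(v);
  } else {
    seen.set(v, k);
  }
}

console.log(dupes);
// { 4: "3", 5: "3", 7: "6" }
```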

