Get Rid of Columns With Null Value in Json Output

Get rid of columns with null value in json output

The best way is to check every element in the dict and remove the keys whose value is null (None once json.load has parsed the file). It would look something like this:

import json

with open('data.json') as f:
    json_dict = json.load(f)

# iterate over a copy of the keys so entries can be removed while looping
for key in list(json_dict):
    if json_dict[key] is None:
        json_dict.pop(key)

With json_dict[key] you get the value of the key, and with the pop() function you delete the element from the dictionary. pop() also returns the value of the key being deleted.
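If you prefer not to mutate the dictionary while looping, a dict comprehension builds a cleaned copy instead. A minimal sketch, assuming the result should be written to a hypothetical out.json:

import json

with open('data.json') as f:
    json_dict = json.load(f)

# keep only the top-level keys whose value is not null
cleaned = {k: v for k, v in json_dict.items() if v is not None}

with open('out.json', 'w') as f:   # out.json is a made-up output path
    json.dump(cleaned, f, indent=2)

Note that, like the loop above, this only looks at top-level keys; nested nulls are left in place.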

Remove NULL fields from JSON in Greenplum

I've had the same issue in GP 5.17 on pg 8.3, and have had success with this regex to remove the null key/value pairs. I use this in the initial insert to a json column, but you could adapt it however you need:

select
    col5,
    col6,
    regexp_replace(regexp_replace(
        (SELECT row_to_json(j) FROM
            (SELECT
                col1, col2, col3, col4
            ) AS j)::text,
        '(?!{|,)("[^"]+":null[,]*)', '', 'g'), '(,})$', '}')::json
        AS nvp_json
from foo

Working from the inside out: the result of the row_to_json constructor is first cast to text, then the inner regexp_replace removes any "name":null pairs (along with a trailing comma), the outer regexp_replace trims any hanging comma left before the closing brace, and finally the whole thing is cast back to json.
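To see the two passes in isolation, here is a rough sketch using Python's re module on a made-up row text (Postgres and Python regex flavors differ slightly, and the brace is escaped for Python, but these particular patterns behave the same way in both):

import re

# hypothetical text produced by row_to_json(...)::text
row = '{"col1":1,"col2":null,"col3":"x","col4":null}'

# inner pass: drop each "name":null pair plus a trailing comma, if any
step1 = re.sub(r'(?!\{|,)("[^"]+":null[,]*)', '', row)   # '{"col1":1,"col3":"x",}'
# outer pass: trim a hanging comma left before the closing brace
step2 = re.sub(r'(,})$', '}', step1)                      # '{"col1":1,"col3":"x"}'
print(step2)

Bear in mind that a purely textual approach like this can misfire if a string value happens to contain something that looks like ":null.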

Removing elements with NULLs from a JSON object in jQuery/JavaScript

// This function removes null or otherwise empty (falsy) values, recursing into nested objects
const removeEmptyOrNull = (obj) => {
  Object.keys(obj).forEach(k =>
    // recurse into nested objects, otherwise delete keys whose value is falsy (null, '', 0, false)
    (obj[k] && typeof obj[k] === 'object') && removeEmptyOrNull(obj[k]) ||
    (!obj[k] && obj[k] !== undefined) && delete obj[k]
  );
  return obj;
};

$(document).ready(function() {

  $('select').change(function() {

    //var formData = JSON.stringify($("#cstates").serializeArray());

    var states = { states : $("#cstates").val(), zips : $("#czips").val() };

    //var data = JSON.stringify(states).replaceAll(".*\": null(,)?\\r\\n", "");
    //{"states":null,"zips":["91941"]}
    //str.replaceAll(".*\": null(,)?\\r\\n", "");

    alert(JSON.stringify(states));
    $.ajax({
      type: "POST",
      url: "http://localhost:8080/api/campaign/stats",
      data: JSON.stringify( removeEmptyOrNull(states) ),
      cache: false,
      success: function(data){
        $("#resultarea").text(data);
      },
      dataType: "json",
      contentType : "application/json"
    });

  });
});

Remove empty elements from JSON in SQL Server

FOR JSON AUTO ignores null fields by default when INCLUDE_NULL_VALUES is not specified explicitly. From the SQL Server documentation:

To include null values in the JSON output of the FOR JSON clause,
specify the INCLUDE_NULL_VALUES option.

If you don't specify the INCLUDE_NULL_VALUES option, the JSON output
doesn't include properties for values that are null in the query
results.


Ignore Null Values

SELECT [Id], (CASE WHEN Name = '' THEN NULL ELSE Name END) AS Name
FROM test
FOR JSON AUTO

Include Null Values

SELECT [Id], (CASE WHEN Name = '' THEN NULL ELSE Name END) AS Name
FROM test
FOR JSON AUTO, INCLUDE_NULL_VALUES

Remove all null values

The following illustrates how to remove all the null-valued keys from a JSON object:

jq -n '{"a":1, "b": null, "c": null} | with_entries( select( .value != null ) )'
{
"a": 1
}

Alternatively, paths/0 can be used as follows:

. as $o | [paths[] | {(.) : ($o[.])} ] | add

By the way, del/1 can also be used to achieve the same result, e.g. using this filter:

reduce keys[] as $k (.; if .[$k] == null then del(.[$k]) else . end)

Or less obviously, but more succinctly:

del( .[ (keys - [paths[]])[] ] )

And for the record, here are two ways to use delpaths/1:

jq -n '{"a":1, "b": null, "c": null, "d":2} as $o
| $o
| delpaths( [ keys[] | select( $o[.] == null ) ] | map( [.]) )'


$ jq -n '{"a":1, "b": null, "c": null, "d":2}
| [delpaths((keys - paths) | map([.])) ] | add'

In both these last two cases, the output is the same:

{
  "a": 1,
  "d": 2
}

Pandas remove null values when to_json

The following gets close to what you want: essentially, we wrap the non-NaN values of each row in a list and then call to_json on the result:

In [136]:
df.apply(lambda x: [x.dropna()], axis=1).to_json()

Out[136]:
'{"0":[{"a":1.0,"b":4.0,"c":7.0}],"1":[{"b":5.0}],"2":[{"a":3.0}]}'

Creating a list is necessary here; otherwise pandas tries to align the result with your original df shape, which reintroduces the NaN values you want to avoid:

In [138]:
df.apply(lambda x: pd.Series(x.dropna()), axis=1).to_json()

Out[138]:
'{"a":{"0":1.0,"1":null,"2":3.0},"b":{"0":4.0,"1":5.0,"2":null},"c":{"0":7.0,"1":null,"2":null}}'

Also, calling list() on the result of dropna broadcasts the remaining values across the row to match the original shape, effectively filling the gaps:

In [137]:
df.apply(lambda x: list(x.dropna()), axis=1).to_json()

Out[137]:
'{"a":{"0":1.0,"1":5.0,"2":3.0},"b":{"0":4.0,"1":5.0,"2":3.0},"c":{"0":7.0,"1":5.0,"2":3.0}}'

Hide null values in output from JSON.stringify()

Thanks for the replies. I just realized that JSON.stringify() has a replacer parameter.

So I just added:

function replacer(key, value) {
  // Filtering out properties
  if (value === null) {
    return undefined;
  }
  return value;
}

document.getElementById('panel').innerHTML =
  '<pre>' +
  JSON.stringify(jsonRes[0], replacer, "\t") +
  '</pre>';

Create an array/JSON from columns excluding NULL values in BigQuery

Any idea why this would happen?

When you use to_json_string, all nulls become the string 'null'.

Use the query below instead:

select array_agg(a) as phones
from material,
unnest(json_extract_array(to_json_string([phone_1,phone_2,phone_3,phone_4,phone_5,phone_6]))) a
where a != 'null'

with output containing only the non-null phone values in each phones array.

C#/JSON What to do with a NULL value returned from a DB query?

There's a discussion about how to handle null values in JSON here:

Representing null in JSON

I also advise you to be aware of extra fields that you know will always be null or that you will not use (the "legacy" data you mentioned). They count toward your JSON's final size, so if you are sure they are not going to be used, just remove them.


