Why are 0d arrays in Numpy not considered scalar?
One should not think too hard about it. It's ultimately better for the mental health and longevity of the individual.
The curious situation with NumPy scalar types was born out of the fact that there is no graceful and consistent way to degrade a 1x1 matrix to scalar types. Even though mathematically they are the same thing, they are handled by very different code.
If you've been doing any amount of scientific code, ultimately you'd want things like max(a) to work on matrices of all sizes, even scalars. Mathematically, this is a perfectly sensible thing to expect. However, for programmers this means that whatever represents scalars in NumPy should have the .shape and .ndim attributes, so at least the ufuncs don't have to do explicit type checking on their inputs for the 21 possible scalar types in NumPy.
On the other hand, they should also work with existing Python libraries that do explicit type checks on scalar types. This is a dilemma, since a NumPy ndarray would have to individually change its type once it has been reduced to a scalar, and there is no way of knowing whether that has occurred without doing checks on every access. Actually going that route would probably make it ridiculously slow to work with by scalar-type standards.
The NumPy developers' solution is to give their scalar types the ndarray interface while inheriting from the corresponding Python scalars where possible, so that all scalars also have .shape, .ndim, .T, etc. The 1x1 matrix will still be there, but its use will be discouraged if you know you'll be dealing with a scalar. While this should work fine in theory, occasionally you can still see some places where they missed with the paint roller, and the ugly innards are exposed for all to see:
>>> from numpy import *
>>> a = array(1)
>>> b = int_(1)
>>> a.ndim
0
>>> b.ndim
0
>>> a[...]
array(1)
>>> a[()]
1
>>> b[...]
array(1)
>>> b[()]
1
There's really no reason why a[...] and a[()] should return different things, but they do. There are proposals in place to change this, but it looks like they forgot to finish the job for 1x1 arrays.
A potentially bigger, and possibly non-resolvable, issue is the fact that NumPy scalars are immutable. Therefore "spraying" a scalar into an ndarray, mathematically the adjoint operation of collapsing an array into a scalar, is a PITA to implement. You can't actually grow a NumPy scalar; it cannot by definition be cast into an ndarray, even though newaxis mysteriously works on it:
>>> b[0,1,2,3] = 1
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'numpy.int32' object does not support item assignment
>>> b[newaxis]
array([1])
In Matlab, growing the size of a scalar is a perfectly acceptable and brainless operation. In NumPy you have to stick a jarring a = array(a) everywhere you think you might start with a scalar and end up with an array. I understand why NumPy has to be this way to play nice with Python, but that doesn't change the fact that many new switchers are deeply confused about this. Some have explicit memories of struggling with this behaviour and eventually persevering, while others who are too far gone are generally left with some deep, shapeless mental scar that frequently haunts their most innocent dreams. It's an ugly situation for all.
Why a single Numpy array element is not a Python scalar?
There is a difference between elements of an array and the object you get when indexing one.
The array has a data buffer: a block of bytes that numpy manages with its own compiled code. Individual elements may be represented by 1, 4, 8, 16, etc. bytes.
In [478]: A=np.array([1,2,3])
In [479]: A.__array_interface__
Out[479]:
{'data': (167487856, False),
'descr': [('', '<i4')],
'shape': (3,),
'strides': None,
'typestr': '<i4',
'version': 3}
We can view the data as a list of bytes (displayed as characters):
In [480]: A.view('S1')
Out[480]:
array(['\x01', '', '', '', '\x02', '', '', '', '\x03', '', '', ''],
dtype='|S1')
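The same buffer can be inspected without a byte view. A small check (with the dtype pinned to '<i4' to match the output above, since the default integer size is platform-dependent) that the block really is itemsize * size contiguous bytes:

```python
import numpy as np

A = np.array([1, 2, 3], dtype='<i4')
print(A.itemsize, A.nbytes)       # 4 bytes per element, 12 bytes in total
print(A.tobytes()[:A.itemsize])   # raw little-endian bytes of the first element
```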
When you select an element of A, you get back a one-element array (or something like it):
In [491]: b=A[0]
In [492]: b.shape
Out[492]: ()
In [493]: b.__array_interface__
Out[493]:
{'__ref': array(1),
'data': (167480104, False),
'descr': [('', '<i4')],
'shape': (),
'strides': None,
'typestr': '<i4',
'version': 3}
The type is different, but b has most of the same attributes as A: shape, strides, mean, etc.
You have to use .item() to access the underlying 'scalar':
In [496]: b.item()
Out[496]: 1
In [497]: type(b.item())
Out[497]: int
So you can think of b as a scalar with a numpy wrapper. The __array_interface__ for b looks very much like that of np.array(1).
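One way to see the "scalar with a numpy wrapper" split directly is to check what indexing actually returns; a quick sketch:

```python
import numpy as np

A = np.array([1, 2, 3])
b = A[0]

print(type(b))                    # a NumPy scalar type, not ndarray
print(isinstance(b, np.generic))  # True: it sits in the scalar hierarchy
print(b.shape, b.ndim)            # () and 0 -- array-like attributes survive
print(type(b.item()))             # .item() unwraps to a plain Python int
```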
is there a pythonic way to change scalar and 0d-array to 1d array?
Use np.atleast_1d. This will work for any input (scalar or array):
def foo(a):
    a = np.atleast_1d(a)
    a[a < 5] = 0
    return a
Note though, that this will return a 1d array for a scalar input.
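Putting the function to work; a usage sketch (foo is repeated here so the snippet runs on its own):

```python
import numpy as np

def foo(a):
    # atleast_1d passes 1d-or-higher arrays through unchanged and
    # promotes scalars and 0d arrays to shape (1,).
    a = np.atleast_1d(a)
    a[a < 5] = 0
    return a

print(foo(3))                    # scalar in, 1d array out: [0]
print(foo(np.array([1, 7, 4])))  # elements below 5 zeroed: [0 7 0]
```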
What is the meaning of `numpy.array(value)`?
When you create a zero-dimensional array like np.array(3), you get an object that behaves as an array in 99.99% of situations. You can inspect the basic properties:
>>> x = np.array(3)
>>> x
array(3)
>>> x.ndim
0
>>> x.shape
()
>>> x[None]
array([3])
>>> type(x)
numpy.ndarray
>>> x.dtype
dtype('int32')
So far so good. The logic behind this is simple: you can process any array-like object the same way, regardless of whether it is a number, list or array, just by wrapping it in a call to np.array.
One thing to keep in mind is that when you index an array, the index tuple must have ndim or fewer elements. So you can't do:
>>> x[0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: too many indices for array
Instead, you have to use a zero-sized tuple (since x[] is invalid syntax):
>>> x[()]
3
You can also use the array as a scalar instead:
>>> y = x + 3
>>> y
6
>>> type(y)
numpy.int32
Adding two scalars produces a scalar instance of the dtype, not another array. That being said, you can use y from this example in exactly the same way you would x, 99.99% of the time, since NumPy scalar types support most of the ndarray interface. It does not matter that 3 is a Python int, since np.add will wrap it in an array regardless. y = x + x will yield identical results.
One difference between x and y in these examples is that x is not officially considered to be a scalar:
>>> np.isscalar(x)
False
>>> np.isscalar(y)
True
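The distinction and the overlap can be restated side by side; a sketch showing that the NumPy scalar still carries the array-style attributes even though only it passes np.isscalar:

```python
import numpy as np

x = np.array(3)  # 0d ndarray
y = x + 3        # NumPy scalar instance of the dtype

print(np.isscalar(x), np.isscalar(y))  # False True
print(x.ndim, y.ndim)                  # 0 0 -- both expose .ndim
print(type(x), type(y))                # ndarray vs. a np.generic subclass
```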
The indexing issue can potentially throw a monkey wrench in your plans to index any array-like object. You can easily get around it by supplying ndmin=1 as an argument to the constructor, or by using a reshape:
>>> x1 = np.array(3, ndmin=1)
>>> x1
array([3])
>>> x2 = np.array(3).reshape(-1)
>>> x2
array([3])
I generally recommend the former method, as it requires no prior knowledge of the dimensionality of the input.
Further reading:
- Why are 0d arrays in Numpy not considered scalar?
What is a scalar in NumPy?
A NumPy scalar is any object which is an instance of np.generic or whose type is in np.ScalarType:
In [12]: np.ScalarType
Out[13]:
(int,
float,
complex,
long,
bool,
str,
unicode,
buffer,
numpy.int16,
numpy.float16,
numpy.int8,
numpy.uint64,
numpy.complex192,
numpy.void,
numpy.uint32,
numpy.complex128,
numpy.unicode_,
numpy.uint32,
numpy.complex64,
numpy.string_,
numpy.uint16,
numpy.timedelta64,
numpy.bool_,
numpy.uint8,
numpy.datetime64,
numpy.object_,
numpy.int64,
numpy.float96,
numpy.int32,
numpy.float64,
numpy.int32,
numpy.float32)
This definition comes from looking at the source code for np.isscalar:
def isscalar(num):
    if isinstance(num, generic):
        return True
    else:
        return type(num) in ScalarType
Note that you can test if something is a scalar by using np.isscalar:
>>> np.isscalar(3.1)
True
>>> np.isscalar([3.1])
False
>>> np.isscalar(False)
True
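The isinstance(num, generic) branch can also be checked directly; a minimal sketch using only np.generic (np.ScalarType itself was removed from the namespace in newer NumPy releases, so it is left out here):

```python
import numpy as np

print(isinstance(np.float64(1.0), np.generic))  # True: NumPy's own scalar type
print(isinstance(np.array(1.0), np.generic))    # False: a 0d array, not a scalar
print(isinstance(1.0, np.generic))              # False: a plain Python float,
                                                # caught by the ScalarType branch
```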
How do we know what we know?
I like learning how people know what they know—more than the answers themselves. So let me try to explain where the above answer comes from.
Having the right tools can help you figure out things like this for yourself.
I found this out by using IPython. Using its TAB-completion feature, typing
In [19]: import numpy as np
In [20]: np.[TAB]
causes IPython to display all variables in the np module namespace. A search for the string "scalar" will lead you to np.ScalarType and np.isscalar. Typing
In [20]: np.isscalar?
(note the question mark at the end) prompts IPython to show you where np.isscalar is defined:
File: /data1/unutbu/.virtualenvs/dev/lib/python2.7/site-packages/numpy/core/numeric.py
which is how I got to the definition of isscalar. Alternatively, the NumPy documentation for isscalar has a link to the source code as well.
How to index 0-d array in Python?
This will work fine in NumPy >= 1.9 (not released as of this writing). On previous versions, you can work around it with an extra np.asarray call:
x[np.asarray(x > 0)] = 0
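For completeness, a sketch of the workaround on a 0d array (on a current NumPy, the plain x[x > 0] form works as well):

```python
import numpy as np

x = np.array(1.5)          # 0d array
x[np.asarray(x > 0)] = 0   # wrap the 0d boolean mask in np.asarray
print(x)                   # the single element has been zeroed
```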