Conversion test fails on non-x86 architectures (ARM64, PPC64)

When building tinyarray 1.2.3 on aarch64 or ppc64, the conversion test fails under pytest with the output below. No such problem occurs with tinyarray 1.2.2.

[   53s] =================================== FAILURES ===================================
[   53s] _______________________________ test_conversion ________________________________
[   53s] 
[   53s]     def test_conversion():
[   53s]         for src_dtype in dtypes:
[   53s]             for dest_dtype in dtypes:
[   53s]                 src = ta.zeros(3, src_dtype)
[   53s]                 tsrc = tuple(src)
[   53s]                 npsrc = np.array(tsrc)
[   53s]                 impossible = src_dtype is complex and dest_dtype in [int, float]
[   53s]                 for s in [src, tsrc, npsrc]:
[   53s]                     if impossible:
[   53s]                         raises(TypeError, ta.array, s, dest_dtype)
[   53s]                     else:
[   53s]                         dest = ta.array(s, dest_dtype)
[   53s]                         assert isinstance(dest[0], dest_dtype)
[   53s]                         assert src == dest
[   53s]     
[   53s]         # Check correct overflow detection.  We assume a typical architecture:
[   53s]         # sys.maxsize is also the maximum size of an integer held in a tinyarray
[   53s]         # array, and that Python floats are double-precision IEEE numbers.
[   53s]         for n in [10**100, -10**100, 123 * 10**20, -2 * sys.maxsize,
[   53s]                   sys.maxsize + 1, np.array(sys.maxsize + 1),
[   53s]                   -sys.maxsize - 2]:
[   53s]             raises(OverflowError, ta.array, n, int)
[   53s]     
[   53s]         # Check that values just below the threshold of overflow work.
[   53s]         for n in [sys.maxsize, np.array(sys.maxsize),
[   53s]                   -sys.maxsize - 1, np.array(-sys.maxsize - 1)]:
[   53s]             ta.array(n, int)
[   53s]     
[   53s]         # If tinyarray integers are longer than 32 bit, numbers around the maximal
[   53s]         # and minimal values cannot be represented exactly as double precision
[   53s]         # floating point numbers.  Check correct overflow detection also in this
[   53s]         # case.
[   53s]         n = sys.maxsize + 1
[   53s]         for dtype in [float, np.float64, np.float32]:
[   53s]             # The following assumes that n can be represented exactly.  This should
[   53s]             # be true for typical (all?) architectures.
[   53s]             assert dtype(n) == n
[   53s]             for factor in [1, 1.0001, 1.1, 2, 5, 123, 1e5]:
[   53s]     
[   53s]                 for x in [n, min(-n-1, np.nextafter(-n, -np.inf, dtype=dtype))]:
[   53s]                     x = dtype(factor) * dtype(x)
[   53s]                     raises(OverflowError, ta.array, x, int)
[   53s]                     if dtype is not float:
[   53s]                         # This solicitates the buffer interface.
[   53s]                         x = np.array(x)
[   53s]                         assert(x.dtype == dtype)
[   53s] >                       raises(OverflowError, ta.array, x, int)
[   53s] E                       Failed: DID NOT RAISE <class 'OverflowError'>
[   53s] 
[   53s] test_tinyarray.py:200: Failed
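
For reference, the failing branch seems to boil down to roughly the following standalone snippet. This is only a sketch of the loop body from the test above: the factor multiplier and the negative-value case are dropped, so the exact conditions under which the failure first triggers may differ.

    import sys
    import numpy as np
    import tinyarray as ta

    # Smallest positive value that must overflow a tinyarray integer
    # (assuming tinyarray integers are as wide as sys.maxsize).
    n = sys.maxsize + 1

    for dtype in (np.float64, np.float32):
        # Wrapping the value in a 0-d numpy array makes tinyarray go through
        # the buffer interface, which is the branch reported as failing above.
        x = np.array(dtype(n))
        try:
            ta.array(x, int)
            print(dtype.__name__, "did not raise OverflowError (bug reproduced)")
        except OverflowError:
            print(dtype.__name__, "raised OverflowError (expected)")

If the sketch reflects the test correctly, both iterations should raise OverflowError; in the failing aarch64/ppc64 builds the equivalent np.array case is the one reported as "DID NOT RAISE" in the traceback.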

Brief system info:

  • openSUSE Tumbleweed on aarch64 (also on ppc64)
  • Python 3.8.5
  • tinyarray 1.2.3
  • GCC 10.2.1
  • numpy 1.19.2