# GitHub syntax highlighting
pixi.lock linguist-language=YAML linguist-generated=true
*.DS_Store*
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
@@ -10,6 +12,7 @@ __pycache__/
.Python
env/
build/
build-install/
develop-eggs/
dist/
downloads/
@@ -24,66 +27,20 @@ var/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
.pytest_cache
coverage.xml
*,cover
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# IPython Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# dotenv
.env
# virtualenv
venv/
ENV/
# Spyder project settings
.spyderproject
# Rope project settings
.ropeproject
.idea/
# pixi environments
.pixi
*.egg-info
image: ghcr.io/prefix-dev/pixi:0.29.0
variables:
FF_USE_FASTZIP: "true"
CACHE_COMPRESSION_LEVEL: "fastest"
pre-commit:
variables:
PRE_COMMIT_HOME: $CI_PROJECT_DIR/.pre-commit-cache
script:
- pixi run -e precommit git config --global --add safe.directory $CI_PROJECT_DIR
- pixi run -e precommit check
cache:
key: "$CI_JOB_NAME"
paths:
- .pre-commit-cache
- .pixi
test:
script:
- pixi run test
cache:
key: "$CI_JOB_NAME"
paths:
- .pixi
- build
build:
script:
- pixi run -e publish uv build --sdist
- pixi run -e publish twine check dist/*
artifacts:
paths:
- dist
expire_in: "7 days"
cache:
key: "$CI_JOB_NAME"
paths:
- .pixi
publish to test pypi:
needs:
- test
- build
- pre-commit
rules:
- if: '$CI_COMMIT_TAG =~ /^v[0-9]+\.[0-9]+\.[0-9]+.*\+test$/' # vX.Y.Z.post1+test
script:
- pixi run -e publish twine upload -u __token__ -p $PYPI_TEST_TOKEN --repository testpypi dist/*
cache:
key: "$CI_JOB_NAME"
paths:
- .pixi
publish to pypi:
needs:
- test
- build
- pre-commit
rules:
- if: '$CI_COMMIT_TAG =~ /^v[0-9]+\.[0-9]+\.[0-9]+[^+]*$/' # No +test
script:
- pixi run -e publish twine upload -u __token__ -p $PYPI_TOKEN dist/*
cache:
key: "$CI_JOB_NAME"
paths:
- .pixi
repos:
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: "v0.6.4"
hooks:
- id: ruff
- id: ruff-format
- repo: https://github.com/PyCQA/doc8
rev: "v1.1.2"
hooks:
- id: doc8
- repo: https://github.com/codespell-project/codespell
rev: v2.3.0
hooks:
- id: codespell
stages: [commit]
additional_dependencies:
- tomli
# Prevent committing inline conflict markers
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
hooks:
- id: check-merge-conflict
args: [--assume-in-merge]
- id: check-yaml
- id: end-of-file-fixer
- id: trailing-whitespace
# Python-MUMPS Authors
Below is a list of the contributors to Python-MUMPS:
+ [Anton Akhmerov](<https://antonakhmerov.org>)
+ Christoph Groth
+ Kelvin Loh
+ [Bas Nijholt](<https://nijho.lt>)
+ [Joseph Weston](<https://joseph.weston.cloud>)
+ Michael Wimmer
+ Kostas Vilkelis
For a full list of contributors, run:
```bash
git log --pretty=format:"%an" | sort | uniq
```
BSD 3-Clause License
BSD 2-Clause License
Copyright (c) 2018 Python-MUMPS Authors
Copyright (c) 2017, Bas Nijholt
All rights reserved.
Redistribution and use in source and binary forms, with or without
@@ -13,10 +14,6 @@ modification, are permitted provided that the following conditions are met:
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
......
# Python-MUMPS
Python bindings for [MUMPS](http://mumps-solver.org/), a parallel sparse direct solver.
## Scope
This package provides Cython bindings to MUMPS as packaged by conda-forge. It
aims to wrap the full MUMPS sequential API. Its primary target OS is Linux.
Next steps include:
- Support for Windows and OSX
- Support for distributed (MPI) MUMPS
## Installation
`python-mumps` works with Python 3.10 and higher on Linux, Windows and Mac.
The recommended way to install `python-mumps` is using `mamba`/`conda`.
```bash
mamba install -c conda-forge python-mumps
```
`python-mumps` can also be installed from PyPI; however, this is a more involved procedure
that requires separately installing the MUMPS library and a C compiler.
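As a rough sketch of that route (assuming the sequential MUMPS library, its headers and a C compiler are already available on the system, e.g. from your distribution's package manager):

```bash
# Assumes MUMPS (sequential) development files and a C compiler are installed.
pip install python-mumps
```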
## Usage example
The following example shows how Python-MUMPS can be used to implement sparse diagonalization
with SciPy.
```python
import scipy.sparse.linalg as sla
from scipy.sparse import identity
import mumps


def sparse_diag(matrix, k, sigma, **kwargs):
    """Call sla.eigsh with mumps support.

    See scipy.sparse.linalg.eigsh for documentation.
    """

    class LuInv(sla.LinearOperator):
        def __init__(self, A):
            # Analyze and factorize A once with MUMPS; 'solve' then
            # applies A^-1 to each requested right-hand side.
            inst = mumps.Context()
            inst.analyze(A, ordering='pord')
            inst.factor(A)
            self.solve = inst.solve
            sla.LinearOperator.__init__(self, A.dtype, A.shape)

        def _matvec(self, x):
            return self.solve(x.astype(self.dtype))

    # Shift-invert mode: eigsh uses OPinv to apply (matrix - sigma * I)^-1.
    opinv = LuInv(matrix - sigma * identity(matrix.shape[0]))
    return sla.eigsh(matrix, k, sigma=sigma, OPinv=opinv, **kwargs)
```
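For instance, assuming the helper above is in scope, it can be called on any real symmetric sparse matrix; the small 1D Laplacian below is purely illustrative:

```python
import numpy as np
import scipy.sparse as sp

# Illustrative 1D Laplacian; any real symmetric sparse matrix works here.
n = 100
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Five eigenvalues closest to sigma=0.5, with MUMPS performing the
# shift-invert factorization behind the scenes.
evals, evecs = sparse_diag(lap, k=5, sigma=0.5)
print(np.sort(evals))
```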
## Development
### Pixi
For development, `python-mumps` recommends [pixi](https://pixi.sh/).
After installing pixi, use
```bash
pixi run test -v # (Pytest arguments go after test)
```
This will also install the necessary dependencies.
### pre-commit
`python-mumps` uses [pre-commit](https://pre-commit.com/) to enforce code style. After installing it, run
```bash
pre-commit install
```
or if you want to use pre-commit provided by pixi, run
```bash
pixi run pre-commit install
```
project(
'python-mumps',
'cython', 'c',
license: 'BSD-2-Clause',
license_files: 'LICENSE',
meson_version: '>= 1.1.0',
default_options: [
'buildtype=release',
'c_std=c99',
],
version: run_command('mumps/_version.py', check: true).stdout().strip(),
)
py = import('python').find_installation(pure: false)
cc = meson.get_compiler('c')
tempita = files('tempita.py')
fs = import('fs')
incdir_numpy = meson.get_external_property('numpy-include-dir', 'not-given')
if incdir_numpy == 'not-given'
incdir_numpy = run_command(py,
[
'-c',
'''import os
import numpy as np
try:
incdir = os.path.relpath(np.get_include())
except Exception:
incdir = np.get_include()
print(incdir)
'''
],
check: true
).stdout().strip()
else
_incdir_numpy_abs = incdir_numpy
endif
inc_np = include_directories(incdir_numpy)
numpy_nodepr_api = ['-DNPY_NO_DEPRECATED_API=NPY_1_9_API_VERSION']
np_dep = declare_dependency(include_directories: inc_np, compile_args: numpy_nodepr_api)
mumps_names = get_option('mumps_names')
mumps_libs = []
foreach mumps_name : mumps_names
if cc.find_library(mumps_name, required: false).found()
mumps_libs += cc.find_library(mumps_name)
else
# Fall back to the name without '_seq'.
mumps_libs += cc.find_library(mumps_name.substring(0, -4))
endif
endforeach
subdir('mumps')
option(
'mumps_names',
type: 'array',
value: ['dmumps_seq', 'zmumps_seq', 'cmumps_seq', 'smumps_seq'],
description: 'MUMPS library names to link against. Defaults to the names used by conda-forge.'
)
from .mumps import (
Context,
MUMPSError,
AnalysisStatistics,
FactorizationStatistics,
schur_complement,
nullspace,
possible_orderings,
orderings,
)
from ._version import __version__
__all__ = [
"Context",
"MUMPSError",
"AnalysisStatistics",
"FactorizationStatistics",
"schur_complement",
"nullspace",
"possible_orderings",
"orderings",
"__version__",
"test",
]
def test(verbose=True):
try:
from pytest import main
except ImportError:
raise RuntimeError("pytest is required to run the tests")
import os.path
return main(
[os.path.dirname(os.path.abspath(__file__)), "-s"] + (["-v"] if verbose else [])
)
test.__test__ = False
# Copyright 2011-2016 Kwant authors.
# Copyright 2018 Python-MUMPS Authors.
#
# This file is part of Python-MUMPS. It is subject to the license terms in the file
# LICENSE found in the top-level directory of this distribution. A list of
# Python-MUMPS authors can be found in the file AUTHORS.md at the top-level
# directory of this distribution and at
# https://gitlab.kwant-project.org/kwant/python-mumps.
"""Cython declarations for the C interface to Mumps."""
cdef extern from "mumps_c_types.h" nogil:
ctypedef int MUMPS_INT
ctypedef float SMUMPS_REAL
ctypedef float SMUMPS_COMPLEX
ctypedef double DMUMPS_REAL
ctypedef double DMUMPS_COMPLEX
ctypedef struct mumps_complex:
pass
ctypedef struct mumps_double_complex:
pass
ctypedef float CMUMPS_REAL
ctypedef mumps_complex CMUMPS_COMPLEX
ctypedef double ZMUMPS_REAL
ctypedef mumps_double_complex ZMUMPS_COMPLEX
{{for p in 'SDCZ'}}
cdef extern from "{{p.lower()}}mumps_c.h" nogil:
ctypedef struct {{p}}MUMPS_STRUC_C:
MUMPS_INT sym, par, job
MUMPS_INT comm_fortran
MUMPS_INT icntl[60]
{{p}}MUMPS_REAL cntl[15]
MUMPS_INT n
MUMPS_INT nz
MUMPS_INT *irn
MUMPS_INT *jcn
{{p}}MUMPS_COMPLEX *a
MUMPS_INT nrhs, lrhs
{{p}}MUMPS_COMPLEX *rhs
MUMPS_INT info[40]
MUMPS_INT infog[40]
{{p}}MUMPS_REAL rinfo[40]
{{p}}MUMPS_REAL rinfog[40]
MUMPS_INT nz_rhs
{{p}}MUMPS_COMPLEX *rhs_sparse
MUMPS_INT *irhs_sparse
MUMPS_INT *irhs_ptr
MUMPS_INT size_schur
MUMPS_INT *listvar_schur
{{p}}MUMPS_COMPLEX *schur
MUMPS_INT lredrhs
{{p}}MUMPS_COMPLEX *redrhs
cdef void {{p.lower()}}mumps_c({{p}}MUMPS_STRUC_C *)
{{endfor}}
# Copyright 2011-2016 Kwant authors.
# Copyright 2018 Python-MUMPS Authors.
#
# This file is part of Python-MUMPS. It is subject to the license terms in the file
# LICENSE found in the top-level directory of this distribution. A list of
# Python-MUMPS authors can be found in the file AUTHORS.md at the top-level
# directory of this distribution and at
# https://gitlab.kwant-project.org/kwant/python-mumps.
"""Low-level interface to Mumps.
Thin wrappers around the Mumps C API. At this stage there
are still separate interfaces for real vs. complex and single
vs. double precision.
End users should not use this interface directly unless they
need finer control than that provided by the 'mumps' interface.
"""
cimport numpy as np
import numpy as np
from mumps cimport _mumps as mumps
from mumps.fortran_helpers import assert_fortran_matvec, assert_fortran_mat
from cpython.pythread cimport (
PyThread_type_lock,
PyThread_allocate_lock,
PyThread_acquire_lock,
PyThread_release_lock,
PyThread_free_lock
)
int_dtype = np.int32
ctypedef np.float32_t SMUMPS_COMPLEX_NPT
ctypedef np.float64_t DMUMPS_COMPLEX_NPT
ctypedef np.complex64_t CMUMPS_COMPLEX_NPT
ctypedef np.complex128_t ZMUMPS_COMPLEX_NPT
SMUMPS_COMPLEX_NP = np.float32
DMUMPS_COMPLEX_NP = np.float64
CMUMPS_COMPLEX_NP = np.complex64
ZMUMPS_COMPLEX_NP = np.complex128
# Proxy classes for Python access to the control and info parameters of MUMPS
cdef class mumps_int_array:
cdef mumps.MUMPS_INT *array
def __init__(self):
self.array = NULL
def __getitem__(self, key):
return self.array[key - 1]
def __setitem__(self, key, value):
self.array[key - 1] = value
# workaround for the fact that cython cannot pass pointers to an __init__()
cdef make_mumps_int_array(mumps.MUMPS_INT *array):
wrapper = mumps_int_array()
wrapper.array = array
return wrapper
{{for p in 'SDCZ'}}
cdef class {{p.lower()}}mumps_real_array:
cdef mumps.{{p}}MUMPS_REAL *array
def __init__(self):
self.array = NULL
def __getitem__(self, key):
return self.array[key - 1]
def __setitem__(self, key, value):
self.array[key - 1] = value
cdef make_{{p.lower()}}mumps_real_array(mumps.{{p}}MUMPS_REAL *array):
wrapper = {{p.lower()}}mumps_real_array()
wrapper.array = array
return wrapper
{{endfor}}
#############################################################
{{for p in 'SDCZ'}}
cdef class {{p.lower()}}mumps:
cdef mumps.{{p}}MUMPS_STRUC_C params
cdef public mumps_int_array icntl
cdef public {{p.lower()}}mumps_real_array cntl
cdef public mumps_int_array info
cdef public mumps_int_array infog
cdef public {{p.lower()}}mumps_real_array rinfo
cdef public {{p.lower()}}mumps_real_array rinfog
cdef PyThread_type_lock lock
def __cinit__(self, verbose=False, sym=0):
self.lock = PyThread_allocate_lock()
self.params.job = -1
self.params.sym = sym
self.params.par = 1
self.params.comm_fortran = -987654 # MPI_COMM_WORLD, according to docs
with nogil:
mumps.{{p.lower()}}mumps_c(&self.params)
self.icntl = make_mumps_int_array(self.params.icntl)
self.cntl = make_{{p.lower()}}mumps_real_array(self.params.cntl)
self.info = make_mumps_int_array(self.params.info)
self.infog = make_mumps_int_array(self.params.infog)
self.rinfo = make_{{p.lower()}}mumps_real_array(self.params.rinfo)
self.rinfog = make_{{p.lower()}}mumps_real_array(self.params.rinfog)
def __init__(self, verbose=False, sym=0):
# no diagnostic output (MUMPS is very verbose normally)
if not verbose:
self.icntl[1] = 0
self.icntl[3] = 0
def __dealloc__(self):
if self.lock: # guard to ensure __dealloc__ operations are not called multiple times
self.params.job = -2
with nogil:
mumps.{{p.lower()}}mumps_c(&self.params)
PyThread_free_lock(self.lock)
self.lock = NULL
def call(self):
with nogil:
PyThread_acquire_lock(self.lock, 1)
mumps.{{p.lower()}}mumps_c(&self.params)
PyThread_release_lock(self.lock)
def _set_job(self, value):
self.params.job = value
def _get_job(self):
return self.params.job
job = property(_get_job, _set_job)
@property
def sym(self):
return self.params.sym
def set_assembled_matrix(self,
mumps.MUMPS_INT N,
np.ndarray[mumps.MUMPS_INT, ndim=1] i,
np.ndarray[mumps.MUMPS_INT, ndim=1] j,
np.ndarray[{{p}}MUMPS_COMPLEX_NPT, ndim=1] a):
self.params.n = N
self.params.nz = a.shape[0]
self.params.irn = <mumps.MUMPS_INT *>i.data
self.params.jcn = <mumps.MUMPS_INT *>j.data
self.params.a = <mumps.{{p}}MUMPS_COMPLEX *>a.data
def set_dense_rhs(self, np.ndarray rhs):
assert_fortran_matvec(rhs)
if rhs.dtype != {{p}}MUMPS_COMPLEX_NP:
raise ValueError("numpy array must be of dtype {}".format({{p}}MUMPS_COMPLEX_NP))
if rhs.ndim == 1:
self.params.nrhs = 1
else:
self.params.nrhs = rhs.shape[1]
self.params.lrhs = rhs.shape[0]
self.params.rhs = <mumps.{{p}}MUMPS_COMPLEX *>rhs.data
def set_sparse_rhs(self,
np.ndarray[mumps.MUMPS_INT, ndim=1] col_ptr,
np.ndarray[mumps.MUMPS_INT, ndim=1] row_ind,
np.ndarray[{{p}}MUMPS_COMPLEX_NPT, ndim=1] data):
if row_ind.shape[0] != data.shape[0]:
raise ValueError("Number of entries in row index and value "
"array differ!")
self.params.nz_rhs = data.shape[0]
self.params.nrhs = col_ptr.shape[0] - 1
self.params.rhs_sparse = <mumps.{{p}}MUMPS_COMPLEX *>data.data
self.params.irhs_sparse = <mumps.MUMPS_INT *>row_ind.data
self.params.irhs_ptr = <mumps.MUMPS_INT *>col_ptr.data
def set_schur(self,
np.ndarray[{{p}}MUMPS_COMPLEX_NPT, ndim=2, mode='c'] schur,
np.ndarray[mumps.MUMPS_INT, ndim=1] schur_vars):
if schur.shape[0] != schur.shape[1]:
raise ValueError("Schur matrix must be squared!")
if schur.shape[0] != schur_vars.shape[0]:
raise ValueError("Number of Schur variables must agree "
"with Schur complement size!")
self.params.size_schur = schur.shape[0]
self.params.schur = <mumps.{{p}}MUMPS_COMPLEX *>schur.data
self.params.listvar_schur = <mumps.MUMPS_INT *>schur_vars.data
def set_schur_rhs(self,
np.ndarray[{{p}}MUMPS_COMPLEX_NPT, ndim=1, mode='c'] rhs):
self.params.lredrhs = rhs.shape[0]
self.params.redrhs = <mumps.{{p}}MUMPS_COMPLEX *>rhs.data
{{endfor}}
#!/usr/bin/env python
try:
# setuptools_scm is not a runtime dependency. This is used when pip
# installing from a git repository.
from setuptools_scm import get_version
__version__ = get_version(root="..", relative_to=__file__)
except (ImportError, LookupError):
from importlib.metadata import version, PackageNotFoundError
try:
__version__ = version("python-mumps")
except PackageNotFoundError:
__version__ = "0.0.0+unknown"
if __name__ == "__main__":
print(__version__)
# Copyright 2011-2016 Anton Akhmerov, Christoph Groth, Michael Wimmer
# Copyright 2018 Python-MUMPS Authors
#
# This file is part of Python-MUMPS. It is subject to the license terms in the file
# LICENSE found in the top-level directory of this distribution. A list of
# Python-MUMPS authors can be found in the file AUTHORS.md at the top-level
# directory of this distribution and at
# https://gitlab.kwant-project.org/kwant/python-mumps.
import numpy as np
def prepare_for_fortran(overwrite, *args):
"""Convert arrays to Fortran format.
This function takes a number of array objects in `args` and converts them
to a format that can be directly passed to a Fortran function (Fortran
contiguous NumPy array). If the arrays have different data types, the
converted arrays are cast to a common compatible data type (one of NumPy's
`float32`, `float64`, `complex64`, `complex128` data types).
If `overwrite` is ``False``, a NumPy array that is already in the
correct format (Fortran contiguous, right data type) is nevertheless
copied. (Hence, overwrite = True does not imply that acting on the
converted arrays in the return values will overwrite the original arrays in
all cases -- it only does so if the original array was already in the
correct format; all other conversions require a copy anyway. In fact, that is the
same behavior as in SciPy, it is just not explicitly stated there.)
If an argument is ``None``, it is just passed through and not used to
determine the proper data type.
`prepare_for_fortran` returns a character indicating the proper
data type in LAPACK style 'SDCZ' and a list of
properly converted arrays.
"""
# Make sure we have NumPy arrays
mats = [None] * len(args)
for i in range(len(args)):
if args[i] is not None:
arr = np.asanyarray(args[i])
if not np.issubdtype(arr.dtype, np.number):
raise ValueError("Argument cannot be interpreted as a numeric array")
mats[i] = (arr, arr is not args[i] or overwrite)
else:
mats[i] = (None, True)
# First figure out common dtype
# Note: The return type of common_type is guaranteed to be a floating point
# kind.
dtype = np.common_type(*[arr for arr, ovwrt in mats if arr is not None])
if dtype == np.float32:
lapacktype = "s"
elif dtype == np.float64:
lapacktype = "d"
elif dtype == np.complex64:
lapacktype = "c"
elif dtype == np.complex128:
lapacktype = "z"
else:
raise AssertionError("Unexpected data type from common_type")
ret = [lapacktype]
for npmat, ovwrt in mats:
# Now make sure that the array is contiguous, and copy if necessary.
if npmat is not None:
if npmat.ndim == 2:
if not npmat.flags["F_CONTIGUOUS"]:
npmat = np.asfortranarray(npmat, dtype=dtype)
elif npmat.dtype != dtype:
npmat = npmat.astype(dtype)
elif not ovwrt:
# ugly here: copy() always makes a C-ordered array; there is
# no way to tell it to make a Fortran array.
npmat = np.asfortranarray(npmat.copy())
elif npmat.ndim == 1:
if not npmat.flags["C_CONTIGUOUS"]:
npmat = np.ascontiguousarray(npmat, dtype=dtype)
elif npmat.dtype != dtype:
npmat = npmat.astype(dtype)
elif not ovwrt:
npmat = np.asfortranarray(npmat.copy())
else:
raise ValueError("Dimensionality of array is not 1 or 2")
ret.append(npmat)
return tuple(ret)
def assert_fortran_mat(*mats):
"""Check if the input ndarrays are all proper Fortran matrices."""
# This is a workaround for a bug in NumPy version < 2.0,
# where 1x1 matrices do not have the F_Contiguous flag set correctly.
for mat in mats:
if (
mat is not None
and (mat.shape[0] > 1 or mat.shape[1] > 1)
and not mat.flags["F_CONTIGUOUS"]
):
raise ValueError("Input matrix must be Fortran contiguous")
def assert_fortran_matvec(*arrays):
"""Check if the input ndarrays are all proper Fortran matrices
or vectors."""
# This is a workaround for a bug in NumPy version < 2.0,
# where 1x1 matrices do not have the F_Contiguous flag set correctly.
for arr in arrays:
if arr.ndim not in (1, 2):
raise ValueError("Input must be either a vector or a matrix.")
# 1x1 matrices are exempt from the flag check (see the workaround above).
if not arr.flags["F_CONTIGUOUS"] and not (
arr.ndim == 2 and arr.shape[0] == 1 and arr.shape[1] == 1
):
raise ValueError("Input must be a Fortran ordered NumPy array")
templates = files('_mumps.pxd.in', '_mumps.pyx.in')
pyx_files = []
foreach template : templates
base_name = fs.name(template)
rendered_name = base_name.replace('.in', '')
pyx_file = custom_target(
rendered_name,
input: template,
output: rendered_name,
command: [py, tempita, '@INPUT@', '@OUTPUT@']
)
pyx_files += pyx_file
endforeach
py_files = files('__init__.py', 'mumps.py', 'fortran_helpers.py', '_version.py')
py.install_sources(py_files, subdir: 'mumps')
foreach py_file : py_files
fs.copyfile(py_file)
endforeach
py.extension_module('_mumps',
pyx_files,
dependencies: [np_dep, mumps_libs],
install: true,
subdir: 'mumps'
)
subdir('tests')
This diff is collapsed.
# Copyright 2011-2016 Kwant authors.
# Copyright 2018 Python-MUMPS Authors.
#
# This file is part of Python-MUMPS. It is subject to the license terms in the file
# LICENSE found in the top-level directory of this distribution. A list of
# Python-MUMPS authors can be found in the file AUTHORS.md at the top-level
# directory of this distribution and at
# https://gitlab.kwant-project.org/kwant/python-mumps.
import numpy as np
class _Random:
def __init__(self):
self._x = 13
def _set_seed(self, seed):
self._x = seed
def _randf(self):
# A very bad random number generator returning numbers between -1 and
# +1. Just for making some matrices, and being sure that they are the
# same on any architecture.
m = 2**16
a = 11929
c = 36491
self._x = (a * self._x + c) % m
return (float(self._x) / m - 0.5) * 2
def _randi(self):
# A very bad random number generator returning numbers between 0 and 20.
# Just for making some matrices, and being sure that they are the same
# on any architecture.
m = 2**16
a = 11929
c = 36491
self._x = (a * self._x + c) % m
return self._x % 21
def randmat(self, n, m, dtype):
mat = np.empty((n, m), dtype=dtype)
if issubclass(dtype, np.complexfloating):
for i in range(n):
for j in range(m):
mat[i, j] = self._randf() + 1j * self._randf()
elif issubclass(dtype, np.floating):
for i in range(n):
for j in range(m):
mat[i, j] = self._randf()
elif issubclass(dtype, np.integer):
for i in range(n):
for j in range(m):
mat[i, j] = self._randi()
return mat
def randvec(self, n, dtype):
vec = np.empty(n, dtype=dtype)
if issubclass(dtype, np.complexfloating):
for i in range(n):
vec[i] = self._randf() + 1j * self._randf()
elif issubclass(dtype, np.floating):
for i in range(n):
vec[i] = self._randf()
elif issubclass(dtype, np.integer):
for i in range(n):
vec[i] = self._randi()
return vec
py.install_sources(
files('__init__.py', '_test_utils.py', 'test_mumps.py'),
subdir: 'mumps/tests'
)
# Copyright 2011-2016 Kwant authors.
# Copyright 2018 Python-MUMPS Authors.
#
# This file is part of Python-MUMPS. It is subject to the license terms in the file
# LICENSE found in the top-level directory of this distribution. A list of
# Python-MUMPS authors can be found in the file AUTHORS.md at the top-level
# directory of this distribution and at
# https://gitlab.kwant-project.org/kwant/python-mumps.
import pytest
import numpy as np
import scipy.sparse as sp
import scipy.linalg as la
from mumps import Context, MUMPSError, schur_complement, nullspace
from ._test_utils import _Random
# Decimal places of precision per datatype. These limits have been determined
# heuristically by inspecting the upper error bound reported by MUMPS for
# the random matrices used here.
precisions = {
np.float32: 1, # yes, really only 1 decimal place of precision!
np.float64: 10,
np.complex64: 1,
np.complex128: 10,
}
dtypes = list(precisions.keys())
def assert_array_almost_equal(dtype, a, b):
np.testing.assert_almost_equal(a, b, decimal=precisions[dtype])
def assert_almost_equal(dtype, a, b):
np.testing.assert_almost_equal(a, b, decimal=precisions[dtype])
@pytest.mark.parametrize("dtype", dtypes, ids=str)
@pytest.mark.parametrize("mat_size", [2, 10, 100], ids=str)
def test_lu_with_dense(dtype, mat_size):
rand = _Random()
a = rand.randmat(mat_size, mat_size, dtype)
bmat = rand.randmat(mat_size, mat_size, dtype)
bvec = rand.randvec(mat_size, dtype)
ctx = Context()
ctx.factor(sp.coo_matrix(a))
xvec = ctx.solve(bvec)
xmat = ctx.solve(bmat)
assert_array_almost_equal(dtype, np.dot(a, xvec), bvec)
assert_array_almost_equal(dtype, np.dot(a, xmat), bmat)
# now "sparse" right hand side
xvec = ctx.solve(sp.csc_matrix(bvec.reshape(mat_size, 1)))
xmat = ctx.solve(sp.csc_matrix(bmat))
assert_array_almost_equal(dtype, np.dot(a, xvec), bvec.reshape(mat_size, 1))
assert_array_almost_equal(dtype, np.dot(a, xmat), bmat)
@pytest.mark.parametrize("dtype", dtypes, ids=str)
@pytest.mark.parametrize("mat_size", [5, 10], ids=str)
def test_schur_complement_with_dense(dtype, mat_size):
rand = _Random()
a = rand.randmat(mat_size, mat_size, dtype)
s = schur_complement(sp.coo_matrix(a), list(range(3)))
assert_array_almost_equal(dtype, np.linalg.inv(s), np.linalg.inv(a)[:3, :3])
@pytest.mark.parametrize("dtype", dtypes, ids=str)
@pytest.mark.parametrize("mat_size", [5, 10], ids=str)
@pytest.mark.parametrize("symmetric_matrix", [True, False], ids=str)
def test_schur_complement_solution(dtype, mat_size, symmetric_matrix):
rand = _Random()
a = rand.randmat(mat_size, mat_size, dtype)
if symmetric_matrix:
a = a + a.T
bvec = rand.randvec(mat_size, dtype)
ctx = Context()
ctx.set_matrix(a, symmetric=symmetric_matrix)
ctx.schur(range(3))
xvec = ctx.solve_schur(bvec)
assert_array_almost_equal(dtype, a @ xvec, bvec)
@pytest.mark.parametrize("dtype", dtypes, ids=str)
def test_factor_error(dtype):
"""Test that an error is raised if factor is asked to reuse missing analysis."""
a = sp.identity(10, dtype=dtype)
with pytest.raises(ValueError):
Context().factor(a, reuse_analysis=True)
@pytest.mark.parametrize("dtype", dtypes, ids=str)
def test_error_minus_19(dtype):
"""Test if MUMPSError -19 is properly caught by increasing memory"""
ctx = Context()
a = sp.eye(5000, dtype=dtype)
# Create the context so we can modify it for factorization
ctx.analyze(a)
# We ensure that this first call creates a -19 error by
# allocating only 1 MB for factorization
ctx.mumps_instance.icntl[23] = 1 # Memory upper bound set to 1MB
ctx.mumps_instance.icntl[14] = 1 # Initial memory relaxation to 1%
ctx.mumps_instance.job = 2
ctx.mumps_instance.call()
# ensure that we really don't have enough memory
assert ctx.mumps_instance.infog[1] == -19
# This call should not raise any errors as
# it would successfully allocate memory
Context().factor(a)
@pytest.mark.parametrize("dtype", dtypes, ids=str)
@pytest.mark.parametrize("mat_size", [20, 50], ids=str)
@pytest.mark.parametrize("symmetric_matrix", [True, False], ids=str)
def test_nullspace(dtype, mat_size, symmetric_matrix):
"""Tests the nullspace wrapper by creating a rank deficient matrix
with known deficiency
This test is to check whether the wrapper works. The algorithm
to obtain the nullspace basis within MUMPS is still rather
unclear with the use of these null pivot threshold. Suppose
that we have to live with this uncertainty, we at least create
a test case where we know for sure the right nullspace
dimensions and check if the subroutine has successfully
calculated the nullspace basis.
"""
if not symmetric_matrix:
test = sp.coo_matrix([[1, 2], [2, 4]])
try:
nullspace(test, False)
except MUMPSError as error:
if error.error == -37:
pytest.skip("Installed MUMPS does not support unsymmetric nullspace")
else:
raise
# We control the dimension of the nullspace by creating a random
# column vector `a` of size (mat_size x 1).
rand = _Random()
a = sp.coo_matrix(rand.randmat(mat_size, 1, dtype))
# We then create the matrix as the outer product A = a * b^T, where A has
# size (mat_size x mat_size). Because A is the outer product of two column
# vectors, its rank is always 1, so the dimension of the nullspace is
# always mat_size - 1. When the two column vectors are the same (b = a),
# A is symmetric.
if symmetric_matrix:
b = a.copy()
else:
b = sp.coo_matrix(rand.randmat(mat_size, 1, dtype))
A = a * b.transpose()
A_original = a * b.transpose()
# Remove the lower triangular elements in order to comply with the
# symmetric solver of MUMPS
if symmetric_matrix:
for i in range(a.shape[0]):
for j in range(i):
A[i, j] = 0
A.eliminate_zeros()
# Start with a small pivot_threshold value
pivot_threshold = 1e-5
maximum_threshold = 1.0
# We repeatedly increase the threshold until we get the full nullspace basis
while True:
X = nullspace(A, symmetric_matrix, pivot_threshold)
pivot_threshold += 2 * 1e-5
if (X.shape[1] == A.shape[0] - 1) or (pivot_threshold > maximum_threshold):
break
# We know that the row vector dimension of the returned nullspace
# has to be mat_size-1
assert X.shape[1] == A_original.shape[0] - 1
# Check that the nullspace indeed gives 0 when multiplied
# with the input matrix
assert_almost_equal(dtype, la.norm(A_original * X), 0.0)
@pytest.mark.parametrize("dtype", dtypes, ids=str)
def test_one_by_one(dtype):
"""Test a 1x1 matrix.
This is a regression test for
https://gitlab.kwant-project.org/kwant/python-mumps/-/issues/18
"""
ctx = Context()
ctx.factor(sp.eye(1, dtype=dtype))
assert_almost_equal(dtype, ctx.solve(np.array([1], dtype=dtype)), 1)
@pytest.mark.parametrize("dtype", dtypes, ids=str)
def test_zero_size_rhs(dtype):
"""Test that a 0xn rhs can be solved."""
a = np.random.randn(10, 10).astype(dtype)
ctx = Context()
ctx.factor(a)
rhs = np.zeros((10, 0), dtype=dtype)
assert_almost_equal(dtype, ctx.solve(rhs), rhs)
@pytest.mark.parametrize("dtype", dtypes, ids=str)
def test_symmetric_matrix(dtype):
"""Test that a symmetric matrix can be solved."""
n = 10
a = np.random.randn(n, n).astype(dtype)
if np.iscomplexobj(a):
a += (1j * np.random.randn(n, n)).astype(dtype)
a += a.T
ctx = Context()
ctx.set_matrix(a, symmetric=True)
ctx.factor()
rhs = np.random.randn(n, 1).astype(dtype)
assert_almost_equal(dtype, ctx.solve(rhs), la.solve(a, rhs))