Merge pull request #1489 from hackermd/patch-release-2.2.1
Patch release 2.2.1
darcymason committed Aug 27, 2021
2 parents 5f4629a + 873e9dd commit 45eb0d9
Showing 41 changed files with 156 additions and 86 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/master-push.yml
@@ -127,7 +127,7 @@ jobs:

- name: Install pymedphys
if: ${{ matrix.pymedphys-dep == 'pymedphys' }}
run: python -m pip install pymedphys[user,tests]>=0.31.0
run: python -m pip install --pre pymedphys && python -m pip install pymedphys[user,tests]
- name: Get PyMedPhys cache directory
if: ${{ matrix.pymedphys-dep == 'pymedphys' }}
id: pymedphys-cache-location
2 changes: 1 addition & 1 deletion build_tools/circle/push_doc.sh
@@ -3,7 +3,7 @@
# This script is meant to be called in the "deploy" step defined
# in .circleci/config.yml. See https://circleci.com/docs/2.0 for more details.

# We have three possibily workflows:
# We have three possible workflows:
# If the git branch is 'master' then we want to commit and merge the dev/
# docs on gh-pages
# If the git branch is [0-9].[0.9].X (i.e. 0.9.X, 1.0.X, 1.2.X, 41.21.X) then
2 changes: 1 addition & 1 deletion doc/old/ref_guide.rst
@@ -32,7 +32,7 @@ File Writing
DICOM files can also be written using *pydicom*. There are two ways to do this.

* The first is to use
:func:`~filewriter.dcmwrite` with a prexisting :class:`~dataset.FileDataset`
:func:`~filewriter.dcmwrite` with a preexisting :class:`~dataset.FileDataset`
(derived from :class:`~dataset.Dataset`) instance.
* The second is to use the :meth:`Dataset.save_as()<dataset.Dataset.save_as>`
method on a ``FileDataset`` or ``Dataset`` instance.
2 changes: 1 addition & 1 deletion doc/release_notes/v0.9.5.rst
@@ -38,6 +38,6 @@ Other
.....

* switch to Distribute for packaging
* preliminary work on Python 3 compatiblity
* preliminary work on Python 3 compatibility
* preliminary work on using sphinx for documentation
* preliminary work on better writing of files from scratch
2 changes: 1 addition & 1 deletion doc/release_notes/v1.2.0.rst
@@ -63,7 +63,7 @@ Fixes
* Fixed handling of private tags in repeater range (:issue:`689`)
* Fixed Pillow pixel data handler for non-JPEG2k transfer syntax (:issue:`663`)
* Fixed handling of elements with ambiguous VR (:pr:`700, 728`)
* Adapted pixel handlers where endianess is explicitly adapted (:issue:`704`)
* Adapted pixel handlers where endianness is explicitly adapted (:issue:`704`)
* Improve performance of bit unpacking (:pr:`715`)
* First character set no longer removed (:issue:`707`)
* Fixed RLE decoded data having the wrong byte order (:pr:`729`)
2 changes: 2 additions & 0 deletions doc/release_notes/v2.2.0.rst
@@ -116,3 +116,5 @@ Fixes
* Fixed handling of code extensions with person name component delimiter
(:pr:`1449`)
* Fixed bug decoding RBG jpg with APP14 marker due to change in Pillow (:pr:`1444`)
* Fixed decoding for `FloatPixelData` and `DoubleFloatPixelData` via
`pydicom.pixel_data_handlers.numpy_handler` (:issue:`1457`)
2 changes: 1 addition & 1 deletion doc/tutorials/filesets.rst
@@ -332,7 +332,7 @@ use the :meth:`~pydicom.fileset.FileSet.add_custom` method.
The :meth:`~pydicom.fileset.FileSet.add` method uses *pydicom's* default
directory record creation functions to create the necessary records based on
the SOP instance's attributes, such as *SOP Class UID* and *Modality*.
Occassionally they may fail when an element required by these functions
Occasionally they may fail when an element required by these functions
is empty or missing:
.. code-block:: python
2 changes: 1 addition & 1 deletion doc/tutorials/installation.rst
@@ -156,7 +156,7 @@ RLE images provided a suitable plugin is installed.

Using pip::

pip install pylibjpeg pylibjpeg-libjpeg pylibjpeg-openjpeg pylibjpeg-rle
pip install -U "pylibjpeg>=1.2" pylibjpeg-libjpeg pylibjpeg-openjpeg pylibjpeg-rle


.. _tut_install_dev:
1 change: 0 additions & 1 deletion examples/input_output/plot_read_fileset.py
@@ -7,7 +7,6 @@
"""

import os
from pathlib import Path
from tempfile import TemporaryDirectory
import warnings
2 changes: 1 addition & 1 deletion examples/input_output/plot_write_dicom.py
@@ -17,7 +17,7 @@
import datetime

import pydicom
from pydicom.dataset import Dataset, FileDataset, FileMetaDataset
from pydicom.dataset import FileDataset, FileMetaDataset

# Create some temporary filenames
suffix = '.dcm'
4 changes: 2 additions & 2 deletions pydicom/_version.py
@@ -1,9 +1,9 @@
"""Pure python package for DICOM medical file reading and writing."""
import re
from typing import Tuple, cast, Match
from typing import cast, Match


__version__: str = '2.2.0'
__version__: str = '2.2.1'

result = cast(Match[str], re.match(r'(\d+\.\d+\.\d+).*', __version__))
__version_info__ = tuple(result.group(1).split('.'))
34 changes: 17 additions & 17 deletions pydicom/charset.py
@@ -519,24 +519,24 @@ def encode_string(value: str, encodings: Sequence[str]) -> bytes:
return encoded
except UnicodeError:
continue
else:
# if we have more than one encoding, we retry encoding by splitting
# `value` into chunks that can be encoded with one of the encodings
if len(encodings) > 1:
try:
return _encode_string_parts(value, encodings)
except ValueError:
pass
# all attempts failed - raise or warn and encode with replacement
# characters
if config.enforce_valid_values:
# force raising a valid UnicodeEncodeError
value.encode(encodings[0])

warnings.warn("Failed to encode value with encodings: {} - using "
"replacement characters in encoded string"
.format(', '.join(encodings)))
return _encode_string_impl(value, encodings[0], errors='replace')
# if we have more than one encoding, we retry encoding by splitting
# `value` into chunks that can be encoded with one of the encodings
if len(encodings) > 1:
try:
return _encode_string_parts(value, encodings)
except ValueError:
pass
# all attempts failed - raise or warn and encode with replacement
# characters
if config.enforce_valid_values:
# force raising a valid UnicodeEncodeError
value.encode(encodings[0])

warnings.warn("Failed to encode value with encodings: {} - using "
"replacement characters in encoded string"
.format(', '.join(encodings)))
return _encode_string_impl(value, encodings[0], errors='replace')
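The change above dedents the retry/fallback logic out of a redundant `for...else`: since each loop iteration either returns or continues, the `else` clause was equivalent to straight-line code after the loop. The underlying pattern, try codecs in order, then warn and encode with replacement characters, can be sketched with the standard library alone (the function name and simplified logic here are illustrative, not pydicom's `encode_string`, which also retries by splitting the value into per-encoding chunks):

```python
import warnings
from typing import Sequence


def encode_with_fallback(value: str, encodings: Sequence[str]) -> bytes:
    """Try each codec in order; fall back to replacement characters."""
    for encoding in encodings:
        try:
            return value.encode(encoding)
        except UnicodeError:
            continue
    # Every encoding failed: warn, then encode with '?' replacements
    warnings.warn(
        "Failed to encode value with encodings: {} - using replacement "
        "characters in encoded string".format(", ".join(encodings))
    )
    return value.encode(encodings[0], errors="replace")


print(encode_with_fallback("héllo", ["ascii", "latin-1"]))  # b'h\xe9llo'
```

Here `"héllo"` fails under ASCII but succeeds under Latin-1; with only `["ascii"]` it would fall through to the replacement branch and yield `b'h?llo'`.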


def _encode_string_parts(value: str, encodings: Sequence[str]) -> bytes:
2 changes: 0 additions & 2 deletions pydicom/cli/main.py
@@ -8,15 +8,13 @@
"""

import argparse
import sys
import pkg_resources
import re
from typing import Tuple, cast, List, Any, Dict, Optional, Callable

from pydicom import dcmread
from pydicom.data.data_manager import get_testdata_file
from pydicom.dataset import Dataset
from pydicom.dataelem import DataElement


subparsers: Optional[argparse._SubParsersAction] = None
5 changes: 1 addition & 4 deletions pydicom/cli/show.py
@@ -2,11 +2,8 @@
"""Pydicom command line interface program for `pydicom show`"""

import argparse
import sys
from typing import Optional, List, Union, Callable, Any
from typing import Optional, List, Union, Callable

from pydicom import dcmread
from pydicom.data.data_manager import get_testdata_file
from pydicom.dataset import Dataset
from pydicom.cli.main import filespec_help, filespec_parser

5 changes: 1 addition & 4 deletions pydicom/config.py
@@ -5,10 +5,7 @@

import logging
import os
from typing import (
Optional, Callable, Dict, Any, TYPE_CHECKING, List, Union, MutableSequence,
Union
)
from typing import Optional, Dict, Any, TYPE_CHECKING

have_numpy = True
try:
2 changes: 1 addition & 1 deletion pydicom/data/__init__.py
@@ -9,7 +9,7 @@
__all__ = [
'fetch_data_files',
'get_charset_files',
'get_palette_files'
'get_palette_files',
'get_testdata_files',
'get_testdata_file',
]
2 changes: 1 addition & 1 deletion pydicom/data/download.py
@@ -185,7 +185,7 @@ def data_path_with_download(
filename : str
The filename of the file to return the path to.
check_hash : bool, optional
``True`` to perform a SHA256 checkum on the file, ``False`` otherwise.
``True`` to perform a SHA256 checksum on the file, ``False`` otherwise.
redownload_on_hash_mismatch : bool, optional
``True`` to redownload the file on checksum failure, ``False``
otherwise.
4 changes: 2 additions & 2 deletions pydicom/datadict.py
@@ -2,7 +2,7 @@
# -*- coding: utf-8 -*-
"""Access dicom dictionary information"""

from typing import Tuple, Optional, Dict, Union
from typing import Tuple, Optional, Dict

from pydicom.config import logger
from pydicom.tag import Tag, BaseTag, TagType
@@ -602,7 +602,7 @@ def private_dictionary_VM(tag: TagType, private_creator: str) -> str:
The tag for the element whose value multiplicity (VM) is being
retrieved.
private_creator : str
The name of the private creater.
The name of the private creator.
Returns
-------
10 changes: 5 additions & 5 deletions pydicom/dataset.py
@@ -329,7 +329,7 @@ class Dataset:
>>> def recurse(ds):
... for elem in ds:
... if elem.VR == 'SQ':
... [recurse(item) for item in elem]
... [recurse(item) for item in elem.value]
... else:
... # Do something useful with each DataElement
@@ -354,7 +354,7 @@ class Dataset:
is_little_endian : bool
Shall be set before writing with ``write_like_original=False``.
The :class:`Dataset` (excluding the pixel data) will be written using
the given endianess.
the given endianness.
is_implicit_VR : bool
Shall be set before writing with ``write_like_original=False``.
The :class:`Dataset` will be written using the transfer syntax with
@@ -382,7 +382,7 @@ def __init__(self, *args: _DatasetType, **kwargs: Any) -> None:

# the following read_XXX attributes are used internally to store
# the properties of the dataset after read from a file
# set depending on the endianess of the read dataset
# set depending on the endianness of the read dataset
self.read_little_endian: Optional[bool] = None
# set depending on the VR handling of the read dataset
self.read_implicit_vr: Optional[bool] = None
@@ -1148,7 +1148,7 @@ def get_item(
def _dataset_slice(self, slce: slice) -> "Dataset":
"""Return a slice that has the same properties as the original dataset.
That includes properties related to endianess and VR handling,
That includes properties related to endianness and VR handling,
and the specific character set. No element conversion is done, e.g.
elements of type ``RawDataElement`` are kept.
"""
@@ -1168,7 +1168,7 @@ def is_original_encoding(self) -> bool:
.. versionadded:: 1.1
This includes properties related to endianess, VR handling and the
This includes properties related to endianness, VR handling and the
(0008,0005) *Specific Character Set*.
"""
return (
2 changes: 1 addition & 1 deletion pydicom/dicomdir.py
@@ -2,7 +2,7 @@
"""Module for DicomDir class."""

import os
from typing import Optional, List, Dict, Union, BinaryIO, AnyStr
from typing import Optional, List, Dict, Union, BinaryIO
import warnings

from pydicom import config
1 change: 0 additions & 1 deletion pydicom/encoders/base.py
@@ -8,7 +8,6 @@
TYPE_CHECKING, Any
)

from pydicom.encaps import encapsulate
from pydicom.uid import (
UID, JPEGBaseline8Bit, JPEGExtended12Bit, JPEGLosslessP14, JPEGLosslessSV1,
JPEGLSLossless, JPEGLSNearLossless, JPEG2000Lossless, JPEG2000, RLELossless
2 changes: 1 addition & 1 deletion pydicom/encoders/gdcm.py
@@ -2,7 +2,7 @@

from typing import Any, cast

from pydicom.uid import RLELossless, ImplicitVRLittleEndian
from pydicom.uid import RLELossless

try:
import gdcm
5 changes: 2 additions & 3 deletions pydicom/encoders/native.py
@@ -1,9 +1,8 @@
# Copyright 2008-2021 pydicom authors. See LICENSE file for details.
"""Interface for *Pixel Data* encoding, not intended to be used directly."""

from itertools import groupby, islice
from itertools import groupby
from struct import pack
import sys
from typing import List, Any

from pydicom.uid import RLELossless
@@ -130,7 +129,7 @@ def _encode_row(src: bytes) -> bytes:
Notes
-----
* 2-byte repeat runs are always encoded as Replicate Runs rather than
only when not preceeded by a Literal Run as suggested by the Standard.
only when not preceded by a Literal Run as suggested by the Standard.
"""
out: List[int] = []
out_append = out.append
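The corrected note refers to DICOM's PackBits-style RLE: a Replicate Run is a header byte of 257 - N followed by the byte to repeat N times, while everything else is emitted as a Literal Run (header N - 1, then N raw bytes). A compact sketch that, as the note says, encodes even 2-byte repeats as Replicate Runs (illustrative, not pydicom's `_encode_row`):

```python
from itertools import groupby
from typing import List


def encode_row(src: bytes) -> bytes:
    """PackBits-style RLE: replicate runs for repeats, literal runs otherwise."""
    out: List[int] = []
    literal: List[int] = []

    def flush_literal() -> None:
        # A literal run is a header byte (length - 1) then up to 128 raw bytes
        while literal:
            chunk, literal[:] = literal[:128], literal[128:]
            out.append(len(chunk) - 1)
            out.extend(chunk)

    for byte, group in groupby(src):
        run = len(list(group))
        if run >= 2:  # even 2-byte repeats become replicate runs
            flush_literal()
            while run:
                length = min(run, 128)
                out.append(257 - length)  # header: "repeat next byte N times"
                out.append(byte)
                run -= length
        else:
            literal.append(byte)
    flush_literal()
    return bytes(out)


print(encode_row(b"\x05\x05\x05").hex())  # fe05
```

A run of three `0x05` bytes compresses to the header `0xFE` (257 - 3 = 254) plus the byte itself, while `b"\x01\x02\x03"` stays a literal run of three.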
8 changes: 5 additions & 3 deletions pydicom/filereader.py
@@ -9,7 +9,7 @@
import sys
from typing import (
BinaryIO, Union, Optional, List, Any, Callable, cast, MutableSequence,
Type, Iterator, Dict
Iterator, Dict
)
import warnings
import zlib
@@ -24,6 +24,7 @@
from pydicom.dataset import Dataset, FileDataset, FileMetaDataset
from pydicom.dicomdir import DicomDir
from pydicom.errors import InvalidDicomError
from pydicom.filebase import DicomFileLike
from pydicom.fileutil import (
read_undefined_length_value, path_from_pathlike, PathType, _unpack_tag
)
@@ -732,7 +733,8 @@ def read_preamble(fp: BinaryIO, force: bool) -> Optional[bytes]:


def _at_pixel_data(tag: BaseTag, VR: Optional[str], length: int) -> bool:
return cast(bool, tag == 0x7fe00010)
pixel_data_tags = {0x7fe00010, 0x7fe00009, 0x7fe00008}
return tag in pixel_data_tags
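The rewritten predicate above is what lets `stop_before_pixels` also halt on *Float Pixel Data* (7FE0,0008) and *Double Float Pixel Data* (7FE0,0009), not just *Pixel Data* (7FE0,0010). The check reduces to set membership on the 32-bit tag value; a standalone sketch with plain integers standing in for pydicom's `BaseTag`:

```python
# DICOM tags are 32-bit values: group in the high 16 bits, element in the low.
PIXEL_DATA_TAGS = {
    0x7FE00010,  # (7FE0,0010) Pixel Data
    0x7FE00009,  # (7FE0,0009) Double Float Pixel Data
    0x7FE00008,  # (7FE0,0008) Float Pixel Data
}


def make_tag(group: int, element: int) -> int:
    return (group << 16) | element


def at_pixel_data(tag: int) -> bool:
    """True once the parser reaches any of the pixel data elements."""
    return tag in PIXEL_DATA_TAGS


print(at_pixel_data(make_tag(0x7FE0, 0x0008)))  # True
print(at_pixel_data(make_tag(0x0008, 0x0018)))  # False
```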


def read_partial(
@@ -901,7 +903,7 @@ def read_partial(


def dcmread(
fp: Union[PathType, BinaryIO],
fp: Union[PathType, BinaryIO, DicomFileLike],
defer_size: Optional[Union[str, int, float]] = None,
stop_before_pixels: bool = False,
force: bool = False,
2 changes: 1 addition & 1 deletion pydicom/fileset.py
@@ -90,7 +90,7 @@ def generate_filename(
The starting index to use for the suffixes, (default ``0``).
i.e. if you want to start at ``'00010'`` then `start` should be ``10``.
alphanumeric : bool, optional
If ``False`` (defalt) then only generate suffixes using the characters
If ``False`` (default) then only generate suffixes using the characters
[0-9], otherwise use [0-9][A-Z].
Yields
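For context, `generate_filename` yields fixed-width 8-character File IDs from a counting index, in base 10 or, when `alphanumeric=True`, base 36 over [0-9A-Z]. An illustrative sketch consistent with the docstring (not necessarily pydicom's exact implementation):

```python
import string
from typing import Iterator, List


def generate_filename(prefix: str = "", start: int = 0,
                      alphanumeric: bool = False) -> Iterator[str]:
    """Yield 8-character File IDs: `prefix` plus a zero-padded suffix."""
    chars = string.digits + (string.ascii_uppercase if alphanumeric else "")
    width = 8 - len(prefix)
    idx = start
    while True:
        digits: List[str] = []
        n = idx
        while n:  # convert idx to base len(chars)
            n, rem = divmod(n, len(chars))
            digits.append(chars[rem])
        suffix = "".join(reversed(digits)) or chars[0]
        yield prefix + suffix.rjust(width, chars[0])
        idx += 1


gen = generate_filename(start=10)
print(next(gen))  # 00000010
```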
3 changes: 2 additions & 1 deletion pydicom/fileutil.py
@@ -7,6 +7,7 @@
from pydicom.misc import size_in_bytes
from pydicom.tag import TupleTag, Tag, SequenceDelimiterTag, ItemTag, BaseTag
from pydicom.datadict import dictionary_description
from pydicom.filebase import DicomFileLike

from pydicom.config import logger

@@ -411,7 +412,7 @@ def length_of_undefined_length(


def path_from_pathlike(
file_object: Union[PathType, BinaryIO]
file_object: Union[PathType, BinaryIO, DicomFileLike]
) -> Union[str, BinaryIO]:
"""Returns the path if `file_object` is a path-like object, otherwise the
original `file_object`.
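The widened annotation above lets the helper accept `DicomFileLike` objects alongside paths and binary streams. Its contract is simple dispatch: convert path-like input to a path string, pass anything else through untouched. A minimal stand-in using `os.fspath` (an illustrative equivalent, not necessarily the exact implementation):

```python
import io
import os
from pathlib import Path
from typing import Any


def path_from_pathlike(file_object: Any) -> Any:
    """Return a filesystem path for path-like input, else the object itself."""
    try:
        return os.fspath(file_object)
    except TypeError:  # not path-like: a stream, DicomFileLike, etc.
        return file_object


print(path_from_pathlike(Path("ct.dcm")))  # ct.dcm
stream = io.BytesIO(b"DICM")
print(path_from_pathlike(stream) is stream)  # True
```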
2 changes: 1 addition & 1 deletion pydicom/filewriter.py
@@ -830,7 +830,7 @@ def _write_dataset(
if any, has been written.
"""

# if we want to write with the same endianess and VR handling as
# if we want to write with the same endianness and VR handling as
# the read dataset we want to preserve raw data elements for
# performance reasons (which is done by get_item);
# otherwise we use the default converting item getter
3 changes: 1 addition & 2 deletions pydicom/jsonrep.py
@@ -3,9 +3,8 @@

import base64
from inspect import signature
import inspect
from typing import (
Callable, Optional, Union, Any, cast, Type, TypeVar, Dict, TYPE_CHECKING,
Callable, Optional, Union, Any, cast, Type, Dict, TYPE_CHECKING,
List
)
import warnings
1 change: 0 additions & 1 deletion pydicom/misc.py
@@ -3,7 +3,6 @@

from itertools import groupby
from pathlib import Path
import re
from typing import Optional, Union

