
The kwcoco Module


The main webpage for this project is: https://gitlab.kitware.com/computer-vision/kwcoco

The Kitware COCO module defines a variant of the Microsoft COCO format, originally developed for the “Common Objects in Context” object detection challenge. We are backwards compatible with the original module, but we also have improved implementations in several places, including segmentations and keypoints.

The main data structure in this module is largely based on the implementation in https://github.com/cocodataset/cocoapi. It uses the same efficient core indexing data structures, but in our implementation the indexing can optionally be turned off, and functions are silent by default (with the exception of long-running processes, which show progress by default). We also provide helper functions that add and remove images, categories, and annotations.

We do not reimplement the scoring code from pycocotools in this module. Instead, that functionality currently lives in netharn: https://gitlab.kitware.com/computer-vision/netharn/-/blob/master/netharn/metrics/detect_metrics.py We may move that file to either this repo or kwannot in the future. This module is focused on efficient access and modification of a COCO dataset.

The kwcoco CLI

After installing kwcoco, you will also have the kwcoco command line tool. This uses a scriptconfig / argparse CLI interface. Running kwcoco --help should provide a good starting point.

usage: kwcoco [-h] {stats,union,split,show,toydata} ...

The Kitware COCO CLI

positional arguments:
  {stats,union,split,show,toydata}
                        specify a command to run
    stats               Compute summary statistics about a COCO dataset
    union               Combine multiple COCO datasets into a single merged dataset.
    split               Split a single COCO dataset into two sub-datasets.
    show                Visualize a COCO image
    toydata             Create COCO toydata

optional arguments:
  -h, --help            show this help message and exit

This should help you inspect (via stats and show), combine (via union), and make training splits (via split) using the command line. It also ships with toydata, which generates a COCO file you can use for testing.
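
For example, a quick way to exercise the tool is to generate a toy dataset and then inspect it. The exact argument names can differ between versions, so treat the flags below as assumptions based on kwcoco's scriptconfig CLIs and check each subcommand's --help output for the authoritative list:

# See what arguments each subcommand accepts
kwcoco toydata --help
kwcoco stats --help

# A typical session (the --dst/--src flag names are assumptions; confirm with --help)
kwcoco toydata --dst=test.kwcoco.json
kwcoco stats --src=test.kwcoco.json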

Toy Data

Don’t have a dataset with you, but you still want to test out your algorithms? Try the kwcoco shapes demo dataset, which can generate an arbitrarily large dataset.

The toydata submodule renders simple objects on a noisy background (optionally with auxiliary channels) and provides bounding boxes, segmentations, and keypoint annotations. The following example illustrates a generated toy image with and without overlaid annotations.

https://i.imgur.com/Vk0zUH1.png
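
From Python, a small toy dataset can be built directly with the demo classmethod. A minimal sketch, assuming the 'shapes8' demo key (an eight-image variant of the shapes data) is available in your version:

>>> import kwcoco
>>> # Build an in-memory toy dataset of annotated random shapes
>>> # ('shapes8' is assumed here; other demo keys may exist)
>>> dset = kwcoco.CocoDataset.demo('shapes8')
>>> # Summarize what was generated
>>> print(dset.basic_stats())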

The CocoDataset object

The kwcoco.CocoDataset class is capable of dynamic addition and removal of categories, images, and annotations. It has better support for keypoints and segmentation formats than the original COCO module. Despite being written in Python, this data structure is reasonably efficient.

>>> import kwcoco
>>> import json
>>> import ubelt as ub
>>> # Create demo data
>>> demo = kwcoco.CocoDataset.demo()
>>> # could also use demo.dump / demo.dumps, but this is more explicit
>>> text = json.dumps(demo.dataset)
>>> with open('demo.json', 'w') as file:
>>>    file.write(text)

>>> # Read from disk
>>> self = kwcoco.CocoDataset('demo.json')

>>> # Add data
>>> cid = self.add_category('Cat')
>>> gid = self.add_image('new-img.jpg')
>>> aid = self.add_annotation(image_id=gid, category_id=cid, bbox=[0, 0, 100, 100])

>>> # Remove data
>>> self.remove_annotations([aid])
>>> self.remove_images([gid])
>>> self.remove_categories([cid])

>>> # Look at data
>>> print(ub.repr2(self.basic_stats(), nl=1))
>>> print(ub.repr2(self.extended_stats(), nl=2))
>>> print(ub.repr2(self.boxsize_stats(), nl=3))
>>> print(ub.repr2(self.category_annotation_frequency()))


>>> # Inspect data
>>> import kwplot
>>> kwplot.autompl()
>>> self.show_image(gid=1)

>>> # Access single-item data via imgs, cats, anns
>>> cid = 1
>>> self.cats[cid]
{'id': 1, 'name': 'astronaut', 'supercategory': 'human'}

>>> gid = 1
>>> self.imgs[gid]
{'id': 1, 'file_name': 'astro.png', 'url': 'https://i.imgur.com/KXhKM72.png'}

>>> aid = 3
>>> self.anns[aid]
{'id': 3, 'image_id': 1, 'category_id': 3, 'line': [326, 369, 500, 500]}

>>> # Access multi-item data via the annots and images helper objects
>>> aids = self.index.gid_to_aids[2]
>>> annots = self.annots(aids)

>>> print('annots = {}'.format(ub.repr2(annots, nl=1, sv=1)))
annots = <Annots(num=2)>

>>> annots.lookup('category_id')
[6, 4]

>>> annots.lookup('bbox')
[[37, 6, 230, 240], [124, 96, 45, 18]]

>>> # built-in conversions to efficient kwimage array data structures
>>> print(ub.repr2(annots.detections.data))
{
    'boxes': <Boxes(xywh,
                 array([[ 37.,   6., 230., 240.],
                        [124.,  96.,  45.,  18.]], dtype=float32))>,
    'class_idxs': np.array([5, 3], dtype=np.int64),
    'keypoints': <PointsList(n=2) at 0x7f07eda33220>,
    'segmentations': <PolygonList(n=2) at 0x7f086365aa60>,
}

>>> gids = list(self.imgs.keys())
>>> images = self.images(gids)
>>> print('images = {}'.format(ub.repr2(images, nl=1, sv=1)))
images = <Images(num=3)>

>>> images.lookup('file_name')
['astro.png', 'carl.png', 'stars.png']

>>> print('images.annots = {}'.format(images.annots))
images.annots = <AnnotGroups(n=3, m=3.7, s=3.9)>

>>> print('images.annots.cids = {!r}'.format(images.annots.cids))
images.annots.cids = [[1, 2, 3, 4, 5, 5, 5, 5, 5], [6, 4], []]

The JSON Spec

A COCO file is a json file that follows a particular spec. It is used for storing computer vision datasets: namely images, categories, and annotations. Images have an id and a file name, which holds a relative or absolute path to the image data. Images can also have auxiliary files (e.g. for depth masks, infrared, or motion). A category has an id, a name, and an optional supercategory. Annotations always have an id, an image-id, and a bounding box. Usually they also contain a category-id. Sometimes they also contain keypoints or segmentations.

An implementation and extension of the original MS-COCO API [1].

Extends the format to also include line annotations.

Dataset Spec:

dataset = {
    # these are object level categories
    'categories': [
        {
            'id': <int:category_id>,
            'name': <str:category_name>,
            'supercategory': str,  # optional

            # Note: this is the original way to specify keypoint
            # categories, but our implementation supports a more general
            # alternative schema
            "keypoints": [kpname_1, ..., kpname_K], # length <k> array of keypoint names
            "skeleton": [(kx_a1, kx_b1), ..., (kx_aE, kx_bE)], # list of edge pairs (of keypoint indices), defining connectivity of keypoints.
        },
        ...
    ],
    'images': [
        {
            'id': int, 'file_name': str
        },
        ...
    ],
    'annotations': [
        {
            'id': int,
            'image_id': int,
            'category_id': int,
            'bbox': [tl_x, tl_y, w, h],  # optional (xywh format)
            "score" : float,
            "caption": str,  # an optional text caption for this annotation
            "iscrowd" : <0 or 1>,  # denotes if the annotation covers a single object (0) or multiple objects (1)
            "keypoints" : [x1,y1,v1,...,xk,yk,vk], # or new dict-based format
            'segmentation': <RunLengthEncoding | Polygon>,  # formats are defined below
        },
        ...
    ],
    'licenses': [],
    'info': [],
}
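
As a tiny concrete instance of the spec above, the following dictionary is a complete dataset. A minimal sketch, assuming that passing a dict to the CocoDataset constructor behaves like passing the path of an equivalent json file:

>>> import kwcoco
>>> dataset = {
>>>     'categories': [{'id': 1, 'name': 'spam'}],
>>>     'images': [{'id': 1, 'file_name': 'image1.png'}],
>>>     'annotations': [
>>>         {'id': 1, 'image_id': 1, 'category_id': 1, 'bbox': [10, 10, 30, 40]},
>>>     ],
>>>     'licenses': [],
>>>     'info': [],
>>> }
>>> # passing the dict directly is assumed here; writing it to a json
>>> # file and passing the path is the more conservative route
>>> self = kwcoco.CocoDataset(dataset)
>>> print(self.basic_stats())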

Polygon:
    A flattened list of xy coordinates.
    [x1, y1, x2, y2, ..., xn, yn]

    or a list of flattened lists of xy coordinates if the connected components (CCs) are disjoint
    [[x1, y1, x2, y2, ..., xn, yn], [x1, y1, ..., xm, ym],]

    Note: the original COCO spec does not allow for holes in polygons.

    (PENDING) We also allow a non-standard dictionary encoding of polygons
        {'exterior': [(x1, y1)...],
         'interiors': [[(x1, y1), ...], ...]}

RunLengthEncoding:
    The RLE can be in a special bytes encoding or in a binary array
    encoding. We reuse the original C functions from [2]_ in
    `kwimage.structs.Mask` to provide a convenient way to abstract this
    rather esoteric bytes encoding.

    For pure python implementations see kwimage:
        Converting from an image to RLE can be done via kwimage.run_length_encoding
        Converting from RLE back to an image can be done via:
            kwimage.decode_run_length

        For compatibility with the COCO specs ensure the binary flags
        for these functions are set to true.
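
    To make the binary-array flavor concrete, the sketch below run-length
    encodes a small mask and decodes it back. It is purely illustrative: it
    works in row-major order on plain arrays, whereas the official COCO
    bytes encoding counts runs in column-major order and compresses them
    further; use kwimage or pycocotools for real data.

    import numpy as np

    def demo_encode_rle(mask):
        # Illustrative only: encode a 2D binary mask as run lengths
        # over the row-major flattened array.
        flat = mask.ravel(order='C')
        change = np.flatnonzero(np.diff(flat)) + 1           # where the value flips
        bounds = np.concatenate([[0], change, [flat.size]])  # run boundaries
        return {'first_value': int(flat[0]),
                'counts': np.diff(bounds).tolist(),
                'shape': mask.shape}

    def demo_decode_rle(rle):
        # Rebuild the mask by emitting alternating values for each run.
        value, runs = rle['first_value'], []
        for count in rle['counts']:
            runs.append(np.full(count, value, dtype=np.uint8))
            value = 1 - value
        return np.concatenate(runs).reshape(rle['shape'])

    mask = np.zeros((4, 6), dtype=np.uint8)
    mask[1:3, 2:5] = 1
    rle = demo_encode_rle(mask)
    assert np.array_equal(demo_decode_rle(rle), mask)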

Keypoints:
    (PENDING)
    Annotation keypoints may also be specified in this non-standard (but
    ultimately more general) way:

    'annotations': [
        {
            'keypoints': [
                {
                    'xy': <x1, y1>,
                    'visible': <0 or 1 or 2>,
                    'keypoint_category_id': <kp_cid>,
                    'keypoint_category': <kp_name, optional>,  # this can be specified instead of an id
                }, ...
            ]
        }, ...
    ],
    'keypoint_categories': [{
        'name': <str>,
        'id': <int>,  # an id for this keypoint category
        'supercategory': <kp_name>,  # name of coarser parent keypoint class (for hierarchical keypoints)
        'reflection_id': <kp_cid>,  # specify only if the keypoint id would be swapped with another keypoint type
    },...
    ]

    In this scheme the "keypoints" property of each annotation (which used
    to be a list of floats) is now specified as a list of dictionaries that
    specify each keypoint's location, id, and visibility explicitly. This
    allows for things like non-unique keypoints and partial keypoint
    annotations. It also removes the ordering requirement, which makes it
    simpler to keep track of each keypoint's class type.

    We also have a new top-level dictionary to specify all the possible
    keypoint categories.
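
    For example, a single annotation under this pending scheme, paired with
    its keypoint category table, might look like the following (the names,
    ids, and coordinates are invented purely for illustration):

    'annotations': [
        {
            'id': 1, 'image_id': 1, 'category_id': 1,  # hypothetical example values
            'bbox': [10, 10, 50, 80],
            'keypoints': [
                {'xy': [20, 30], 'visible': 2, 'keypoint_category_id': 1},
                {'xy': [40, 60], 'visible': 1, 'keypoint_category': 'right_eye'},
            ],
        },
    ],
    'keypoint_categories': [
        {'name': 'left_eye', 'id': 1, 'reflection_id': 2},
        {'name': 'right_eye', 'id': 2, 'reflection_id': 1},
    ],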

Auxiliary Channels:
    For multimodal or multispectral images it is possible to specify
    auxiliary channels in an image dictionary as follows:

    {
        'id': int, 'file_name': str,
        'channels': <spec>,  # a spec code that indicates the layout of these channels.
        'auxillary': [  # information about auxiliary channels
            {
                'file_name': str,
                'channels': <spec>
            }, ...  # can have many auxiliary channels with unique specs
        ]
    }
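
    A concrete image dictionary following this layout might look like the
    following (the file names and channel spec codes are placeholders
    chosen for illustration):

    {
        'id': 1,
        'file_name': 'frame_0001_rgb.png',   # hypothetical example values
        'channels': 'rgb',
        'auxillary': [
            {'file_name': 'frame_0001_disparity.png', 'channels': 'disparity'},
            {'file_name': 'frame_0001_ir.png', 'channels': 'ir'},
        ]
    }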

Converting your data to COCO

Assuming you have programmatic access to your dataset, you can easily convert it to a COCO file using a process similar to the following code:

# ASSUME INPUTS
# my_classes: a list of category names
# my_annots: a list of annotation objects with bounding boxes, images, and categories
# my_images: a list of image files.

my_images = [
    'image1.png',
    'image2.png',
    'image3.png',
]

my_classes = [
    'spam', 'eggs', 'ham', 'jam'
]

my_annots = [
    {'image': 'image1.png', 'box': {'tl_x':  2, 'tl_y':  3, 'br_x':  5, 'br_y':  7}, 'category': 'spam'},
    {'image': 'image1.png', 'box': {'tl_x': 11, 'tl_y': 13, 'br_x': 17, 'br_y': 19}, 'category': 'spam'},
    {'image': 'image3.png', 'box': {'tl_x': 23, 'tl_y': 29, 'br_x': 31, 'br_y': 37}, 'category': 'eggs'},
    {'image': 'image3.png', 'box': {'tl_x': 41, 'tl_y': 43, 'br_x': 47, 'br_y': 53}, 'category': 'spam'},
    {'image': 'image3.png', 'box': {'tl_x': 59, 'tl_y': 61, 'br_x': 67, 'br_y': 71}, 'category': 'jam'},
    {'image': 'image3.png', 'box': {'tl_x': 73, 'tl_y': 79, 'br_x': 83, 'br_y': 89}, 'category': 'spam'},
]

# The above is just an example input, it is left as an exercise for the
# reader to translate that to your own dataset.

import kwcoco
import kwimage

# A kwcoco.CocoDataset is simply an object that manages an underlying
# `dataset` json object. It contains methods to dynamically add, remove,
# and modify these data structures, efficient lookup tables, and many other
# conveniences for working and playing with vision datasets.
my_dset = kwcoco.CocoDataset()

for catname in my_classes:
    my_dset.add_category(name=catname)

for image_path in my_images:
    my_dset.add_image(file_name=image_path)

for annot in my_annots:
    # The index property provides fast lookups into the json data structure
    cat = my_dset.index.name_to_cat[annot['category']]
    img = my_dset.index.file_name_to_img[annot['image']]
    # One quirk of the COCO format is that boxes are stored in
    # <top-left-x, top-left-y, width, height> format.
    box = annot['box']
    # Use kwimage.Boxes to perform quick, reliable, and readable
    # conversions between common bounding box formats.
    tlbr = [box['tl_x'], box['tl_y'], box['br_x'], box['br_y']]
    xywh = kwimage.Boxes([tlbr], 'tlbr').toformat('xywh').data[0].tolist()
    my_dset.add_annotation(bbox=xywh, image_id=img['id'], category_id=cat['id'])

# Dump the underlying json `dataset` object to a file
my_dset.fpath = 'my-converted-dataset.mscoco.json'
my_dset.dump(my_dset.fpath, newlines=True)

# Dump the underlying json `dataset` object to a string
print(my_dset.dumps(newlines=True))
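
To sanity-check the conversion, the written file can be loaded back and summarized; the counts should line up with my_images, my_classes, and my_annots:

# Reload the converted dataset from disk and print its summary statistics
reloaded = kwcoco.CocoDataset(my_dset.fpath)
print(reloaded.basic_stats())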
