Annotations for the WIDER dataset: displaying the landmark annotations of every image, one by one
This article explains how to display images from the WIDER dataset together with their annotations.
For how to display COCO dataset images and check the quality of their annotations, please refer to the earlier article "Saving coco dataset annotations as single files and displaying every image's mask one by one".
Following the data layout of libfacedetection.train (https://github.com/ShiqiYu/libfacedetection.train), let's take a look at its annotation file trainset.json:
$ tree data/widerface
data/widerface
├── eval_tools
├── wider_face_split
├── WIDER_test
├── WIDER_train
├── WIDER_val
└── trainset.json
The trainset.json file is fairly large and takes quite a while to open directly, so here are a few excerpted entries to show the format:
{"images":[{"file_name": "./44--Aerobics/44_Aerobics_Aerobics_44_53.jpg", "height": 683, "width": 1024, "id": 0},{"file_name": "./44--Aerobics/44_Aerobics_Aerobics_44_28.jpg", "height": 736, "width": 1024, "id": 1},{"file_name": "./44--Aerobics/44_Aerobics_Aerobics_44_591.jpg", "height": 936, "width": 1024, "id": 2},{"file_name": "./44--Aerobics/44_Aerobics_Aerobics_44_46.jpg", "height": 520, "width": 1024, "id": 3},{"file_name": "./44--Aerobics/44_Aerobics_Aerobics_44_321.jpg", "height": 1392, "width": 1024, "id": 4},{"file_name": "./44--Aerobics/44_Aerobics_Aerobics_44_946.jpg", "height": 1001, "width": 1024, "id": 5},{"file_name": "./44--Aerobics/44_Aerobics_Aerobics_44_307.jpg", "height": 1139, "width": 1024, "id": 6},{"file_name": "./44--Aerobics/44_Aerobics_Aerobics_44_943.jpg", "height": 1156, "width": 1024, "id": 7},{"file_name": "./44--Aerobics/44_Aerobics_Aerobics_44_711.jpg", "height": 1656, "width": 1024, "id": 8},...{"file_name": "./27--Spa/27_Spa_Spa_27_110.jpg", "height": 307, "width": 1024, "id": 12860},{"file_name": "./27--Spa/27_Spa_Spa_27_879.jpg", "height": 1024, "width": 1024, "id": 12861},{"file_name": "./27--Spa/27_Spa_Spa_27_219.jpg", "height": 682, "width": 1024, "id": 12862}],"annotations":[{"segmentation": [[421.1, 133.2, 445.5, 125.5, 432.6, 145.4, 432.3, 159.9, 450.6, 153.4]], "area": 4356.76, "iscrowd": 0, "image_id": 0, "bbox": [411.5, 100.1, 59.6, 73.1], "category_id": 1, "id": 0, "ignore": 0},{"segmentation": [[537.6, 120.5, 546.8, 119.1, 541.2, 125.2, 539.9, 130.7, 547.1, 129.5]], "area": 585.66, "iscrowd": 0, "image_id": 0, "bbox": [534.4, 110.9, 22.7, 25.8], "category_id": 1, "id": 1, "ignore": 0},{"segmentation": [[104.5, 152.8, 116.5, 150.9, 109.9, 158.4, 107.3, 164.1, 117.7, 162.5]], "area": 953.2800000000001, "iscrowd": 0, "image_id": 0, "bbox": [99.3, 139.2, 28.8, 33.1], "category_id": 1, "id": 2, "ignore": 0},{"segmentation": [[823.0, 102.6, 832.1, 102.6, 826.2, 107.1, 823.6, 112.3, 830.9, 112.2]], "area": 497.96000000000004, "iscrowd": 0, "image_id": 0, "bbox": [819.0, 95.6, 21.1, 23.6], "category_id": 1, "id": 3, "ignore": 0},{"segmentation": [[955.1, 94.6, 971.3, 96.7, 961.0, 106.2, 954.8, 111.8, 966.8, 113.3]], "area": 1760.5900000000001, "iscrowd": 0, "image_id": 0, "bbox": [945.4, 76.2, 37.7, 46.7], "category_id": 1, "id": 4, "ignore": 0},{"segmentation": [[597.0, 121.9, 603.2, 121.7, 600.3, 125.3, 598.1, 127.7, 602.5, 127.5]], "area": 195.35999999999999, "iscrowd": 0, "image_id": 0, "bbox": [593.3, 115.6, 13.2, 14.8], "category_id": 1, "id": 5, "ignore": 0},{"segmentation": [[756.2, 115.3, 762.5, 115.1, 759.5, 118.2, 757.2, 121.5, 762.1, 121.3]], "area": 221.1, "iscrowd": 0, "image_id": 0, "bbox": [752.5, 108.7, 13.4, 16.5], "category_id": 1, "id": 6, "ignore": 0},{"segmentation": [[358.1, 117.4, 372.8, 116.2, 370.0, 123.7, 361.8, 134.2, 373.6, 133.4]], "area": 1874.5200000000002, "iscrowd": 0, "image_id": 0, "bbox": [340.3, 96.7, 38.1, 49.2], "category_id": 1, "id": 7, "ignore": 0},{"segmentation": [[379.5, 129.0, 389.0, 129.0, 384.7, 133.6, 380.9, 138.7, 387.5, 138.7]], "area": 552.16, "iscrowd": 0, "image_id": 0, "bbox": [373.1, 116.6, 20.3, 27.2], "category_id": 1, "id": 8, "ignore": 0},{"segmentation": [[709.0, 126.4, 713.6, 126.1, 711.4, 128.5, 709.7, 130.8, 713.6, 130.6]], "area": 124.23, "iscrowd": 0, "image_id": 0, "bbox": [706.5, 121.5, 10.1, 12.3], "category_id": 1, "id": 9, "ignore": 0},{"segmentation": [[572.8, 126.0, 581.4, 126.0, 576.7, 131.5, 574.6, 134.5, 580.3, 134.4]], "area": 371.05, "iscrowd": 0, "image_id": 0, "bbox": 
[569.0, 116.7, 18.1, 20.5], "category_id": 1, "id": 10, "ignore": 0},{"segmentation": [[931.5, 271.4, 943.9, 269.6, 936.0, 278.6, 935.2, 286.0, 944.5, 284.4]], "area": 1200.3700000000001, "iscrowd": 0, "image_id": 1, "bbox": [927.0, 255.3, 30.7, 39.1], "category_id": 1, "id": 11, "ignore": 0},{"segmentation": [[473.7, 339.6, 481.5, 339.6, 474.1, 345.6, 475.3, 351.7, 481.4, 351.4]], "area": 818.4, "iscrowd": 0, "image_id": 1, "bbox": [471.2, 326.5, 24.8, 33.0], "category_id": 1, "id": 12, "ignore": 0},{"segmentation": [[28.5, 326.5, 34.1, 323.5, 31.7, 333.3, 39.8, 337.5, 42.5, 334.7]], "area": 912.0300000000001, "iscrowd": 0, "image_id": 1, "bbox": [23.9, 310.9, 30.1, 30.3], "category_id": 1, "id": 13, "ignore": 0},{"segmentation": [[413.6, 314.8, 418.6, 315.9, 412.2, 319.4, 412.7, 324.3, 416.4, 324.8]], "area": 477.52, "iscrowd": 0, "image_id": 1, "bbox": [410.3, 305.7, 18.8, 25.4], "category_id": 1, "id": 14, "ignore": 0},{"segmentation": [[599.5, 263.3, 614.7, 263.0, 603.4, 272.6, 601.1, 283.4, 612.3, 283.2]], "area": 1978.4599999999998, "iscrowd": 0, "image_id": 1, "bbox": [594.8, 242.8, 37.4, 52.9], "category_id": 1, "id": 15, "ignore": 0},{"segmentation": [[828.2, 290.6, 839.6, 290.0, 832.2, 297.0, 830.0, 303.8, 838.8, 303.3]], "area": 865.8, "iscrowd": 0, "image_id": 1, "bbox": [824.6, 278.5, 26.0, 33.3], "category_id": 1, "id": 16, "ignore": 0},...{"segmentation": [[738.1, 329.4, 747.3, 329.3, 741.2, 334.4, 738.8, 340.2, 746.2, 340.1]], "area": 607.7599999999999, "iscrowd": 0, "image_id": 1, "bbox": [734.6, 319.3, 21.4, 28.4], "category_id": 1, "id": 17, "ignore": 0},{"segmentation": [[876.7, 210.8, 877.4, 211.2, 874.3, 218.4, 878.8, 225.6, 879.2, 225.7]], "area": 847.5, "iscrowd": 0, "image_id": 12862, "bbox": [872.6, 197.0, 22.6, 37.5], "category_id": 1, "id": 113613, "ignore": 0}],"categories":[{"name": "background", "id": 0},{"name": "face", "id": 1}]}
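If you just want to peek at a few entries without waiting for an editor to open the whole file, a minimal sketch like the following will do; the path to trainset.json is an assumption and may need adjusting to your own layout:

# peek_trainset.py -- minimal sketch for inspecting trainset.json (path is an assumption)
import json

with open('data/widerface/trainset.json', 'r') as f:  # adjust to where your trainset.json lives
    dataset = json.load(f)

print(len(dataset['images']), 'images,', len(dataset['annotations']), 'annotations')
print(dataset['categories'])
# print the first few entries, in the same format as the excerpt above
for img in dataset['images'][:3]:
    print(img)
for ann in dataset['annotations'][:3]:
    print(ann)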
This is the typical COCO data structure. I largely followed the original cocoapi, but made some sizeable changes, mainly:
- The display method changed: this time the annotations are simply drawn as polygons (data points or numbers would also work); a minimal sketch follows this list.
- Masks (RLE) are no longer needed, so there is no maskUtils and nothing to compile and install; plain Python is enough.
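To make the first change concrete, here is a minimal, self-contained sketch of the polygon display, using the first "segmentation" entry from the excerpt above. It mirrors the reshape-and-Polygon logic that showAnns in CoLandMark.py uses further down; the axis limits are chosen just for this example.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Polygon

# one "segmentation" entry from the excerpt above: 5 landmark points as [x1, y1, ..., x5, y5]
seg = [421.1, 133.2, 445.5, 125.5, 432.6, 145.4, 432.3, 159.9, 450.6, 153.4]
poly = np.array(seg).reshape((int(len(seg) / 2), 2))  # -> 5 rows of (x, y)

fig, ax = plt.subplots()
ax.add_collection(PatchCollection([Polygon(poly)], facecolor='none', edgecolors='r', linewidths=2))
ax.set_xlim(400, 470)
ax.set_ylim(170, 110)  # flipped y so it reads like image coordinates
plt.show()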
The source code is split into two files. The first is the main script; give it any name you like, say unknown.py:
# @MxTan from SpaceVision SZ Co.Ltd
#
# @brief display landmark annotations piece by piece
#
# ref. windows version cocoapi if you need a mask version
# https://github.com/philferriere/cocoapi
#
from CoLandMark import LandMark
import numpy as np
import skimage.io as io  # conda install scikit-image
import json
import os
import matplotlib as mpl
mpl.use('TkAgg')
import pylab
import matplotlib.rcsetup as rcsetup

pylab.rcParams['figure.figsize'] = (8.0, 10.0)

dataDir = 'D:/vsAI/libfacedetectiontrain/data/widerface/WIDER_train/images'
annFile = 'trainset.json'

# initialize COCO api for instance annotations
coco = LandMark(annFile)

# display COCO categories
catIds = coco.getCatIds()
cats = coco.loadCats(catIds)
nms = [cat['name'] for cat in cats]
print('COCO format categories: \n{}\n'.format(' '.join(nms)))

# recursively display all images and its masks
imgIds = coco.getImgIds()
for id in imgIds:
    mpl.pyplot.clf()  # put a stop breakpoint here, each cycle you will see a marked image
    annIds = coco.getAnnIds([id], catIds=catIds, iscrowd=None)
    anns = coco.loadAnns(annIds)
    imgIds = coco.getImgIds(imgIds=[id])
    img = coco.loadImgs(imgIds[0])[0]

    # ----- save separate image ----
    # file_name_ext = './WIDER_train/images/' + img['file_name']
    # (filename, extension) = os.path.splitext(file_name_ext)
    # file_path = "coco/" + filename + ".json"
    # data = {"annotations": anns}
    # with open(file_path, 'w') as result_file:
    #     json.dump(data, result_file)

    # ---- display image ----
    file_path = '{}/{}'.format(dataDir, img['file_name'])
    I = io.imread(file_path)
    # NOTE: the above method is equivalent to the following format
    # I = io.imread('%s/%s'%(dataDir,img['file_name']))
    mpl.pyplot.imshow(I)
    mpl.pyplot.axis('off')
    coco.showAnns(anns)
The annotation-tool file is named CoLandMark.py (it was originally COCO.py), and the class inside it is renamed LandMark, to avoid a clash when the original COCO class is used at the same time.
__author__ = 'tylin'
__version__ = '2.0'
# A copy from CocoApi, but some modifications are made to cope with the landmark display
#
# An alternative import lib for landmark points (NO area rle code required, so we removed the mask part)
# The following API functions are defined:
#  COCO       - COCO api class that loads COCO annotation file and prepare data structures.
#  decodeMask - Decode binary mask M encoded via run-length encoding.
#  encodeMask - Encode binary mask M using run-length encoding.
#  getAnnIds  - Get ann ids that satisfy given filter conditions.
#  getCatIds  - Get cat ids that satisfy given filter conditions.
#  getImgIds  - Get img ids that satisfy given filter conditions.
#  loadAnns   - Load anns with the specified ids.
#  loadCats   - Load cats with the specified ids.
#  loadImgs   - Load imgs with the specified ids.
#  annToMask  - Convert segmentation in an annotation to binary mask.
#  showAnns   - Display the specified annotations.
#  loadRes    - Load algorithm results and create API for accessing them.
#  download   - Download COCO images from mscoco.org server.
# Throughout the API "ann"=annotation, "cat"=category, and "img"=image.
# Help on each functions can be accessed by: "help COCO>function".
# See also COCO>decodeMask,
# COCO>encodeMask, COCO>getAnnIds, COCO>getCatIds,
# COCO>getImgIds, COCO>loadAnns, COCO>loadCats,
# COCO>loadImgs, COCO>annToMask, COCO>showAnns

# Microsoft COCO Toolbox.      version 2.0
# Data, paper, and tutorials available at:  http://mscoco.org/
# Code written by Piotr Dollar and Tsung-Yi Lin, 2014.
# Licensed under the Simplified BSD License [see bsd.txt]

import json
import time
import numpy as np
import copy
import itertools
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from matplotlib.patches import Polygon
import os
from collections import defaultdict
import sys
PYTHON_VERSION = sys.version_info[0]
if PYTHON_VERSION == 2:
    from urllib import urlretrieve
elif PYTHON_VERSION == 3:
    from urllib.request import urlretrieve


def _isArrayLike(obj):
    return hasattr(obj, '__iter__') and hasattr(obj, '__len__')


class LandMark:
    def __init__(self, annotation_file=None):
        """
        Constructor of Microsoft COCO helper class for reading and visualizing annotations.
        :param annotation_file (str): location of annotation file
        :param image_folder (str): location to the folder that hosts images.
        :return:
        """
        # load dataset
        self.dataset, self.anns, self.cats, self.imgs = dict(), dict(), dict(), dict()
        self.imgToAnns, self.catToImgs = defaultdict(list), defaultdict(list)
        if not annotation_file == None:
            print('loading annotations into memory...')
            tic = time.time()
            dataset = json.load(open(annotation_file, 'r'))
            assert type(dataset) == dict, 'annotation file format {} not supported'.format(type(dataset))
            print('Done (t={:0.2f}s)'.format(time.time() - tic))
            self.dataset = dataset
            self.createIndex()

    def createIndex(self):
        # create index
        print('creating index...')
        anns, cats, imgs = {}, {}, {}
        imgToAnns, catToImgs = defaultdict(list), defaultdict(list)
        if 'annotations' in self.dataset:
            for ann in self.dataset['annotations']:
                imgToAnns[ann['image_id']].append(ann)
                anns[ann['id']] = ann

        if 'images' in self.dataset:
            for img in self.dataset['images']:
                imgs[img['id']] = img

        if 'categories' in self.dataset:
            for cat in self.dataset['categories']:
                cats[cat['id']] = cat

        if 'annotations' in self.dataset and 'categories' in self.dataset:
            for ann in self.dataset['annotations']:
                catToImgs[ann['category_id']].append(ann['image_id'])

        print('index created!')

        # create class members
        self.anns = anns
        self.imgToAnns = imgToAnns
        self.catToImgs = catToImgs
        self.imgs = imgs
        self.cats = cats

    def info(self):
        """
        Print information about the annotation file.
        :return:
        """
        for key, value in self.dataset['info'].items():
            print('{}: {}'.format(key, value))

    def getAnnIds(self, imgIds=[], catIds=[], areaRng=[], iscrowd=None):
        """
        Get ann ids that satisfy given filter conditions. default skips that filter
        :param imgIds  (int array)     : get anns for given imgs
               catIds  (int array)     : get anns for given cats
               areaRng (float array)   : get anns for given area range (e.g. [0 inf])
               iscrowd (boolean)       : get anns for given crowd label (False or True)
        :return: ids (int array)       : integer array of ann ids
        """
        imgIds = imgIds if _isArrayLike(imgIds) else [imgIds]
        catIds = catIds if _isArrayLike(catIds) else [catIds]

        if len(imgIds) == len(catIds) == len(areaRng) == 0:
            anns = self.dataset['annotations']
        else:
            if not len(imgIds) == 0:
                lists = [self.imgToAnns[imgId] for imgId in imgIds if imgId in self.imgToAnns]
                anns = list(itertools.chain.from_iterable(lists))
            else:
                anns = self.dataset['annotations']
            anns = anns if len(catIds) == 0 else [ann for ann in anns if ann['category_id'] in catIds]
            anns = anns if len(areaRng) == 0 else [ann for ann in anns if ann['area'] > areaRng[0] and ann['area'] < areaRng[1]]
        if not iscrowd == None:
            ids = [ann['id'] for ann in anns if ann['iscrowd'] == iscrowd]
        else:
            ids = [ann['id'] for ann in anns]
        return ids

    def getCatIds(self, catNms=[], supNms=[], catIds=[]):
        """
        filtering parameters. default skips that filter.
        :param catNms (str array)  : get cats for given cat names
        :param supNms (str array)  : get cats for given supercategory names
        :param catIds (int array)  : get cats for given cat ids
        :return: ids (int array)   : integer array of cat ids
        """
        catNms = catNms if _isArrayLike(catNms) else [catNms]
        supNms = supNms if _isArrayLike(supNms) else [supNms]
        catIds = catIds if _isArrayLike(catIds) else [catIds]

        if len(catNms) == len(supNms) == len(catIds) == 0:
            cats = self.dataset['categories']
        else:
            cats = self.dataset['categories']
            cats = cats if len(catNms) == 0 else [cat for cat in cats if cat['name'] in catNms]
            cats = cats if len(supNms) == 0 else [cat for cat in cats if cat['supercategory'] in supNms]
            cats = cats if len(catIds) == 0 else [cat for cat in cats if cat['id'] in catIds]
        ids = [cat['id'] for cat in cats]
        return ids

    def getImgIds(self, imgIds=[], catIds=[]):
        '''
        Get img ids that satisfy given filter conditions.
        :param imgIds (int array) : get imgs for given ids
        :param catIds (int array) : get imgs with all given cats
        :return: ids (int array)  : integer array of img ids
        '''
        imgIds = imgIds if _isArrayLike(imgIds) else [imgIds]
        catIds = catIds if _isArrayLike(catIds) else [catIds]

        if len(imgIds) == len(catIds) == 0:
            ids = self.imgs.keys()
        else:
            ids = set(imgIds)
            for i, catId in enumerate(catIds):
                if i == 0 and len(ids) == 0:
                    ids = set(self.catToImgs[catId])
                else:
                    ids &= set(self.catToImgs[catId])
        return list(ids)

    def loadAnns(self, ids=[]):
        """
        Load anns with the specified ids.
        :param ids (int array)       : integer ids specifying anns
        :return: anns (object array) : loaded ann objects
        """
        if _isArrayLike(ids):
            return [self.anns[id] for id in ids]
        elif type(ids) == int:
            return [self.anns[ids]]

    def loadCats(self, ids=[]):
        """
        Load cats with the specified ids.
        :param ids (int array)       : integer ids specifying cats
        :return: cats (object array) : loaded cat objects
        """
        if _isArrayLike(ids):
            return [self.cats[id] for id in ids]
        elif type(ids) == int:
            return [self.cats[ids]]

    def loadImgs(self, ids=[]):
        """
        Load anns with the specified ids.
        :param ids (int array)       : integer ids specifying img
        :return: imgs (object array) : loaded img objects
        """
        if _isArrayLike(ids):
            return [self.imgs[id] for id in ids]
        elif type(ids) == int:
            return [self.imgs[ids]]

    def showAnns(self, anns):
        """
        Display the specified annotations.
        :param anns (array of object): annotations to display
        :return: None
        """
        if len(anns) == 0:
            return 0
        if 'segmentation' in anns[0] or 'keypoints' in anns[0]:
            datasetType = 'instances'
        elif 'caption' in anns[0]:
            datasetType = 'captions'
        else:
            raise Exception('datasetType not supported')
        if datasetType == 'instances':
            #plt.clf()  # clear the foreground image
            #plt.cla()  # clear the axis
            ax = plt.gca()
            ax.set_autoscale_on(False)
            polygons = []
            color = []
            for ann in anns:
                c = (np.random.random((1, 3))*0.6+0.4).tolist()[0]
                if 'segmentation' in ann:
                    if type(ann['segmentation']) == list:
                        # polygon
                        for seg in ann['segmentation']:
                            poly = np.array(seg).reshape((int(len(seg)/2), 2))
                            polygons.append(Polygon(poly))
                            color.append(c)
                    #else:
                    #    # mask
                    #    t = self.imgs[ann['image_id']]
                    #    if type(ann['segmentation']['counts']) == list:
                    #        rle = maskUtils.frPyObjects([ann['segmentation']], t['height'], t['width'])
                    #    else:
                    #        rle = [ann['segmentation']]
                    #    m = maskUtils.decode(rle)
                    #    img = np.ones( (m.shape[0], m.shape[1], 3) )
                    #    if ann['iscrowd'] == 1:
                    #        color_mask = np.array([2.0,166.0,101.0])/255
                    #    if ann['iscrowd'] == 0:
                    #        color_mask = np.random.random((1, 3)).tolist()[0]
                    #    for i in range(3):
                    #        img[:,:,i] = color_mask[i]
                    #    ax.imshow(np.dstack( (img, m*0.5) ))
                if 'keypoints' in ann and type(ann['keypoints']) == list:
                    # turn skeleton into zero-based index
                    sks = np.array(self.loadCats(ann['category_id'])[0]['skeleton'])-1
                    kp = np.array(ann['keypoints'])
                    x = kp[0::3]
                    y = kp[1::3]
                    v = kp[2::3]
                    for sk in sks:
                        if np.all(v[sk]>0):
                            plt.plot(x[sk],y[sk], linewidth=3, color=c)
                    plt.plot(x[v>0], y[v>0],'o',markersize=8, markerfacecolor=c, markeredgecolor='k',markeredgewidth=2)
                    plt.plot(x[v>1], y[v>1],'o',markersize=8, markerfacecolor=c, markeredgecolor=c, markeredgewidth=2)
            #p = PatchCollection(polygons, facecolor=color, linewidths=0, alpha=0.4)
            #ax.add_collection(p)
            p = PatchCollection(polygons, facecolor='none', edgecolors=color, linewidths=2)
            ax.add_collection(p)
        elif datasetType == 'captions':
            for ann in anns:
                print(ann['caption'])

    def loadRes(self, resFile):
        """
        Load result file and return a result api object.
        :param   resFile (str)     : file name of result file
        :return: res (obj)         : result api object
        """
        res = LandMark()
        res.dataset['images'] = [img for img in self.dataset['images']]

        print('Loading and preparing results...')
        tic = time.time()
        # Check result type in a way compatible with Python 2 and 3.
        if PYTHON_VERSION == 2:
            is_string = isinstance(resFile, basestring)  # Python 2
        elif PYTHON_VERSION == 3:
            is_string = isinstance(resFile, str)  # Python 3
        if is_string:
            anns = json.load(open(resFile))
        elif type(resFile) == np.ndarray:
            anns = self.loadNumpyAnnotations(resFile)
        else:
            anns = resFile
        assert type(anns) == list, 'results in not an array of objects'
        annsImgIds = [ann['image_id'] for ann in anns]
        assert set(annsImgIds) == (set(annsImgIds) & set(self.getImgIds())), \
            'Results do not correspond to current coco set'
        if 'caption' in anns[0]:
            imgIds = set([img['id'] for img in res.dataset['images']]) & set([ann['image_id'] for ann in anns])
            res.dataset['images'] = [img for img in res.dataset['images'] if img['id'] in imgIds]
            for id, ann in enumerate(anns):
                ann['id'] = id+1
        elif 'bbox' in anns[0] and not anns[0]['bbox'] == []:
            res.dataset['categories'] = copy.deepcopy(self.dataset['categories'])
            for id, ann in enumerate(anns):
                bb = ann['bbox']
                x1, x2, y1, y2 = [bb[0], bb[0]+bb[2], bb[1], bb[1]+bb[3]]
                if not 'segmentation' in ann:
                    ann['segmentation'] = [[x1, y1, x1, y2, x2, y2, x2, y1]]
                ann['area'] = bb[2]*bb[3]
                ann['id'] = id+1
                ann['iscrowd'] = 0
        elif 'segmentation' in anns[0]:
            res.dataset['categories'] = copy.deepcopy(self.dataset['categories'])
            for id, ann in enumerate(anns):
                # now only support compressed RLE format as segmentation results
                #ann['area'] = maskUtils.area(ann['segmentation'])
                #if not 'bbox' in ann:
                #    ann['bbox'] = maskUtils.toBbox(ann['segmentation'])
                ann['id'] = id+1
                ann['iscrowd'] = 0
        elif 'keypoints' in anns[0]:
            res.dataset['categories'] = copy.deepcopy(self.dataset['categories'])
            for id, ann in enumerate(anns):
                s = ann['keypoints']
                x = s[0::3]
                y = s[1::3]
                x0, x1, y0, y1 = np.min(x), np.max(x), np.min(y), np.max(y)
                ann['area'] = (x1-x0)*(y1-y0)
                ann['id'] = id + 1
                ann['bbox'] = [x0, y0, x1-x0, y1-y0]
        print('DONE (t={:0.2f}s)'.format(time.time() - tic))

        res.dataset['annotations'] = anns
        res.createIndex()
        return res

    def download(self, tarDir=None, imgIds=[]):
        '''
        Download COCO images from mscoco.org server.
        :param tarDir (str): COCO results directory name
               imgIds (list): images to be downloaded
        :return:
        '''
        if tarDir is None:
            print('Please specify target directory')
            return -1
        if len(imgIds) == 0:
            imgs = self.imgs.values()
        else:
            imgs = self.loadImgs(imgIds)
        N = len(imgs)
        if not os.path.exists(tarDir):
            os.makedirs(tarDir)
        for i, img in enumerate(imgs):
            tic = time.time()
            fname = os.path.join(tarDir, img['file_name'])
            if not os.path.exists(fname):
                urlretrieve(img['coco_url'], fname)
            print('downloaded {}/{} images (t={:0.1f}s)'.format(i, N, time.time() - tic))

    def loadNumpyAnnotations(self, data):
        """
        Convert result data from a numpy array [Nx7] where each row contains {imageID,x1,y1,w,h,score,class}
        :param  data (numpy.ndarray)
        :return: annotations (python nested list)
        """
        print('Converting ndarray to lists...')
        assert(type(data) == np.ndarray)
        print(data.shape)
        assert(data.shape[1] == 7)
        N = data.shape[0]
        ann = []
        for i in range(N):
            if i % 1000000 == 0:
                print('{}/{}'.format(i, N))
            ann += [{
                'image_id': int(data[i, 0]),
                'bbox': [data[i, 1], data[i, 2], data[i, 3], data[i, 4]],
                'score': data[i, 5],
                'category_id': int(data[i, 6]),
                }]
        return ann

    # def annToRLE(self, ann):
    #     """
    #     Convert annotation which can be polygons, uncompressed RLE to RLE.
    #     :return: binary mask (numpy 2D array)
    #     """
    #     t = self.imgs[ann['image_id']]
    #     h, w = t['height'], t['width']
    #     segm = ann['segmentation']
    #     if type(segm) == list:
    #         # polygon -- a single object might consist of multiple parts
    #         # we merge all parts into one mask rle code
    #         rles = maskUtils.frPyObjects(segm, h, w)
    #         rle = maskUtils.merge(rles)
    #     elif type(segm['counts']) == list:
    #         # uncompressed RLE
    #         rle = maskUtils.frPyObjects(segm, h, w)
    #     else:
    #         # rle
    #         rle = ann['segmentation']
    #     return rle

    # def annToMask(self, ann):
    #     """
    #     Convert annotation which can be polygons, uncompressed RLE, or RLE to binary mask.
    #     :return: binary mask (numpy 2D array)
    #     """
    #     rle = self.annToRLE(ann)
    #     m = maskUtils.decode(rle)
    #     return m
I did not delete the commented-out parts, so you can compare them with the original file.
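For reference, here is a minimal usage sketch of the LandMark class for a single image. The dataset path and output file name are assumptions and must match your own layout; because CoLandMark.py selects the Agg backend, the figure is saved to a file rather than shown interactively.

import skimage.io as io
import matplotlib.pyplot as plt
from CoLandMark import LandMark

dataDir = 'data/widerface/WIDER_train/images'   # assumed path to the WIDER_train images
coco = LandMark('trainset.json')

img = coco.loadImgs(coco.getImgIds()[:1])[0]            # first image in the index
anns = coco.loadAnns(coco.getAnnIds(imgIds=[img['id']]))

plt.imshow(io.imread('{}/{}'.format(dataDir, img['file_name'])))
plt.axis('off')
coco.showAnns(anns)                # draws the landmark polygons on the current axes
plt.savefig('preview.png')         # hypothetical output name; Agg backend: save instead of show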