Modules and utils for YOLOv5
- class yolort.v5.AutoShape(model)[source]
YOLOv5 input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS.
- classes = None
- conf = 0.25
- forward(imgs, size=640, augment=False, profile=False)[source]
- Inference from various sources (a usage sketch follows this class's attribute list). For a height=640, width=1280 RGB image, example inputs are:
file: imgs = 'data/images/zidane.jpg'  # str or PosixPath
OpenCV: imgs = cv2.imread('image.jpg')[:, :, ::-1]  # HWC, BGR to RGB, x(640,1280,3)
PIL: imgs = Image.open('image.jpg') or ImageGrab.grab()  # HWC, x(640,1280,3)
numpy: imgs = np.zeros((640,1280,3))  # HWC
torch: imgs = torch.zeros(16,3,320,640)  # BCHW (scaled to size=640, 0-1 values)
multiple: imgs = [Image.open('image1.jpg'), Image.open('image2.jpg'), …]  # list of images
- iou = 0.45
- max_det = 1000
- multi_label = False
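As referenced above, here is a minimal end-to-end sketch of AutoShape inference using the load_yolov5_model helper documented later in this section; the checkpoint and image paths are placeholders.

```python
# A minimal sketch of AutoShape inference; assumes a local "yolov5s.pt"
# checkpoint and a sample image exist at the given (placeholder) paths.
from yolort.v5 import load_yolov5_model

model = load_yolov5_model("yolov5s.pt", autoshape=True)  # AutoShape-wrapped model
results = model("data/images/zidane.jpg", size=640)      # any input type listed above
```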
- class yolort.v5.Bottleneck(c1, c2, shortcut=True, g=1, e=0.5, version='r4.0')[source]
Standard bottleneck.
- Parameters
c1 (int) – number of input channels (ch_in)
c2 (int) – number of output channels (ch_out)
shortcut (bool) – whether to add a shortcut (residual) connection
g (int) – number of groups for the grouped convolution
e (float) – hidden channel expansion ratio
version (str) – Module version released by ultralytics. Possible values are ["r3.1", "r4.0"]. Default: "r4.0".
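Assuming the block follows the upstream YOLOv5 bottleneck design (a 1x1 reduction followed by a 3x3 convolution with an optional residual), a minimal shape sketch looks like this; the channel and spatial sizes are arbitrary.

```python
import torch
from yolort.v5 import Bottleneck

block = Bottleneck(64, 64, shortcut=True, g=1, e=0.5, version="r4.0")
out = block(torch.rand(1, 64, 32, 32))  # spatial size preserved, c2 output channels
```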
- class yolort.v5.BottleneckCSP(c1, c2, n=1, shortcut=True, g=1, e=0.5)[source]
CSP Bottleneck, as described in https://github.com/WongKinYiu/CrossStagePartialNetworks
- Parameters
c1 (int) – number of input channels (ch_in)
c2 (int) – number of output channels (ch_out)
n (int) – number of Bottleneck blocks to stack
shortcut (bool) – whether to add a shortcut (residual) connection
g (int) – number of groups for the grouped convolution
e (float) – hidden channel expansion ratio
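A similar hypothetical shape check for the CSP variant; n=2 stacks two bottlenecks on one branch of the cross-stage split, and the sizes are illustrative.

```python
import torch
from yolort.v5 import BottleneckCSP

csp = BottleneckCSP(64, 64, n=2)       # two stacked bottlenecks inside the CSP split
out = csp(torch.rand(1, 64, 32, 32))   # expected shape: (1, 64, 32, 32)
```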
- class yolort.v5.C3(c1, c2, n=1, shortcut=True, g=1, e=0.5, version='r4.0')[source]
CSP Bottleneck with 3 convolutions.
- Parameters
c1 (int) – number of input channels (ch_in)
c2 (int) – number of output channels (ch_out)
n (int) – number of Bottleneck blocks to stack
shortcut (bool) – whether to add a shortcut (residual) connection
g (int) – number of groups for the grouped convolution
e (float) – hidden channel expansion ratio
version (str) – Module version released by ultralytics. Possible values are ["r4.0"]. Default: "r4.0".
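A sketch along the same lines for C3; sizes are illustrative, and the output channel count is expected to equal c2.

```python
import torch
from yolort.v5 import C3

c3 = C3(128, 128, n=3, version="r4.0")
out = c3(torch.rand(1, 128, 40, 40))   # expected shape: (1, 128, 40, 40)
```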
- class yolort.v5.Conv(c1, c2, k=1, s=1, p=None, g=1, act=True, version='r4.0')[source]
Standard convolution.
- Parameters
c1 (int) – number of input channels (ch_in)
c2 (int) – number of output channels (ch_out)
k (int) – kernel size
s (int) – stride
p (Optional[int]) – padding; if None, it is derived automatically from the kernel size
g (int) – number of groups for the grouped convolution
act (bool or nn.Module) – determines the activation function
version (str) – Module version released by ultralytics. Possible values are ["r3.1", "r4.0"]. Default: "r4.0".
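A small sketch of the expected shape behaviour, assuming p=None triggers the usual YOLOv5 auto-padding derived from the kernel size; the sizes are arbitrary.

```python
import torch
from yolort.v5 import Conv

conv = Conv(3, 32, k=3, s=2)           # p=None lets the padding be derived from k
out = conv(torch.rand(1, 3, 64, 64))   # expected shape: (1, 32, 32, 32)
```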
- class yolort.v5.DWConv(c1, c2, k=1, s=1, act=True, version='r4.0')[source]
Depth-wise convolution class.
- Parameters
c1 (int) – number of input channels (ch_in)
c2 (int) – number of output channels (ch_out)
k (int) – kernel size
s (int) – stride
act (bool or nn.Module) – determines the activation function
version (str) – Module version released by ultralytics. Possible values are ["r3.1", "r4.0"]. Default: "r4.0".
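A hypothetical shape check; assuming the upstream YOLOv5 behaviour, the depth-wise grouping is derived internally from c1 and c2, so only channels, kernel and stride are passed.

```python
import torch
from yolort.v5 import DWConv

dw = DWConv(32, 64, k=3, s=1)          # groups are derived from c1 and c2 internally
out = dw(torch.rand(1, 32, 20, 20))    # expected shape: (1, 64, 20, 20)
```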
- class yolort.v5.Detect(nc=80, anchors=(), ch=(), inplace=True)[source]
- onnx_dynamic = False
- stride = None
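A hedged construction sketch for the detection head; the anchor and channel values below are illustrative, not the official yolov5s configuration, and the head is kept in training mode because eval-mode decoding additionally relies on the stride attribute being set.

```python
import torch
from yolort.v5 import Detect

head = Detect(nc=80, anchors=([10, 13, 16, 30, 33, 23],), ch=(128,))
head.train()                              # training mode returns the raw per-level maps
preds = head([torch.rand(1, 128, 20, 20)])
```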
- class yolort.v5.Focus(c1, c2, k=1, s=1, p=None, g=1, act=True, version='r4.0')[source]
Focus width-height (wh) information into channel space (c-space).
- Parameters
c1 (int) – number of input channels (ch_in)
c2 (int) – number of output channels (ch_out)
k (int) – kernel size
s (int) – stride
p (Optional[int]) – padding; if None, it is derived automatically from the kernel size
g (int) – number of groups for the grouped convolution
act (bool or nn.Module) – determines the activation function
version (str) – Module version released by ultralytics. Possible values are ["r3.1", "r4.0"]. Default: "r4.0".
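A shape sketch of the space-to-channel rearrangement, assuming the upstream YOLOv5 behaviour of halving width/height and feeding four times the input channels into the convolution; values are illustrative.

```python
import torch
from yolort.v5 import Focus

focus = Focus(3, 32, k=3)
out = focus(torch.rand(1, 3, 64, 64))  # expected shape: (1, 32, 32, 32)
```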
- class yolort.v5.SPPF(c1, c2, k=5, version='r4.0')[source]
Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5, by Glenn Jocher.
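A minimal sketch; assuming the upstream SPPF behaviour, the spatial size is preserved and the output has c2 channels. The sizes below are illustrative.

```python
import torch
from yolort.v5 import SPPF

sppf = SPPF(256, 256, k=5)
out = sppf(torch.rand(1, 256, 20, 20))  # expected shape: (1, 256, 20, 20)
```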
- yolort.v5.add_yolov5_context()[source]
Temporarily add the yolov5 folder to sys.path. Adapted from https://github.com/fcakyon/yolov5-pip/blob/0d03de6/yolov5/utils/general.py#L739-L754
torch.hub handles it in the same way: https://github.com/pytorch/pytorch/blob/d3e36fa/torch/hub.py#L387-L416
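The "temporarily" wording suggests a context manager; the sketch below assumes that usage pattern for loading a checkpoint whose pickled modules reference yolov5-relative imports. The checkpoint path is a placeholder.

```python
import torch
from yolort.v5 import add_yolov5_context

# Assumed context-manager usage: yolov5-relative imports resolve only inside the block.
with add_yolov5_context():
    ckpt = torch.load("yolov5s.pt", map_location="cpu")  # placeholder checkpoint path
```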
- yolort.v5.intersect_dicts(dict1, dict2, exclude=())[source]
Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using dict1 values.
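A small sketch of the matching rule: entries are kept from dict1 only when the same key exists in dict2 with the same tensor shape.

```python
import torch
from yolort.v5 import intersect_dicts

src = {"conv.weight": torch.zeros(8, 3, 3, 3), "fc.weight": torch.zeros(10, 8)}
dst = {"conv.weight": torch.zeros(8, 3, 3, 3), "fc.weight": torch.zeros(5, 8)}
kept = intersect_dicts(src, dst)  # only "conv.weight" matches in both key and shape
```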
- yolort.v5.letterbox(im: numpy.ndarray, new_shape: Tuple[int, int] = (640, 640), color: Tuple[int, int, int] = (114, 114, 114), auto: bool = True, scale_fill: bool = False, scaleup: bool = True, stride: int = 32)[source]
Resize and pad an image to new_shape while meeting stride-multiple constraints.
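A sketch assuming the upstream YOLOv5 return convention of (padded image, scale ratio, (dw, dh) padding); with auto=False the output is padded to exactly new_shape.

```python
import numpy as np
from yolort.v5 import letterbox

im = np.zeros((480, 640, 3), dtype=np.uint8)  # HWC image
im_padded, ratio, (dw, dh) = letterbox(im, new_shape=(640, 640), auto=False)
# im_padded.shape is expected to be (640, 640, 3)
```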
- yolort.v5.load_yolov5_model(checkpoint_path: str, autoshape: bool = False, verbose: bool = True)[source]
Creates a specified YOLOv5 model from a checkpoint.
- Parameters
checkpoint_path (str) – path of the YOLOv5 checkpoint, e.g. 'yolov5s.pt'
autoshape (bool) – apply the YOLOv5 .autoshape() wrapper to the model. Default: False.
verbose (bool) – print all information to screen. Default: True.
- Returns
YOLOv5 PyTorch model
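A sketch of loading the raw (non-autoshaped) model; the checkpoint path is a placeholder, and the un-wrapped model expects a pre-processed BCHW float tensor, as in the torch row of AutoShape.forward above.

```python
import torch
from yolort.v5 import load_yolov5_model

model = load_yolov5_model("yolov5s.pt", autoshape=False, verbose=False)
model = model.eval()
with torch.no_grad():
    preds = model(torch.rand(1, 3, 640, 640))  # raw model takes a normalized BCHW tensor
```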