Place the dataset under datasets/coco_minitrain_10k. The directory layout is as follows:
datasets/
└── coco_minitrain_10k/
    ├── annotations/
    │   ├── instances_train2017.json
    │   ├── instances_val2017.json
    │   └── ... (other annotation files)
    ├── train2017/
    │   ├── 000000000001.jpg
    │   └── ... (other training images)
    ├── val2017/
    │   ├── 000000000001.jpg
    │   └── ... (other validation images)
    └── test2017/
        ├── 000000000001.jpg
        └── ... (other test images)
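Optionally, you can sanity-check that the layout is in place before going further; a minimal Python sketch (a hypothetical helper, not part of the repo; adjust the root if your path differs):

from pathlib import Path

root = Path("datasets/coco_minitrain_10k")
for sub in ("annotations", "train2017", "val2017", "test2017"):
    assert (root / sub).is_dir(), f"missing directory: {root / sub}"
print("train images found:", len(list((root / "train2017").glob("*.jpg"))))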
conda create -n yolo11_py310 python=3.10
conda activate yolo11_py310
pip install -U -r train/requirements.txt
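To confirm the environment is ready, ultralytics ships a built-in self-check (assuming ultralytics is installed via train/requirements.txt):

import ultralytics

ultralytics.checks()  # prints the detected Python/torch/CUDA versions and GPU info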
First, download the pretrained weights:
bash 0_download_wgts.sh
Run a prediction test:
bash 1_run_predict_yolo11.sh
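Under the hood the script presumably wraps a standard ultralytics predict call; a minimal sketch of that flow (the source image path is a placeholder, not taken from the repo):

from ultralytics import YOLO

# Load the pretrained weights downloaded in the previous step.
model = YOLO("wgts/yolo11n.pt")

# Run inference on one image; save=True writes the annotated result under runs/.
results = model.predict(source="path/to/image.jpg", save=True)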
The prediction results are saved under the runs folder; an example output:

A one-click training script is already prepared; just run it:
bash 2_run_train_yolo11.sh
The code doing the actual work is simple and lives in train/train_yolo11.py:
import os

from ultralytics import YOLO

# Assumed here: paths are resolved relative to this script's directory;
# adjust if wgts/ and cfg/ live elsewhere in your checkout.
curr_path = os.path.dirname(os.path.abspath(__file__))

# Load a model
model = YOLO(curr_path + "/wgts/yolo11n.pt")

# Train the model
train_results = model.train(
    data=curr_path + "/cfg/coco128.yaml",  # path to dataset YAML
    epochs=100,   # number of training epochs
    imgsz=640,    # training image size
    device="0",   # device to run on, i.e. device=0 or device=0,1,2,3 or device=cpu
)

# Evaluate model performance on the validation set
metrics = model.val()
It mainly just configures the training parameters (dataset path, number of epochs, GPU ID, image size, etc.) and then starts training.
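If you want the headline numbers out of the metrics object returned by model.val(), the usual accessors are (per the ultralytics DetMetrics API):

print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
print(metrics.box.map75)  # mAP75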
After training finishes, the training logs are written to the runs/train folder; for example, a val prediction image from training:

That completes the training stage.
Deploying the Model with TensorRT
Run the one-click ONNX export script:
bash 3_run_export_onnx.sh
The script already simplifies the exported ONNX graph (the "sim" step).
Both the raw ONNX model and its simplified _sim version are saved in the wgts folder.
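The export-plus-simplify flow typically looks like the sketch below, assuming the simplification is done with onnxsim (paths and arguments are assumptions, not read from 3_run_export_onnx.sh):

import onnx
from onnxsim import simplify
from ultralytics import YOLO

# Export the PyTorch weights to ONNX at the training resolution.
YOLO("wgts/yolo11n.pt").export(format="onnx", imgsz=640)  # -> wgts/yolo11n.onnx

# Simplify the graph and save the _sim variant used by the TensorRT build step.
model = onnx.load("wgts/yolo11n.onnx")
model_sim, ok = simplify(model)
assert ok, "onnx-simplifier failed to validate the simplified graph"
onnx.save(model_sim, "wgts/yolo11n_sim.onnx")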
Download the matching TensorRT TAR package from NVIDIA's website (https://developer.nvidia.com/tensorrt/download); the basic setup steps after unpacking are:
tar zxvf TensorRT-xxx-.tar.gz

# symlink trtexec
sudo ln -s /path/to/TensorRT/bin/trtexec /usr/local/bin

# verify
trtexec --help

# install the TensorRT Python bindings
cd python
pip install tensorrt-xxx.whl
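To verify the Python bindings installed correctly, NVIDIA's install guide suggests a check along these lines:

import tensorrt as trt

print(trt.__version__)            # should match the TAR package you unpacked
assert trt.Builder(trt.Logger())  # confirms the builder can be created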
Run the one-click script that builds the TensorRT engine:
bash 4_build_trt_engine.sh
On success, yolo11n.engine is generated under wgts, with a log similar to:
[10/02/2024-21:28:48] [V] === Explanations of the performance metrics ===
[10/02/2024-21:28:48] [V] Total Host Walltime: the host walltime from when the first query (after warmups) is enqueued to when the last query is completed.
[10/02/2024-21:28:48] [V] GPU Compute Time: the GPU latency to execute the kernels for a query.
[10/02/2024-21:28:48] [V] Total GPU Compute Time: the summation of the GPU Compute Time of all the queries. If this is significantly shorter than Total Host Walltime, the GPU may be under-utilized because of host-side overheads or data transfers.
[10/02/2024-21:28:48] [V] Throughput: the observed throughput computed by dividing the number of queries by the Total Host Walltime. If this is significantly lower than the reciprocal of GPU Compute Time, the GPU may be under-utilized because of host-side overheads or data transfers.
[10/02/2024-21:28:48] [V] Enqueue Time: the host latency to enqueue a query. If this is longer than GPU Compute Time, the GPU may be under-utilized.
[10/02/2024-21:28:48] [V] H2D Latency: the latency for host-to-device data transfers for input tensors of a single query.
[10/02/2024-21:28:48] [V] D2H Latency: the latency for device-to-host data transfers for output tensors of a single query.
[10/02/2024-21:28:48] [V] Latency: the summation of H2D Latency, GPU Compute Time, and D2H Latency. This is the latency to infer a single query.
[10/02/2024-21:28:48] [I] 
&&&& PASSED TensorRT.trtexec [TensorRT v100500] [b18] # trtexec --onnx=../wgts/yolo11n_sim.onnx --saveEngine=../wgts/yolo11n.engine --fp16 --verbose
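For reference, the same build can be expressed through the TensorRT Python API instead of trtexec; a minimal sketch, assuming TensorRT 10.x (where explicit batch is the only mode) and the paths from the command above:

import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(0)  # TRT 10.x: networks are always explicit-batch
parser = trt.OnnxParser(network, logger)

# Parse the simplified ONNX model into the network definition.
with open("../wgts/yolo11n_sim.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # same effect as trtexec --fp16

# Build and serialize the engine to disk.
engine_bytes = builder.build_serialized_network(network, config)
with open("../wgts/yolo11n.engine", "wb") as f:
    f.write(engine_bytes)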
Run the one-click inference script:
bash 5_infer_trt.sh
The actual TensorRT inference code is in deploy/infer_trt.py. A successful run prints a log like:
------ trt infer success! ------
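The core of such a script likely follows the standard TensorRT 10.x Python inference loop; a minimal sketch assuming pycuda for device memory (pre/post-processing omitted; this is not the repo's exact deploy/infer_trt.py):

import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("wgts/yolo11n.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
stream = cuda.Stream()

# Allocate a host/device buffer pair for every I/O tensor and bind it.
bufs = {}
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    shape = tuple(engine.get_tensor_shape(name))  # static, since export used imgsz=640
    dtype = trt.nptype(engine.get_tensor_dtype(name))
    host = np.zeros(shape, dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    context.set_tensor_address(name, int(dev))
    bufs[name] = (host, dev, engine.get_tensor_mode(name))

# Fill the input host buffer with a preprocessed (letterboxed, CHW, normalized)
# image, then run H2D copy -> inference -> D2H copy on one stream.
for name, (host, dev, mode) in bufs.items():
    if mode == trt.TensorIOMode.INPUT:
        cuda.memcpy_htod_async(dev, host, stream)
context.execute_async_v3(stream_handle=stream.handle)
for name, (host, dev, mode) in bufs.items():
    if mode == trt.TensorIOMode.OUTPUT:
        cuda.memcpy_dtoh_async(host, dev, stream)
stream.synchronize()
print("------ trt infer success! ------")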
The inference result is saved to deploy/output.jpg, shown below:
