The dataset is placed under datasets/coco_minitrain_10k.
The directory structure is as follows:
datasets/
└── coco_minitrain_10k/
    ├── annotations/
    │   ├── instances_train2017.json
    │   ├── instances_val2017.json
    │   ├── ... (other annotation files)
    ├── train2017/
    │   ├── 000000000001.jpg
    │   ├── ... (other training images)
    ├── val2017/
    │   ├── 000000000001.jpg
    │   ├── ... (other validation images)
    └── test2017/
        ├── 000000000001.jpg
        ├── ... (other test images)
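Before training, it is worth confirming that the images and annotations actually match this layout. A minimal sanity-check sketch (paths follow the structure above):

import json
import os
from glob import glob

root = "datasets/coco_minitrain_10k"

# Count images in each split directory
for split in ("train2017", "val2017", "test2017"):
    n = len(glob(os.path.join(root, split, "*.jpg")))
    print(f"{split}: {n} images")

# Peek at the training annotations (standard COCO instances format)
with open(os.path.join(root, "annotations", "instances_train2017.json")) as f:
    ann = json.load(f)
print(f"train annotations: {len(ann['images'])} images, "
      f"{len(ann['annotations'])} boxes, {len(ann['categories'])} categories")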
Set up the conda environment and install the dependencies:
conda create -n yolo11_py310 python=3.10
conda activate yolo11_py310
pip install -U -r train/requirements.txt
First, download the pretrained weights:
bash 0_download_wgts.sh
Run a quick prediction test:
bash 1_run_predict_yolo11.sh
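The script presumably wraps a standard Ultralytics predict call along these lines (a sketch only; the sample image path is illustrative):

from ultralytics import YOLO

# Load the downloaded pretrained weights
model = YOLO("wgts/yolo11n.pt")

# Run prediction on a sample image; save=True writes the annotated result under runs/
model.predict(
    source="datasets/coco_minitrain_10k/val2017/000000000001.jpg",
    imgsz=640,
    save=True,
)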
The prediction results are saved under the runs folder; the output looks like this:
A one-click training script is already prepared, so just run it:
bash 2_run_train_yolo11.sh
The code doing the actual work is very simple and lives in train/train_yolo11.py:
import os

from ultralytics import YOLO

# Directory that holds wgts/ and cfg/ (assumed here to be the script's own directory)
curr_path = os.path.dirname(os.path.abspath(__file__))

# Load a model
model = YOLO(curr_path + "/wgts/yolo11n.pt")

# Train the model
train_results = model.train(
    data=curr_path + "/cfg/coco128.yaml",  # path to dataset YAML
    epochs=100,  # number of training epochs
    imgsz=640,  # training image size
    device="0",  # device to run on, i.e. device=0 or device=0,1,2,3 or device=cpu
)

# Evaluate model performance on the validation set
metrics = model.val()
It mainly just sets the training parameters (dataset YAML path, number of epochs, GPU ID, image size, and so on) and then starts training.
After training finishes, the training logs are written under the runs/train folder; for example, the val prediction images produced during training look like this:
That completes the training part.
Deploying with TensorRT
Run the one-click ONNX export script:
bash 3_run_export_onnx.sh
The script already applies sim simplification to the exported ONNX; both the plain ONNX model and the simplified _sim ONNX model are saved in the wgts folder.
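The export step presumably amounts to an Ultralytics ONNX export followed by onnxsim simplification, roughly like the sketch below (file names assumed to match the wgts/ layout used elsewhere in this post):

import onnx
import onnxsim
from ultralytics import YOLO

# Export the trained weights to ONNX (written next to the .pt file)
model = YOLO("wgts/yolo11n.pt")
onnx_path = model.export(format="onnx", imgsz=640)

# Simplify the graph and save the _sim variant
sim_model, ok = onnxsim.simplify(onnx.load(onnx_path))
assert ok, "onnxsim simplification failed"
onnx.save(sim_model, "wgts/yolo11n_sim.onnx")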
Download the matching version of the TensorRT TAR package from NVIDIA's official site (https://developer.nvidia.com/tensorrt/download); the basic steps to extract and set it up are as follows:
tar zxvf TensorRT-xxx-.tar.gz
# Symlink trtexec
sudo ln -s /path/to/TensorRT/bin/trtexec /usr/local/bin
# Verify the installation
trtexec --help
# Install the TensorRT Python bindings
cd python
pip install tensorrt-xxx.whl
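A quick check that the Python bindings import correctly (the printed version should match the TAR package you installed):

import tensorrt as trt

print(trt.__version__)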
Run the one-click script that builds the TensorRT engine:
bash 4_build_trt_engine.sh
If everything works, yolo11n.engine is generated under the wgts directory, along with a log similar to the following:
[10/02/2024-21:28:48] [V] === Explanations of the performance metrics ===
[10/02/2024-21:28:48] [V] Total Host Walltime: the host walltime from when the first query (after warmups) is enqueued to when the last query is completed.
[10/02/2024-21:28:48] [V] GPU Compute Time: the GPU latency to execute the kernels for a query.
[10/02/2024-21:28:48] [V] Total GPU Compute Time: the summation of the GPU Compute Time of all the queries. If this is significantly shorter than Total Host Walltime, the GPU may be under-utilized because of host-side overheads or data transfers.
[10/02/2024-21:28:48] [V] Throughput: the observed throughput computed by dividing the number of queries by the Total Host Walltime. If this is significantly lower than the reciprocal of GPU Compute Time, the GPU may be under-utilized because of host-side overheads or data transfers.
[10/02/2024-21:28:48] [V] Enqueue Time: the host latency to enqueue a query. If this is longer than GPU Compute Time, the GPU may be under-utilized.
[10/02/2024-21:28:48] [V] H2D Latency: the latency for host-to-device data transfers for input tensors of a single query.
[10/02/2024-21:28:48] [V] D2H Latency: the latency for device-to-host data transfers for output tensors of a single query.
[10/02/2024-21:28:48] [V] Latency: the summation of H2D Latency, GPU Compute Time, and D2H Latency. This is the latency to infer a single query.
[10/02/2024-21:28:48] [I]
&&&& PASSED TensorRT.trtexec [TensorRT v100500] [b18] # trtexec --onnx=../wgts/yolo11n_sim.onnx --saveEngine=../wgts/yolo11n.engine --fp16 --verbose
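For reference, the same FP16 engine can also be built through the TensorRT Python API instead of trtexec. A minimal sketch, assuming a TensorRT 10.x install and paths relative to the repo root (this is not the repo's 4_build_trt_engine.sh):

import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit-batch network (the default in TensorRT 10)
parser = trt.OnnxParser(network, logger)

# Parse the simplified ONNX model
with open("wgts/yolo11n_sim.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX model")

# Enable FP16 and build a serialized engine
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
engine_bytes = builder.build_serialized_network(network, config)

with open("wgts/yolo11n.engine", "wb") as f:
    f.write(engine_bytes)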
Run the one-click inference script:
bash 5_infer_trt.sh
The actual TensorRT inference code is in deploy/infer_trt.py.
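Before digging into that script, a quick way to sanity-check the generated engine is to deserialize it and list its I/O tensors (a sketch, again assuming TensorRT 10.x):

import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
runtime = trt.Runtime(logger)

with open("wgts/yolo11n.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# List I/O tensors; a YOLO11n detection engine typically has one image input and one prediction output
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(name, engine.get_tensor_mode(name), engine.get_tensor_shape(name))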
A successful run prints a log like:
------ trt infer success! ------
The inference result is saved to deploy/output.jpg, as shown below: