
Research on Lightweight Image Segmentation Model for Grain Tank of an Unmanned Grain Cart in Rice Harvesting

Fund: National Key R&D Program of China (2021YFD2000600); Open Project of the National Key Laboratory of Agricultural Equipment Technology (South China Agricultural University) (SKLAET-202404)

    Abstract:

    Aiming to address the issue of low targeting accuracy in controlling the unloading arm position during rice transfer from unmanned rice harvesters to grain transport vehicles, which relies on Beidou positioning information of the harvester and transport vehicle, a GTSM network for visual segmentation of grain compartment images was proposed to provide positional reference information for the unloading arm. Based on the DeepLabv3+ architecture, the lightweight ShuffleNetv2 backbone replaced Xception, and the atrous convolutions in the ASPP module were replaced with depthwise separable convolutions, followed by low-rank decomposition into micro-factorized convolutions to reduce model complexity and improve inference speed. Additionally, an SE channel attention mechanism was introduced in the shallow feature branch to enhance the model’s ability to utilize low-level features such as grain compartment edges and textures. Experimental results showed that GTSM achieved a mean intersection over union (mIoU) of 96.06% and a mean pixel accuracy (mPA) of 98.69%, representing improvements of 0.78 and 0.67 percentage points, respectively, over the baseline DeepLabv3+. Meanwhile, model complexity was significantly reduced, with parameter count and memory usage reduced to 1/9 of the original, and inference speed was increased by 166%. These results demonstrated that the proposed GTSM balanced segmentation accuracy and inference speed, providing a reference for automated grain compartment segmentation in field grain transport vehicles.
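The SE channel attention added to the shallow feature branch can be sketched as below. This is a generic squeeze-and-excitation block, not the paper's exact implementation; the channel count and reduction ratio are illustrative assumptions.

```python
# Minimal squeeze-and-excitation (SE) channel attention block, as would be
# applied to the low-level feature branch of the segmentation network.
# Channel count (48) and reduction ratio (8) are assumed, not the paper's values.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global average pool
        self.fc = nn.Sequential(                       # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # reweight channels

if __name__ == "__main__":
    feats = torch.randn(2, 48, 64, 64)                 # e.g. shallow backbone features
    out = SEBlock(48, reduction=8)(feats)
    print(out.shape)                                   # torch.Size([2, 48, 64, 64])
```

Because the sigmoid weights lie in (0, 1), the block can only attenuate channels, letting the decoder emphasize edge and texture channels of the grain tank while suppressing the rest.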
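The reported mIoU and mPA metrics can be computed from a per-class confusion matrix as follows. The two-class setup (background vs. grain tank) and the toy labels are assumptions for illustration.

```python
# Hedged sketch of the evaluation metrics from the abstract: mean intersection
# over union (mIoU) and mean pixel accuracy (mPA), derived from a confusion
# matrix. The binary background/grain-tank labeling is an assumed setup.
import numpy as np

def confusion(pred: np.ndarray, gt: np.ndarray, n_cls: int) -> np.ndarray:
    """Confusion matrix: rows = ground-truth class, columns = predicted class."""
    idx = n_cls * gt.ravel() + pred.ravel()
    return np.bincount(idx, minlength=n_cls * n_cls).reshape(n_cls, n_cls)

def miou_mpa(cm: np.ndarray) -> tuple:
    tp = np.diag(cm).astype(float)
    iou = tp / (cm.sum(0) + cm.sum(1) - tp)   # per-class IoU = TP / (TP + FP + FN)
    pa = tp / cm.sum(1)                        # per-class pixel accuracy = TP / GT total
    return iou.mean(), pa.mean()

if __name__ == "__main__":
    gt = np.array([[0, 0, 1, 1], [0, 1, 1, 1]])     # toy ground-truth mask
    pred = np.array([[0, 0, 1, 1], [0, 0, 1, 1]])   # toy predicted mask
    cm = confusion(pred, gt, 2)
    miou, mpa = miou_mpa(cm)
    print(miou, mpa)                                 # 0.775 0.9
```

The same routine, averaged over a test set, would yield figures comparable to the 96.06% mIoU and 98.69% mPA reported for GTSM.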

Cite this article

ZHAO Runmao, HUANG Jiatao, MAN Zhongxian, LUO Xiwen, HU Lian, HE Jie, WANG Pei, HUANG Peikui. Research on Lightweight Image Segmentation Model for Grain Tank of an Unmanned Grain Cart in Rice Harvesting[J]. Transactions of the Chinese Society for Agricultural Machinery, 2025, 56(6): 196-204.

History
  • Received: 2025-05-03
  • Published online: 2025-06-10