Journal of Zhejiang University (Engineering Science), Vol. 42, p. 916

the average of the depths of the two blocks above and below it is taken as the mean depth of that region. When the subdivision into regions is fine enough, a depth-gradient constraint along image rows can additionally be imposed to guard against noise.

1.5 Overall algorithm flow

The overall algorithm can be summarized in four parts:
1) Input the image sequence, extract Harris corners, and track them with the KLT algorithm to obtain the feature points that are visible throughout the sequence;
2) Assemble the coordinates of these feature points in each frame into the measurement matrix W, and obtain a projective reconstruction of the scene with the improved factorization method;
3) On the basis of self-calibration satisfying the DAQ (absolute dual quadric) constraint, upgrade to a Euclidean reconstruction of the scene together with the camera motion;
4) Partition the image into regions and decide from the depth information whether each region is obstacle or background.

2 Experimental results

To verify the effectiveness of the proposed obstacle-detection algorithm, experiments were carried out on a real outdoor scene. The image sequence has a resolution of 720 x 576 pixels and 400 frames in total. To obtain reasonably accurate camera displacements and to prevent the accumulated computation of the factorization method from growing too large, the original sequence was divided into 10 subsequences of 40 frames each; within each subsequence one frame was sampled every 4 frames, and obstacle detection was performed on the resulting 10 frames. Taking one subsequence (frames 200-239) as an example, Fig. 2 shows its first and last sampled frames (i.e., frames 200 and 236 of the original sequence).

Fig. 2 First and last frames of the subsequence ((a) frame 200; (b) frame 236)

First, 1500 feature points were selected and matched (Fig. 3); the factorization method then yielded a projective reconstruction of the scene (Fig. 4). Self-calibration under the DAQ constraint produced an estimate of the absolute dual quadric (a 4 x 4 symmetric rank-3 matrix), from which the Euclidean reconstruction of the scene was obtained (Fig. 5). It can be seen that the Euclidean reconstruction faithfully reflects the positions of the feature points in the world coordinate system as well as the camera motion (the small coordinate frames at the bottom of the 3-D Euclidean reconstruction mark the successive positions of the moving camera's optical center).

Fig. 3 Feature-point detection and tracking ((a) frame 200; (b) frame 236)
Fig. 4 3-D view of the projective reconstruction of the scene
Fig. 5 3-D view of the Euclidean reconstruction of the scene

Finally, every frame of the subsequence was partitioned into 35 x 28 subregions of 20 x 20 pixels (leaving 10-pixel margins on the left and right and a 16-pixel margin at the top as borders). Region depths and depth gradients were computed, and with thresholds r1 = 8, r2 = 0.3 and r3 = 4 the obstacle regions were determined and marked with black blocks, as shown in Fig. 6. Fig. 7 takes the first frame, a middle frame and the last frame as representatives to illustrate the far-to-near obstacle-detection results over the whole 400-frame image sequence.

Fig. 6 Obstacle regions determined and labeled ((a) frame 200; (b) frame 236)
Fig. 7 Obstacle-detection results ((a) image sequence; (b) detection results)

3 Conclusion

The monocular-vision obstacle-detection method first obtains the projective structure of the scene by the factorization method, upgrades it to a Euclidean reconstruction through self-calibration, and then detects the obstacles in the image sequence from the depth information given by the Euclidean reconstruction. Experimental results on real outdoor scene images show that, within a certain detection range, the method achieves good obstacle-detection performance in outdoor environments.
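Step 1) of the algorithm flow extracts Harris corners and tracks them with KLT. As a dependency-free sketch of the corner detector only (not the authors' implementation; the window size, the constant k = 0.04 and the box filter in place of the usual Gaussian are assumptions), the Harris response can be written as:

```python
import numpy as np

def harris_response(img, k=0.04, win=5):
    # Harris response R = det(M) - k * trace(M)^2, where M is the structure
    # tensor summed over a win x win window (box filter instead of a
    # Gaussian, to keep the sketch dependency-free).
    Iy, Ix = np.gradient(img.astype(float))  # axis 0 = rows (y), axis 1 = cols (x)

    def box(a):
        # win x win box filter via an integral image ('same' output size).
        pad = win // 2
        ap = np.pad(a, pad, mode='edge')
        c = np.zeros((ap.shape[0] + 1, ap.shape[1] + 1))
        c[1:, 1:] = ap.cumsum(axis=0).cumsum(axis=1)
        H, W = a.shape
        return (c[win:win + H, win:win + W] - c[:H, win:win + W]
                - c[win:win + H, :W] + c[:H, :W])

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A synthetic 40 x 40 image with a single step corner at pixel (20, 20).
img = np.zeros((40, 40))
img[20:, 20:] = 1.0
R = harris_response(img)
```

The response is strongly positive at the corner, negative along the edges, and near zero in flat areas. In practice the detected corners would then be tracked frame to frame with a pyramidal KLT tracker (e.g. OpenCV's calcOpticalFlowPyrLK), keeping only the points tracked through the whole subsequence.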
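Step 2) stacks the tracked image coordinates of all frames into the measurement matrix W and factorizes it. The sketch below shows the classical Tomasi-Kanade affine factorization on synthetic, noise-free data; the improved projective variant used in the paper additionally estimates per-point projective depths, which is not reproduced here. All cameras and points are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
F, P = 10, 50                      # number of frames and tracked points
X = rng.normal(size=(3, P))        # synthetic 3-D scene points

# Stack the 2-D observations of all frames into the 2F x P matrix W.
W = np.zeros((2 * F, P))
for f in range(F):
    M = rng.normal(size=(2, 3))    # synthetic affine camera
    t = rng.normal(size=(2, 1))    # translation
    W[2 * f:2 * f + 2] = M @ X + t

# Tomasi-Kanade: subtracting the per-row centroid removes t, after which
# the centred matrix has rank 3 and factors into motion and shape by SVD.
Wc = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
motion = U[:, :3] * s[:3]          # stacked 2x3 camera matrices
shape = Vt[:3]                     # scene structure, up to an affine ambiguity
residual = np.linalg.norm(Wc - motion @ shape)
```

The rank-3 truncation reproduces the centred measurements exactly on noise-free data; with real tracks it gives the least-squares optimal factorization.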
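Step 3) upgrades the projective reconstruction by self-calibration with the absolute dual quadric. The working relation is that P Ω* Pᵀ is, up to scale, the dual image of the absolute conic K Kᵀ, so the intrinsics K follow from an upper-triangular factorization. A minimal synthetic sketch (the intrinsic parameters below are placeholders, not the paper's calibration):

```python
import numpy as np

# Once the absolute dual quadric Omega* is estimated, each camera's
# intrinsics follow from the dual image of the absolute conic:
#   P @ Omega* @ P.T  ~  K @ K.T   (up to scale).
K_true = np.array([[800.0,   0.0, 360.0],
                   [  0.0, 800.0, 288.0],
                   [  0.0,   0.0,   1.0]])
P = K_true @ np.hstack([np.eye(3), np.zeros((3, 1))])  # canonical camera
Omega = np.diag([1.0, 1.0, 1.0, 0.0])                  # DAQ in this frame

DIAC = P @ Omega @ P.T      # equals K K^T for this camera
DIAC /= DIAC[2, 2]          # fix the arbitrary projective scale

# Recover upper-triangular K from K K^T with the exchange (flip) trick,
# since numpy's Cholesky returns a lower-triangular factor.
J = np.fliplr(np.eye(3))
L = np.linalg.cholesky(J @ DIAC @ J)
K = J @ L @ J
K /= K[2, 2]
```

The flip trick works because J reverses rows and columns, so J L J is upper triangular while still factoring the original symmetric positive-definite matrix.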
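Step 4) partitions each 720 x 576 frame into 35 x 28 blocks of 20 x 20 pixels (10-pixel side margins, 16-pixel top margin) and classifies blocks by depth. The sketch below applies only a mean-depth threshold r1; the paper's full rule also uses the gradient thresholds r2 and r3, which are omitted here:

```python
import numpy as np

def label_obstacles(depth, block=20, margin_lr=10, margin_top=16, r1=8.0):
    # Partition the depth map into block x block subregions, skipping the
    # stated margins, and label a region as obstacle when its mean depth
    # falls below the threshold r1.  For a 576 x 720 map this yields the
    # 28 x 35 grid used in the experiments.
    H, W = depth.shape
    rows = (H - margin_top) // block
    cols = (W - 2 * margin_lr) // block
    means = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y0 = margin_top + i * block
            x0 = margin_lr + j * block
            means[i, j] = depth[y0:y0 + block, x0:x0 + block].mean()
    # With a fine subdivision, a row-wise depth-gradient test on `means`
    # (thresholds r2, r3 in the text) would additionally suppress noise.
    return means < r1

# Synthetic depth map: background at 20 units, one near object at 5 units.
depth = np.full((576, 720), 20.0)
depth[116:176, 110:190] = 5.0
mask = label_obstacles(depth)
```

The True entries of the mask correspond to the blocks that would be marked black in Fig. 6.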
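The sampling protocol of the experiments — 400 frames split into 40-frame subsequences, keeping every fourth frame — can be reproduced with a small helper (the function name is illustrative):

```python
def split_and_sample(n_frames, chunk=40, step=4):
    # Split frames [0, n_frames) into consecutive chunks of `chunk` frames
    # and keep every `step`-th frame within each chunk.
    return [list(range(start, min(start + chunk, n_frames), step))
            for start in range(0, n_frames, chunk)]

subsequences = split_and_sample(400)
sixth = subsequences[5]   # the example subsequence, original frames 200-239
```

The sixth subsequence is exactly the example in the text: its ten sampled frames run 200, 204, ..., 236.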
Du Xin, Zhou Wei, Zhu Yun-fang, Liu Ji-lin. Obstacle detection based on monocular vision. Journal of Zhejiang University (Engineering Science), 2008, 42(6).
Affiliations: Institute of Information and Communication Engineering, Zhejiang University, Hangzhou 310027, China (Du Xin, Zhou Wei, Liu Ji-lin); College of Computer and Information Engineering, Zhejiang Gongshang University, Hangzhou 310035, China (Zhu Yun-fang).