Hyper 3D-AI

Artificial Intelligence for 3D multimodal point cloud classification

The aim of this project is to develop software tools that can efficiently analyse images in a three-dimensional context – for example, medical images or pictures from cameras mounted on self-driving cars. Artificial intelligence (AI) is already capable of detecting anomalies in MRI images, for example, by classifying the image data. However, many existing AI algorithms only work with two-dimensional images: they can analyse neighbouring pixels in an image, but they cannot tell whether those pixels actually lie on the same surface in reality.

“We work with point clouds, where we have three-dimensional coordinates for each point,” says Dr. Sandra Lorenz of the Helmholtz Institute Freiberg for Resource Technology. “That is a completely different architecture from what is used for analysing pixels in photos. However, the current methods can’t really cope properly with these point clouds yet, even though point clouds offer a much better depiction of the real world.” The researchers now want to close this gap. Characterising points directly in 3D space will open up new possibilities in fields such as exploration and mining, medicine and autonomous systems.
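The difference Lorenz describes can be made concrete with a minimal sketch (invented data, pure Python): each point carries three-dimensional coordinates plus per-point attributes, and neighbourhood is defined by Euclidean distance in space rather than by adjacency in an image grid. The point layout and spectral values below are purely illustrative, not project data.

```python
import math

# A point-cloud point: 3D coordinates plus per-point attributes
# (here a short, made-up spectral signature).
points = [
    {"xyz": (0.0, 0.0, 0.0), "spectrum": [0.12, 0.30, 0.55]},
    {"xyz": (0.1, 0.0, 0.0), "spectrum": [0.11, 0.31, 0.53]},
    {"xyz": (0.1, 0.1, 5.0), "spectrum": [0.40, 0.42, 0.20]},  # far away in depth
]

def neighbours(points, query_idx, radius):
    """Return indices of points within `radius` of the query point in 3D.

    Unlike pixel adjacency in a 2D image, this respects true spatial
    distance: two points that would be adjacent pixels in a photo can
    still be far apart in depth and are then not neighbours here.
    """
    q = points[query_idx]["xyz"]
    return [
        i for i, p in enumerate(points)
        if i != query_idx and math.dist(q, p["xyz"]) <= radius
    ]

print(neighbours(points, 0, radius=0.5))  # [1] — point 2 is excluded by its depth
```

In a photo, point 2 would sit right next to points 0 and 1; only the third coordinate reveals that it belongs to a different surface.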

The AI should then be able to perform multimodal classification – in other words, to distinguish objects or domains in data coming from multiple sensors. In mining, as one possible application, this could mean that the software automatically recognises deposits of mineral raw materials based on, for example, their spectral properties or colours.
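One way to picture such multimodal classification is the following toy sketch (all numbers invented, and not the project's actual method): per-point features from several sensors are fused into a single vector, which is then compared against labelled class prototypes.

```python
import math

def fuse(color, spectrum):
    """Early fusion: concatenate features from both sensors into one vector."""
    return list(color) + list(spectrum)

# Hypothetical class prototypes, e.g. mean fused vectors of labelled samples.
# Sensor 1 contributes RGB colour, sensor 2 a few spectral reflectance bands.
prototypes = {
    "ore":       fuse((0.3, 0.3, 0.2), (0.6, 0.7, 0.2)),
    "host_rock": fuse((0.5, 0.5, 0.5), (0.2, 0.2, 0.3)),
}

def classify(color, spectrum):
    """Nearest-centroid classification on the fused feature vector."""
    v = fuse(color, spectrum)
    return min(prototypes, key=lambda c: math.dist(v, prototypes[c]))

print(classify((0.31, 0.28, 0.22), (0.58, 0.69, 0.21)))  # → "ore"
```

Real systems replace the nearest-centroid step with a trained deep network and must also decide *where* to fuse (early, intermediate or late fusion), but the principle – one decision drawn from several sensors at once – is the same.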


Publications


Afifi, A. J., Thiele, S. T., Rizaldy, A., Lorenz, S., Ghamisi, P., Tolosana-Delgado, R., Kirsch, M., Gloaguen, R., & Heizmann, M. (2024). Tinto: Multisensor Benchmark for 3-D Hyperspectral Point Cloud Segmentation in the Geosciences. IEEE Transactions on Geoscience and Remote Sensing, 62, 1–15. https://doi.org/10.1109/TGRS.2023.3340293

Bihler, M., Roming, L., Jiang, Y., Afifi, A. J., Aderhold, J., Čibiraitė-Lukenskienė, D., Lorenz, S., Gloaguen, R., Gruna, R., & Heizmann, M. (2023). Multi-sensor data fusion using deep learning for bulky waste image classification. Automated Visual Inspection and Machine Vision V, 12623, 69–82. https://doi.org/10.1117/12.2673838

Thiele, S., Afifi, A. J., Lorenz, S., Tolosana-Delgado, R., Kirsch, M., Ghamisi, P., & Gloaguen, R. (2023). LithoNet: A benchmark dataset for machine learning with digital outcrops (No. EGU23-14007). Copernicus Meetings. https://doi.org/10.5194/egusphere-egu23-14007

HIF-EXPLO. (2022). hifexplo/hylite [Computer software]. https://github.com/hifexplo/hylite (Original work published 2020)

Schambach, M., Shi, J., & Heizmann, M. (2021). Spectral Reconstruction and Disparity from Spatio-Spectrally Coded Light Fields via Multi-Task Deep Learning. 2021 International Conference on 3D Vision (3DV), 186–196. https://doi.org/10.1109/3DV53792.2021.00029

Thiele, S., Lorenz, S., Bnoulkacem, Z., Kirsch, M., & Gloaguen, R. (2021). Hyperspectral mineral mapping of cliffs using a UAV mounted Hyspex Mjolnir VNIR-SWIR sensor. Second EAGE Workshop on Unmanned Aerial Vehicles, 1–3. https://doi.org/10.3997/2214-4609.2021629011

Thiele, S. T., Lorenz, S., Kirsch, M., Cecilia Contreras Acosta, I., Tusa, L., Herrmann, E., Möckel, R., & Gloaguen, R. (2021). Multi-scale, multi-sensor data integration for automated 3-D geological mapping. Ore Geology Reviews, 136, 104252. https://doi.org/10.1016/j.oregeorev.2021.104252

Kirsch, M., Lorenz, S., Thiele, S., & Gloaguen, R. (2021). Characterisation of Massive Sulphide Deposits in the Iberian Pyrite Belt Based on the Integration of Digital Outcrops and Multi-Scale, Multi-Source Hyperspectral Data. 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, 126–129. https://doi.org/10.1109/IGARSS47720.2021.9554149

Schambach, M. (2021). A highly textured multispectral light field dataset [Dataset]. Karlsruhe Institute of Technology (KIT). https://doi.org/10.35097/500

Li, L., & Heizmann, M. (2021). 2.5D-VoteNet: Depth Map based 3D Object Detection for Real-Time Applications. The 32nd British Machine Vision Conference 2021. https://publikationen.bibliothek.kit.edu/1000140306
thor%22%2C%22firstName%22%3A%22Lanxiao%22%2C%22lastName%22%3A%22Li%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%22%2C%22lastName%22%3A%22Heizmann%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22date%22%3A%222021%22%2C%22proceedingsTitle%22%3A%22The%2032nd%20British%20Machine%20Vision%20Conference%202021%22%2C%22conferenceName%22%3A%2232nd%20British%20Machine%20Vision%20Conference%20%28BMVC%202021%29%2C%20Online%2C%2022.11.2021%20%5Cu2013%2025.11.2021%22%2C%22language%22%3A%22de%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fpublikationen.bibliothek.kit.edu%5C%2F1000140306%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222022-07-06T08%3A23%3A37Z%22%7D%7D%2C%7B%22key%22%3A%22K94EW55G%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Schambach%20and%20Heizmann%22%2C%22parsedDate%22%3A%222020%22%2C%22numChildren%22%3A2%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BSchambach%2C%20M.%2C%20%26amp%3B%20Heizmann%2C%20M.%20%282020%29.%20%26lt%3Bb%26gt%3BA%20Multispectral%20Light%20Field%20Dataset%20and%20Framework%20for%20Light%20Field%20Deep%20Learning%26lt%3B%5C%2Fb%26gt%3B.%20%26lt%3Bi%26gt%3BIEEE%20Access%26lt%3B%5C%2Fi%26gt%3B%2C%20%26lt%3Bi%26gt%3B8%26lt%3B%5C%2Fi%26gt%3B%2C%20193492%26%23x2013%3B193502.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FACCESS.2020.3033056%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1109%5C%2FACCESS.2020.3033056%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22A%20Multispectral%20Light%20Field%20Dataset%20and%20Framework%20for%20Light
%20Field%20Deep%20Learning%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Maximilian%22%2C%22lastName%22%3A%22Schambach%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%22%2C%22lastName%22%3A%22Heizmann%22%7D%5D%2C%22abstractNote%22%3A%22Deep%20learning%20undoubtedly%20has%20had%20a%20huge%20impact%20on%20the%20computer%20vision%20community%20in%20recent%20years.%20In%20light%20field%20imaging%2C%20machine%20learning-based%20applications%20have%20significantly%20outperformed%20their%20conventional%20counterparts.%20Furthermore%2C%20multi-%20and%20hyperspectral%20light%20fields%20have%20shown%20promising%20results%20in%20light%20field-related%20applications%20such%20as%20disparity%20or%20shape%20estimation.%20Yet%2C%20a%20multispectral%20light%20field%20dataset%2C%20enabling%20data-driven%20approaches%2C%20is%20missing.%20Therefore%2C%20we%20propose%20a%20new%20synthetic%20multispectral%20light%20field%20dataset%20with%20depth%20and%20disparity%20ground%20truth.%20The%20dataset%20consists%20of%20a%20training%2C%20validation%20and%20test%20dataset%2C%20containing%20light%20fields%20of%20randomly%20generated%20scenes%2C%20as%20well%20as%20a%20challenge%20dataset%20rendered%20from%20hand-crafted%20scenes%20enabling%20detailed%20performance%20assessment.%20Additionally%2C%20we%20present%20a%20Python%20framework%20for%20light%20field%20deep%20learning.%20The%20goal%20of%20this%20framework%20is%20to%20ensure%20reproducibility%20of%20light%20field%20deep%20learning%20research%20and%20to%20provide%20a%20unified%20platform%20to%20accelerate%20the%20development%20of%20new%20architectures.%20The%20dataset%20is%20made%20available%20under%20dx.doi.org%5C%2F10.21227%5C%2Fy90t-xk47.%20The%20framework%20is%20maintained%20at%20gitlab.com%5C%2Fiiit-public%5C%2Flfcnn.%22%2C%22date%22%3A%222020%22%2C%22language%22%3A%22%22%2C%22DOI%22%3A%2210.1109%5C%2FACCESS.2020.3033056%22%2C%22ISSN%22%3A%222169-3536%22%2C%22url%22%3A%22%2
2%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222025-04-30T14%3A36%3A28Z%22%7D%7D%2C%7B%22key%22%3A%22UH5NHVXM%22%2C%22library%22%3A%7B%22id%22%3A4725570%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lorenz%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%201.35%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLorenz%2C%20S.%20%28n.d.%29.%20%26lt%3Bb%26gt%3BHyper%203D-AI%3A%20Artificial%20Intelligence%20for%203D%20multimodal%20point%20cloud%20classification%26lt%3B%5C%2Fb%26gt%3B.%20Retrieved%20July%207%2C%202022%2C%20from%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fwww.hzdr.de%5C%2Fpublications%5C%2FPubl-33167%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fwww.hzdr.de%5C%2Fpublications%5C%2FPubl-33167%26lt%3B%5C%2Fa%26gt%3B%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Hyper%203D-AI%3A%20Artificial%20Intelligence%20for%203D%20multimodal%20point%20cloud%20classification%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sandra%22%2C%22lastName%22%3A%22Lorenz%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22date%22%3A%22%22%2C%22language%22%3A%22de%22%2C%22DOI%22%3A%22%22%2C%22ISSN%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fwww.hzdr.de%5C%2Fpublications%5C%2FPubl-33167%22%2C%22collections%22%3A%5B%5D%2C%22dateModified%22%3A%222022-07-07T08%3A40%3A32Z%22%7D%7D%5D%7D
Afifi, A. J., Thiele, S. T., Rizaldy, A., Lorenz, S., Ghamisi, P., Tolosana-Delgado, R., Kirsch, M., Gloaguen, R., & Heizmann, M. (2024). Tinto: Multisensor Benchmark for 3-D Hyperspectral Point Cloud Segmentation in the Geosciences. IEEE Transactions on Geoscience and Remote Sensing, 62, 1–15. https://doi.org/10.1109/TGRS.2023.3340293
Bihler, M., Roming, L., Jiang, Y., Afifi, A. J., Aderhold, J., Čibiraitė-Lukenskienė, D., Lorenz, S., Gloaguen, R., Gruna, R., & Heizmann, M. (2023). Multi-sensor data fusion using deep learning for bulky waste image classification. Automated Visual Inspection and Machine Vision V, 12623, 69–82. https://doi.org/10.1117/12.2673838
Thiele, S., Afifi, A. J., Lorenz, S., Tolosana-Delgado, R., Kirsch, M., Ghamisi, P., & Gloaguen, R. (2023). LithoNet: A benchmark dataset for machine learning with digital outcrops (No. EGU23-14007). Copernicus Meetings. https://doi.org/10.5194/egusphere-egu23-14007
HIF-EXPLO. (2022). hifexplo/hylite. https://github.com/hifexplo/hylite (Original work published 2020)
Schambach, M., Shi, J., & Heizmann, M. (2021). Spectral Reconstruction and Disparity from Spatio-Spectrally Coded Light Fields via Multi-Task Deep Learning. 2021 International Conference on 3D Vision (3DV), 186–196. https://doi.org/10.1109/3DV53792.2021.00029
Thiele, S., Lorenz, S., Bnoulkacem, Z., Kirsch, M., & Gloaguen, R. (2021). Hyperspectral mineral mapping of cliffs using a UAV mounted Hyspex Mjolnir VNIR-SWIR sensor. 2021, 1–3. https://doi.org/10.3997/2214-4609.2021629011
Thiele, S. T., Lorenz, S., Kirsch, M., Cecilia Contreras Acosta, I., Tusa, L., Herrmann, E., Möckel, R., & Gloaguen, R. (2021). Multi-scale, multi-sensor data integration for automated 3-D geological mapping. Ore Geology Reviews, 136, 104252. https://doi.org/10.1016/j.oregeorev.2021.104252
Kirsch, M., Lorenz, S., Thiele, S., & Gloaguen, R. (2021). Characterisation of Massive Sulphide Deposits in the Iberian Pyrite Belt Based on the Integration of Digital Outcrops and Multi-Scale, Multi-Source Hyperspectral Data. 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, 126–129. https://doi.org/10.1109/IGARSS47720.2021.9554149
Schambach, M. (2021). A highly textured multispectral light field dataset. Karlsruhe Institute of Technology (KIT). https://doi.org/10.35097/500
Li, L., & Heizmann, M. (2021). 2.5D-VoteNet: Depth Map based 3D Object Detection for Real-Time Applications. The 32nd British Machine Vision Conference 2021, 1. https://publikationen.bibliothek.kit.edu/1000140306
Schambach, M., & Heizmann, M. (2020). A Multispectral Light Field Dataset and Framework for Light Field Deep Learning. IEEE Access, 8, 193492–193502. https://doi.org/10.1109/ACCESS.2020.3033056
Lorenz, S. (n.d.). Hyper 3D-AI: Artificial Intelligence for 3D multimodal point cloud classification. Retrieved July 7, 2022, from https://www.hzdr.de/publications/Publ-33167

Other projects


Image: Paul Kamm, Helmholtz-Zentrum Berlin

Avanti

X-ray tomoscopy of dynamic manufacturing processes

How can manufacturing processes in materials be imaged at the smallest scale? And how do you train an artificial intelligence to analyze these processes automatically? That is the focus of the Avanti project, which aims to improve X-ray tomoscopy – the time-resolved imaging and quantification of very fast-moving processes in three dimensions.
 

SATOMI

Tackling the segmentation and tracking challenges of growing colonies and microbial diversity

An artificial intelligence will observe the growth of bacteria: from microscope images of bacterial cultures taken at regular intervals, it will precisely track the development and division of individual cells – even when multiple bacterial species are cultivated together.
Hyperspectral data cube
Image: Aaron Christian Banze

HYPER-AMPLIFAI

Advancing Visual Foundation Models for Multi-/Hyperspectral Image Analysis in Agriculture/Forestry

The project aims to make advanced AI models accessible for hyperspectral Earth observation, reduce computational demands, and improve environmental assessments through user-friendly interfaces.