Archeological sites are delicate, remote, and often difficult to access. Low-cost techniques for environmental mapping would assist in the study and distribution of culturally significant artifacts. Robotic systems, coincidentally, require similar environmental maps for obstacle avoidance and path planning, albeit in real time.

In this project, we are applying ideas from computer vision to develop tools for high-quality environment mapping. Our goal is to compare the speed and quality of 3D models generated by camera systems with models generated by LiDAR and other established systems.

Documenting Maya Archeology:

Many Maya ruins are hidden under dense jungle. As a consequence, many sites that are not popular tourist destinations, yet are instrumental to understanding Maya culture, are seen by very few people. Items found during excavation are typically claimed for preservation in museums. Large exhibits such as temple structures, by virtue of being buildings, cannot be moved and therefore cannot be shared with a wider audience. Our goal is to change how archeological finds are shared by exploring digital methods for documentation and visualization.

The archeological documentation project has two aims: lower the cost of digital documentation by experimenting with data collection methods, and expand distribution by creating visualizations. These methods include stereo-panoramic cameras, LiDAR, and an experimental Kinect-based point cloud system. For example, both prior expeditions to Guatemala brought a ground-based LiDAR system for high-resolution scans of the large excavated temples. One method we are particularly excited about is Structure from Motion (SfM), a low-cost technique for generating 3D models from photos taken with a conventional camera.

In our 2014 expedition to the archeological site of El Zotz, we performed experiments with a FARO LiDAR scanner, Gigapan cameras, and Structure from Motion. Using these techniques we successfully mapped the interiors of two complete excavations and several smaller sites. In our 2015 expedition, we experimented with the Xbox 360 and Xbox One Kinect sensors and Google Tango tablets, and returned for more thorough LiDAR scans and Structure from Motion. The pictures on the left show the Kinect sensors in action.

The videos below demonstrate the FARO LiDAR and SfM techniques. On the left is a fly-through video created from a composite point cloud built from 50 LiDAR scans. On the right is a video of a point cloud generated with Structure from Motion, which combines many photographs of a subject to produce a 3D model. In this video, the subject is a stucco mask inside an excavated temple at the site.
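
To make the SfM step concrete, below is a minimal two-view sketch in Python using OpenCV: it matches features between two overlapping photographs, recovers the relative camera pose from the essential matrix, and triangulates a sparse point cloud. The file names and camera intrinsics are illustrative assumptions, and a real reconstruction chains this step across many views and refines it with bundle adjustment, which is omitted here.

# Minimal two-view Structure-from-Motion sketch (illustrative only).
# File names and the intrinsic matrix K are assumptions, not project data.
import cv2
import numpy as np

img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[3000.0, 0.0, 2000.0],    # assumed focal length and principal point
              [0.0, 3000.0, 1500.0],
              [0.0, 0.0, 1.0]])

# 1. Detect and match local features between the two views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]   # ratio test
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 2. Estimate the relative camera pose from the essential matrix (RANSAC).
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

# 3. Triangulate the correspondences into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
P2 = K @ np.hstack([R, t])                          # second camera from the recovered pose
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T                    # homogeneous -> Euclidean
print("Triangulated", pts3d.shape[0], "sparse points from", len(good), "matches")

Dedicated SfM packages repeat this pairwise step over hundreds of photos and jointly optimize all camera poses and points, which is what yields the dense, textured models shown in the video.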

A site report from the 2014 field season can be found here (in Spanish). Our collaborators are Steve Houston at Brown University and Thomas Garrison at the University of Southern California, along with their graduate students. This research is funded by PACUNAM, the GEOS Foundation, and the California Institute for Telecommunications and Information Technology (Calit2).

Kinect Fusion:

Embedded real-time 3D reconstruction of a scene from a low-cost depth sensor can benefit technologies in augmented reality, mobile robotics, and beyond. However, current implementations require a computer with a powerful GPU, which rules out prospective applications with low-power requirements. To enable low-power 3D reconstruction, we ported two prominent algorithms of the reconstruction pipeline (Iterative Closest Point and volumetric integration) to an Altera Stratix V FPGA using the OpenCL language and the Altera OpenCL SDK. In our paper, we present our application and an evaluation of the Altera tools in terms of performance, area, and programmability trade-offs. We verified that OpenCL can be a viable method for developing FPGA applications by modifying an open-source version of the Microsoft KinectFusion project to run partially on an FPGA.
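
As a rough illustration of the volumetric integration step, the sketch below fuses a single depth frame into a truncated signed distance function (TSDF) volume. It is written in plain NumPy for readability rather than as the OpenCL kernel used on the FPGA, and the grid resolution, truncation distance, and camera intrinsics are assumed values.

# Plain-NumPy sketch of KinectFusion-style TSDF volumetric integration.
# Grid size, truncation distance, and intrinsics are illustrative assumptions;
# the project's implementation ran this step as an OpenCL kernel instead.
import numpy as np

RES = 128                # voxels per side
VOXEL = 0.01             # 1 cm voxels -> a 1.28 m cube
TRUNC = 0.03             # truncation distance in metres
K = np.array([[525.0, 0.0, 319.5],   # typical Kinect-style intrinsics (assumed)
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])
tsdf = np.ones((RES, RES, RES), dtype=np.float32)      # truncated signed distances
weight = np.zeros((RES, RES, RES), dtype=np.float32)   # per-voxel fusion weights

def integrate(depth, cam_pose):
    """Fuse one depth frame (H x W, metres) into the global TSDF volume.
    cam_pose is the 4x4 camera-to-world transform estimated by ICP."""
    h, w = depth.shape
    # World coordinates of every voxel centre (volume anchored at the origin).
    idx = np.indices((RES, RES, RES)).reshape(3, -1).T
    centres = (idx + 0.5) * VOXEL
    # Transform voxel centres into the camera frame and project into the image.
    w2c = np.linalg.inv(cam_pose)
    cam = centres @ w2c[:3, :3].T + w2c[:3, 3]
    z = cam[:, 2]
    zsafe = np.where(z > 1e-6, z, 1.0)           # avoid dividing by zero depth
    u = np.round(cam[:, 0] * K[0, 0] / zsafe + K[0, 2]).astype(int)
    v = np.round(cam[:, 1] * K[1, 1] / zsafe + K[1, 2]).astype(int)
    ok = (z > 1e-6) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[ok] = depth[v[ok], u[ok]]
    ok &= d > 0
    # Truncated signed distance along the ray; skip voxels far behind the surface.
    sdf = np.clip((d - z) / TRUNC, -1.0, 1.0)
    upd = ok & (d - z > -TRUNC)
    flat_t, flat_w = tsdf.reshape(-1), weight.reshape(-1)
    # Running weighted average, as in the KinectFusion update rule.
    flat_t[upd] = (flat_t[upd] * flat_w[upd] + sdf[upd]) / (flat_w[upd] + 1.0)
    flat_w[upd] += 1.0

In KinectFusion, this update alternates with ICP pose estimation on every incoming depth frame, which is why these two kernels were the focus of the FPGA port.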
