3D Recording and Modelling in Archaeology and Cultural Heritage
Theory and best practices

Edited by
Fabio Remondino
Stefano Campana

BAR International Series 2598
2014
Contents

INTRODUCTION ...................................................................................... 3
M. Santana Quintero

1 ARCHAEOLOGICAL AND GEOMATIC NEEDS
1.1 3D modeling in archaeology and cultural heritage – Theory and best practice ............ 7
S. Campana
1.2 Geomatics and cultural heritage ................................................................ 13
F. Remondino
1.3 3D modelling and shape analysis in archaeology ................................................ 15
J.A. Barceló

2 LASER/LIDAR
2.1 Airborne laser scanning for archaeological prospection ........................................ 27
R. Bennet
2.2 Terrestrial optical active sensors – theory & applications .................................... 39
G. Guidi

3 PHOTOGRAMMETRY
3.1 Photogrammetry: theory ........................................................................ 65
F. Remondino
3.2 UAV: Platforms, regulations, data acquisition and processing .................................. 74
F. Remondino

4 REMOTE SENSING
4.1 Exploring archaeological landscapes with satellite imagery .................................... 91
N. Galiatzatos

5 GIS
5.1 2D GIS vs. 3D GIS theory ..................................................................... 103
G. Agugiaro

6 VIRTUAL REALITY & CYBERARCHAEOLOGY
6.1 Virtual reality, cyberarchaeology, teleimmersive archaeology ................................. 115
M. Forte
6.2 Virtual reality & cyberarchaeology – virtual museums ......................................... 130
S. Pescarin

7 CASE STUDIES
7.1 3D Data Capture, Restoration and Online Publication of Sculpture ............................ 137
B. Frischer
7.2 3D GIS for Cultural Heritage sites: The QueryArch3D prototype ............................... 145
G. Agugiaro & F. Remondino
7.3 The Use of 3D Models for Intra-Site Investigation in Archaeology ............................ 151
N. Dell’Unto
List of Figures and Tables F. Remondino: Geomatics and Cultural Heritage Figure 1. Geomatics and its related techniques and applications ......................................... 13 Figure 2. Existing Geomatics data and sensors according to the working scale and object/scene to be surveyed ..................................................................................... 14 Figure 3. Geomatics techniques for 3D data acquisition, shown according to the object/scene dimensions and complexity of the reconstructed digital model ................................................................................................................... 14
J.A. Barceló: 3D Modelling and Shape Analysis in Archaeology Table 1. Microtopography: 3D surface texture .................................................................... 19
R. Bennet: Airborne Laser Scanning for Archaeological Prospection Figure 1. Demonstration and example of the zig-zag point distribution when ALS data are collected using an oscillating sensor ............................................... 28 Figure 2. Schematic of the key components of the ALS system that enable accurate measurement of height and location ................................................................. 28 Figure 3. Schematic illustrating the differences in data recorded by full-waveform and pulse echo ALS sensors ........................................................................................... 29 Figure 4. An example of “orange peel” patterning caused by uncorrected point heights at the edges of swaths. The overlay demonstrates uncorrected data which in the red overlap zones appears speckled and uneven compared with the same areas in the corrected (underlying) model ............................................... 29 Figure 5. An example of classification of points based on return which forms the most basic method to filter non-terrain points from the DSM .................................. 30 Figure 6. Two examples of common interpolation techniques: IDW (left) and Bicubic Spline (right) ............................................................................................... 31 Figure 7. Comparison of visualisation techniques mentioned in this chapter ...................... 32 Figure 8. Angle and Illumination of a shaded relief model .................................................. 33 Figure 9. Different angles of illumination highlighting different archaeological features..................................................................................... 34
G. Guidi: Terrestrial optical active sensors – theory & applications Figure 1. Triangulation principle: a) xz view of a triangulation-based distance measurement through a laser beam inclined at an angle α with respect to the reference system, impinging on the surface to be measured. The light source is at distance b from the optical centre of an image capturing device equipped with a lens with focal length f; b) evaluation of xA and zA ................ 40 Figure 2. Acquisition of coordinates along a profile generated by a sheet of laser light. In a 3D laser scanner this profile is mechanically moved in order to probe an entire area ................ 41 Figure 3. Acquisition of coordinates along different profiles generated by multiple sheets of white light ................ 42 Figure 4. Acquisition of coordinates of the point A through the a priori knowledge of the angle α, and the measurement of the distance ρ through the Time Of Flight of a light pulse from the sensor to the object and back ................ 43 Figure 5. Exemplification of the accuracy and precision concepts. The target has been used by three different shooters. The shooter A is precise but not accurate, B is more accurate than A but less precise (more spreading), C is both accurate and precise ................ 46 Figure 6. ICP alignment process: a) selection of corresponding points on two partially superimposed range maps; b) rough pre-alignment; c) accurate alignment after a few iterations ................ 48 Figure 7. Mesh generation: a) set of ICP aligned range maps. Different colours indicate the individual range maps; b) merge of all range maps in a single polygonal mesh ................ 48 Figure 8. Mesh optimization: a) mesh with polygon sizes given by the range sensor resolution set-up (520,000 triangles); b) mesh simplified in order to keep the difference with the unsimplified one below 50 μm. The polygon sizes vary dynamically according to the surface curvature and the mesh size drops down to 90,000 triangles ................ 49 Figure 9. Structure of the G Group of temples in My Son: a) map of the G area drawn by the archaeologist Parmentier in the early 20th century (Stern, 1942); b) fisheye image taken from above during the 2011 survey. The ruins of the mandapa (G3) are visible in the upper part of the image, the posa (G5) on the right, the gopura (G2) in the center, and the footprint of the holy wall all around ................ 52 Figure 10. Handmade structures arranged on the field by local workers for locating the laser scanner in the appropriate positions: a) mounting the platform on the top of the structure surrounding the Kalan; b) laser scanner located on the platform at 7 meters above the ruins; c) multi-section ladder for reaching the platform; d) structure for elevating the scanner at 3 m from ground. During 3D acquisition the operator lies in the blind cone below the scanner in order to avoid the laser beam trajectory ................ 53 Figure 11. Map of the hill where the G Group is located within the My Son area, with the scanner positions for acquiring the different structures highlighted by colored dots ................ 54 Figure 12. Sculpted tympanum representing Krishna dancing on the snakes, originally at the entrance of the kalan: a) 3D laser scanning in the “store room” of the museum; b) reality-based model from the 3D data ................ 54 Figure 13. High resolution capture of the Foundation stone through SFM: a) texturized 3D model measured through a sequence of 24 images shot around the artifact; b) mesh model of the central part of the stone with a small area highlighted in red; c) color-coded deviations of the SFM acquired points from a best-fitting plane calculated on the red area of b), clearly showing the nearly 2 mm carving on the stone ................ 55 Figure 14. Tangential edge error in 3D point clouds: the red points represent the incorrect data with respect to the real ones (black-grey color) ................ 56 Figure 15. a) Point cloud model of the Kalan cleaned and aligned in the same reference system; b) polygonal model of the Kalan with a decimated and watertight mesh ................ 56 Figure 16. Reality-based models of all ruins in the G group obtained from 3D data generated by a laser scanner at 1 cm resolution and texturized with the actual images of the buildings: a) G1, the main temple; b) G2, the entrance portal to the holy area; c) G3, the assembly hall; d) G4, the south building; e) G5, the kiosk of the foundation stone ................ 57 Figure 17. Reality-based models of eight of the 21 decorations found during the G Group excavations and acquired in the My Son museum. All these decorations have been acquired with a sampling step between 1 and 2 mm, and post-processed in order to strongly reduce the significant measurement noise but not the tiniest details of their shapes. The visual representation in this rendering has been made with a seamless texture ................ 58 Figure 18. Virtual reconstruction of the G Group and its surrounding panorama starting from the reality-based models acquired through laser scanning and digital images ................ 59 Table 1. Laser scanner configurations planned for 3D data acquisition ................ 51 Table 2. Number of point clouds acquired at different resolution levels (first three columns), and total number of 3D points acquired during the whole 3D survey of the G Group and the related decorations (last column) ................ 55
F. Remondino: Photogrammetry: theory Figure 1. The collinearity principle established between the camera projection center, a point in the image and the corresponding point in the object space (left). The multi-image concept, where the 3D object can be reconstructed using multiple collinearity rays between corresponding image points (right) ........................................................................................................ 66 Figure 2. A typical terrestrial image network acquired ad-hoc for a camera calibration procedure, with convergent and rotated images (a). A set of terrestrial images acquired ad-hoc for a 3D reconstruction purpose (b) ..................................................... 68 Figure 3. Radial (a) and decentering (b) distortion profiles for a digital camera set at different focal lengths ................................................................................................. 69 Figure 4. 3D reconstruction of architectural structures with manual measurements in order to generate a simple 3D model with the main geometrical features (a). Dense 3D reconstruction via automated image matching (b). Digital Surface Model (DSM) generation from satellite imagery (Geo-Eye stereo-pair) for 3D landscape visualization (c) ........................................................................................ 71 Figure 5. 3D reconstruction from images: according to the project needs and requirements, sparse or dense point clouds can be derived...................................... 72 Table 1: Photogrammetric procedures for calibration, orientation and point positioning ...................................................................................................... 68
F. Remondino: UAV: Platforms, regulations, data acquisition and processing Figure 1. Available Geomatics techniques, sensors and platforms for 3D recording purposes, according to the scene’ dimensions and complexity ...................................... 75 Figure 2. Typical acquisition and processing pipeline for UAV images .............................. 77 Figure 3. Different modalities of the flight execution delivering different image block’s quality: a) manual mode and image acquisition with a scheduled interval; b) low-cost navigation system with possible waypoints but irregular image overlap; c) automated flying and acquisition mode achieved with a high quality navigation system .................................................................................... 78 Figure 4. Orientation results of an aerial block over a flat area of ca 10 km (a). The derived camera poses are shown in red/green, while color dots are the 3D object points on the ground. The absence of ground constraint (b) can led to a wrong solution of the computed 3D shape (i.e. ground deformation). The more rigorous approach, based on GCPs used as observations in the
bundle solution (c), deliver the correct 3D shape of the surveyed scene, i.e. a flat terrain .................................................................................................................... 79 Figure 5. Orientation results of an aerial block over a flat area of ca 10 km (a). The derived camera poses are shown in red/green, while color dots are the 3D object points on the ground. The absence of ground constraint (b) can led to a wrong solution of the computed 3D shape (i.e. ground deformation). The more rigorous approach, based on GCPs used as observations in the bundle solution (c), deliver the correct 3D shape of the surveyed scene, i.e. a flat terrain .................................................................................................................... 80 Figure 6. A mosaic view of the excavation area in Pava (Siena, Italy) surveyed with UAV images for volume excavation computation and GIS applications (a). The derived DSM shown as shaded (b) and textured mode (c) and the produced ortho-image (d) [75]. If multi-temporal images are available, DSM differences can be computed for volume exaction estimation (e) ..................................................... 81 Figure 7. A mosaic over an urban area in Bandung, Indonesia (a). Visualization of the bundle adjustment results (b) of the large UAV block (ca 270 images) and a close view of the produced DSM over the urban area, shown as point cloud (c, d) and shaded mode (e) .................................................................................... 82 Figure 8. Approximate time effort in a typical UAV-based photogrammetric workflow ........................................................................................................................ 83 Table 1. Evaluation of some UAV platforms employed for Geomatics applications, according to the literature and the authors’ experience. The evaluation is from 1 (low) to 5 (high) .................................................................................................. 76
N. Galiatzatos: Exploring archaeological landscapes with satellite imagery Figure 1. Illustration of the spatial resolution property ........................................................ 92 Figure 2. The high radiometric resolution of IKONOS-2 (11-bit) allows for better visibility at the shadows of the clouds .................................................................. 93 Figure 3. The left part displays the spectral resolution of different satellites. The right part illustrates the spectral signature from the point of view of hyperspectral, multispectral and panchromatic images respectively .......................... 94 Figure 4. Illustration of the different spatial coverage or swath width (nominal values in parenthesis) ...................................................................................... 94 Figure 5. Classical and modern geospatial information system ........................................... 98 Table 1. Landsat processing levels as provided ................................................................... 94 Table 2. Description of error sources ................................................................................... 96
G. Agugiaro: 2D & 3D GIS and Web-Based Visualization Figure 1. Example of relational model: two tables (here: countries and cities) are depicted schematically (top). Attribute names and data types are listed for each table. The black arrow represents the relation existing between them. Data contained in the two tables is presented in the bottom left, and the result of a possible query in the bottom right. The link between the two tables is realized by means of the country_id columns .............................................................. 104 Figure 2. Raster (top) and vector (bottom) representation of point, lines and polygon features in a GIS ...................................................................................... 105 Figure 3. Qualitative examples of different interpolation algorithms starting from the same input (left). Surface interpolated using an Inverse Distance Weighting interpolator (center) and a Spline with Tension interpolator (right) ............................. 108 Figure 4. Examples of network analyses. A road network (upper left), in which 5 possible destinations are represented by black dots, can be represented according to the average speed typical for each roadway (upper right),
where decreasing average speeds are represented in dark green, light green, yellow, orange and red, respectively. The shortest route, considering distance, connecting all 5 destinations is depicted in blue (bottom left), while the shortest route, in terms of time, is depicted in violet (bottom right). These examples are based on the Spearfish dataset available for Grass GIS .......................... 109 Figure 5. Examples of visualization of GIS data. A raster image (orthophoto) and a vector dataset (building footprints) are visualized in 2D (left). A 3D visualization of the extruded buildings draped onto the DTM ............................ 110 Figure 6. Example of Web-based geodata publication in 3D: by means of virtual globes, as in Google Earth, or in the case of the Heidelberg 3D project ...................... 111
M. Forte: Virtual reality, cyberarchaeology, teleimmersive archaeology Figure 1. Digital Hermeneutic Circle ................................................................................. 116 Figure 2. Domains of digital knowledge ............................................................................ 116 Figure 3. 3D-Digging Project at Çatalhöyük ...................................................................... 120 Figure 4. Teleimmersion System in Archaeology (UC Merced, UC Berkeley) ................. 121 Figure 5. Video capturing system for teleimmersive archaeology ..................................... 121 Figure 6. A Teleimmersive work session ........................................................................... 122 Figure 7. Building 77 at Çatalhöyük: the teleimmersive session shows the spatial integration of shape files (layers, units and artifacts) in the 3D model recorded by laser scanning .......................................................................................................... 122 Figure 8. 3D Interaction with Wii in the teleimmersive system: building 77, Çatalhöyük.................................................................................................................... 123 Figure 9. Clouds of points by time of phase scanner (Trimble FX) at Çatalhöyük: building 77 ........................................................................................... 123 Figure 10. Image modeling of the building 89 at Çatalhöyük ............................................ 124 Figure 11. Image modeling of the building 77 at Çatalhöyük ............................................ 124 Figure 12. 3D layers and microstratigraphy in the teleimmersive system: midden layers at Çatalhöyük. This area was recorded by optical scanner ................................ 125 Figure 13. Virtual stratigraphy of the building 89, Çatalhöyük: all the layers recorded by time of phase laser scanner (Trimble FX) ................................................. 125 Figure 14. Building 77 reconstructed by image modeling (Photoscan). In detailhand wall painting and painted calf's head above niche .................................. 125 Figure 15. Building 77 after the removal of the painted calf’s head. The 3D recording by image modeling allows to reconstruct the entire sequence of decoration ................................................................................. 125 Figure 16. Building 77: all the 3D layers with paintings visualized in transparency ......... 125 Table 1................................................................................................................................ 127
S. Pescarin: Virtual reality & cyberarchaeology – virtual museums Figure 1. The virtual museum of Scrovegni chapel (Padova, IT, 2003-): the VR installation at the Civic Museum and the cybermap which is part of the VR application ........................................................................................... 132 Figure 2. Aquae Patavinae VR presented at Archeovirtual 2011 (www.archeovirtual.it): natural interaction through the web .............................................................................. 133 Figure 3. 3D reconstruction parts of the project “Matera: tales of a city” with a view of the same place in different historical periods........................................ 133 Figure 4. Immersive room with Apa stereo movie inside the new museum of the city of Bologna ................................................................................................... 134
B. Frischer: 3D Data Capture, Restoration and Online Publication of Sculpture Figure 1. View of the DSP’s reconstruction of the statue group of Marsyas, olea, ficus, et vitis in the Roman Forum ....................................................................... 139 Figure 2. “Alexander,” plaster cast (left) and original marble (right) of the torso; front view ................................................................................................. 141 Figure 3. “Alexander,” digital model of the cast (left) and of the original (right) of the torso; front view ................................................................................................. 142 Figure 4. Tolerance-Based Pass/Fail test of the digital models of the cast and original torso of “Alexander” in Dresden. Green indicates that the two models differ by less than ± 1 mm. Red indicates areas where the difference between the models exceeds ± 1 mm ............................................................................................. 142 Figure 5. Error Map comparing the digital models of the cast and original torso of the Dresden “Alexander” ......................................................................................... 142
G. Agugiaro & F. Remondino: 3D GIS for Cultural Heritage sites: The QueryArch3D prototype Figure 6. Different levels of detail (LoD) in the QueryArch3D tool. Clockwise from top-left: LoD1 of a temple with prismatic geometries, LoD2 with more detailed models (only exterior walls), LoD3 with interior walls/rooms and some simplified reality-based elements, LoD4 with high-resolution reality-based models ..................................................................................................... 147 Figure 7. Different visualization modes in QueryArch3D: aerial view (a, b), walkthrough mode (c) and detail view (d). Data can be queried according to attributes (a) or by clicking on the chosen geometry (b, c, d). The amount of information shown depends on the LoD: in (b), attributes about the whole temple are shown, in (c) only a subpart of the temple, and the corresponding attributes, are shown ................................................................. 148
N. Dell’Unto: The Use of 3D Models for Intra-Site Investigation in Archaeology Figure 1. This image presents an example of a 3D model acquired during an investigation campaign in Uppåkra (Summer 2011). The model has been realised using Agisoft Photoscan and visualised through MeshLab .................... 152 Figure 2. This image shows the three steps performed by the software (i.e., Photoscan and Agisoft) to calculate the 3D model for the rectangular area excavated in 2011 during the investigation of a Neolithic grave in Uppåkra: (a) camera position calculations, (b) geometry creation, and (c) map projection .................................................................................................. 154 Figure 3. This image shows the investigation area that was selected in 2010 to test the efficiency of the Computer Vision techniques during an archaeological excavation in Uppåkra. The upper part of the image presents (A) a model created during the excavation overlapped with the graphic documentation created during the investigation campaign. The lower part of the image presents (B) an example of models organised in a temporal sequence ......................... 155 Figure 4. This image shows two models of the excavation that were created at different times during the investigation campaign. In the first model, (a) the circular ditch is visible only in the Northwest rectangular area. The second model shows (b) how the results of the archaeological investigation allowed for the discovery of a ditch that was in the Southeast rectangular area .................................................................................. 155 Figure 5. This image shows part of the 3D models that were created during the excavation of a grave, organised in a temporal sequence ....................................... 156 Figure 6. This image shows the integration of the 3D models into the GIS. ArcScene only imported models smaller than 34,000 polygons ........................................ 157
3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE – THEORY AND BEST PRACTICES
INTRODUCTION
Mario SANTANA QUINTERO
INTRODUCTION

Digitally capturing cultural heritage resources has nowadays become a common practice. Recording the physical characteristics of historic structures, archaeological sites and landscapes is a cornerstone of their conservation, whether that means actively maintaining them or making a record for posterity. The information produced by such activity potentially guides decision-making by property owners, site managers, public officials, and conservators around the world, as well as presenting the historic knowledge and values of these resources. Rigorous documentation may also serve a broader purpose: over time, it becomes the primary means by which scholars and the public apprehend a site that has since changed radically or disappeared.

A good selection and application of recording and documentation tools is assured when preparing a comprehensive approach derived from the needs of the site baseline. This base information set should take into consideration the indicators defined by assessing the state of conservation and statement of significance of the heritage place.

A holistic approach to understanding the significance is essential for safeguarding cultural heritage properties; equally important is the appropriate assessment of their state of conservation, taking into consideration the potential degree of vulnerability to cumulative and/or drastic risks/threats to their integrity. This is very relevant when a digital record of a site is being prepared. As evidenced by the most recent events, heritage places are constantly threatened by environmental calamities (earthquakes, tsunamis, inundations, etc.), indiscriminate development of infrastructure, as well as armed conflicts, weathering, and pure vandalism.

The rapid rise in new digital technologies has revolutionized the practice of recording heritage places. Digital tools and media offer a myriad of new opportunities for collecting, analyzing and disseminating information; with these new opportunities there are also conflicts and constraints, involving the fragmentation, longevity and reliability of information, as well as the threat of generating digital representations that might falsify instead of simplifying the understanding of our heritage. Furthermore, a record can be used for promotion leading to participation, increasing the knowledge about a heritage place. It can be a tool for promoting the participation of society in its conservation, a tool for ‘cultural tourism and regional development’ [2].

Moreover, increasing the knowledge of the relevant heritage places in a region can lead to their inclusion in inventories and other legal instruments that can eventually prevent their destruction, and helps in combating ‘the theft of and illicit traffic in cultural property on a global scale’ [2].

In this context, the ICOMOS International Scientific Committee on Heritage Documentation (CIPA) has endeavoured for over 40 years to organize venues for reflection, exchange and dissemination of research and projects in the field of documentation of cultural heritage. The contribution to the field has been substantial and can be consulted on http://cipa.icomos.org (last accessed 20/05/2011). With the support of CIPA this book provides a guideline for appropriate training in three-dimensional capturing and dissemination techniques.

BASIC ENGAGEMENT RULES WHEN RECORDING HERITAGE PLACES

Recording for conservation of heritage places is a careful process that requires following these rules:

- Nothing is straight, square or horizontal;
- Record from the wide (big) to the small (fault theory);
- For conservation, record the as-built condition: record only what you see (distinguish between what you see and assumptions deduced from the “logical” way of the fabric);
- Create a BASIS and CONTROL system;
- Record and provide provenance information.

MAKING “BASELINE RECORDS” FOR CONSERVATION

A “baseline record” is the product of any recording and documenting project when studying cultural resources. The structure, specification (metadata), quality and extent of this record should follow internationally recognized standards and should provide relevant, timely and sufficient information to protect the resource.

This record, additionally, could be used as a starting point for designing and implementing a monitoring strategy, allowing the detection of changes affecting the statement of significance of the heritage place. A baseline is defined by both a site report and a dossier of measured representations that could include a site plan, emplacement plan, plans of features, sections, elevations, three-dimensional models, etc.

In order to identify the extent of field recording necessary, it is important to prepare documentary research to review and identify gaps in the existing information (documentation) on the site. This first assessment will allow an estimate of the degree of additional recording work required to prepare an adequate set of documents for the mapped indicators.

The following checklist can be used as a guideline to the minimum requirements of information needed to define the baseline:

- Identify site location (centroid, boundaries, elements and buffer zone);
- Identify and map evidence of criteria;
- Relative chronology and history of the resources;
- Significance and integrity assessment;
- Risk assessment: threats and hazards associated with indicators;
- Administrative and management issues (current and past mitigations);
- Other assessments.

Recorded information is required to be timely, relevant and precise. It should provide a “clear understanding” of the fabric’s condition and materials, as well as the property’s chronology of modifications and alterations over its extended lifespan. Documenting and recording these issues, along with assessing the degree and type of “risks”, is therefore an essential part of the property’s understanding, conservation and management.

FINAL REMARKS

The heritage recorders should bear in mind that it is crucial to provide a measured dataset of representations that truly presents the actual state of conservation of the property.

- The rapid rise in new digital technologies has revolutionized the way our built heritage is recorded; with these new opportunities there are also conflicts and challenges, especially in guaranteeing the scientific correctness and reliability of the information used to record and document historic buildings;
- A holistic approach, centred on the relevance of information to understanding the significance, integrity and threats to our built heritage, is of paramount importance;
- Values is a crucial concept in defining the extent of, and effectively capturing and disseminating, knowledge of heritage places.

References

1. CLARK, Catherine M. 2001. Informed Conservation: Understanding Historic Buildings and their Landscapes for Conservation. London: English Heritage.
2. Council of Europe 2009. ‘Guidance on Inventory and Documentation of the Cultural Heritage’.
3. EPPICH, E.; CHABBI, A. (eds.) 2007. Illustrated Examples – Recording, Documentation, and Information Management for the Conservation of Heritage Places. The Getty Conservation Institute, J. Paul Getty Trust.
4. LETELLIER, R.; SCHMID, W.; LEBLANC, F. 2007. Guiding Principles – Recording, Documentation, and Information Management for the Conservation of Heritage Places. The Getty Conservation Institute, J. Paul Getty Trust.
5. MATERO, Frank G. 2003. “Managing Change: The Role of Documentation and Condition Survey at Mesa Verde National Park”, Journal of the American Institute for Conservation (JAIC), 42, pp. 39-58.
6. STOVEL, H. 1998. Risk Preparedness: A Management Manual for World Cultural Heritage. ICCROM.
7. UNESCO 2010. The World Heritage Resource Manual: Managing Disaster Risks for World Heritage. ICCROM.
1 ARCHAEOLOGICAL AND GEOMATIC NEEDS
1.1 3D MODELING IN ARCHAEOLOGY AND CULTURAL HERITAGE – THEORY AND BEST PRACTICE

S. CAMPANA
1.1 ARCHAEOLOGICAL NEEDS

Paul Cezanne often commented that “choses vivre s’ils ont un volume” and again “la nature n’est pas en surface; elle est en profondeur”. This maxim applies equally forcefully within archaeology, across the full spectrum from schools of thought dominated by art-historical approaches to those which focus primarily upon ‘context’ and on the study of cultural material through insights drawn from anthropology and ethnography. The products of human endeavour – objects, structures and landscapes – all have volume and are therefore capable of description in three spatial dimensions – as well, of course, as in terms of their historical derivation.

The concepts of volume and of the third dimension are not recent discoveries. Rather, they constitute an archaeological component which has from the outset been recognised as fundamental to the discipline, expressed through documentation such as excavation plans, perspective drawings, maps and the like (Renfrew, Bahn 2012). But it is also true that three-dimensionality has in general been represented in a ‘non-measuring’ mode, predominantly through the development of a language of symbols. In essence the reasons for this practice, common to the present day, lie in the technology and instrumentation available at the time. The forms of graphical documentation which have underpinned archaeology throughout the greater part of its life can be reduced in essence to maps, excavation drawings, matrices and photographs. All of these rely on methods of presentation that are essentially two-dimensional. All archaeologists – including those still ‘in embryo’ – have been and still are being educated to reduce and then to represent three-dimensional archaeological information in two dimensions. This practice should not be decried or under-valued, nor should it be seen as a banal response to the absence of alternative technical solutions. We are dealing here with a complex process, the first requirement of which is the acquisition of insights into the cultural articulation of historical and archaeological contexts, while at the same time relying on a variety of methodological and technical skills. Only through a wide-ranging training in the development and maturing of the researcher’s critical faculties is it possible to negotiate the transition from three-dimensional reality to graphical or photographic representation in two dimensions. In essence survey and documentation, as well as photography, present not an alternative to reality but an interpretation of reality, whether it be of an object, a context or a landscape. This has been neatly expressed by Gregory Bateson (1979) when he wrote that “THE MAP IS NOT THE TERRITORY, AND THE NAME IS NOT THE THING NAMED”. Naturally, a good interpretation relies on a clear understanding of the object itself, and of its essential characteristics. These form essential preliminaries to a fuller understanding of the object itself and in the best case a general improvement of the subject. We are clearly dealing here with a process that is both essential to and irreplaceable in the practice of archaeological research.

In the last two decades the rapid growth in available technological support and the expanding field of their application has produced new opportunities which challenge traditional frames of reference that have remained virtually unchanged over time. In particular, laser scanning and digital photogrammetry (whether terrestrial or airborne) have an extraordinary potential for promoting a revolution in the documentation and recording of archaeological evidence and in its subsequent dissemination. But the availability of new instruments, however revolutionary in their potential impact, is not in itself sufficient cause to speak of a revolution in the field of archaeology generally. To play an active role in such advances a technique must be developed in such a way as to answer to the real needs of the archaeologist. The full and proper development of a technique should allow the formulation of innovative procedures that match the needs of their field of application, facilitating the framing of new paradigms, new standards and therefore new methods of achieving
real advances in archaeological understanding. Today, the use of 3D documentation systems, the creation of complex models which can be navigated and measured, and procedures for data handling and management present us almost daily with new challenges on questions such as measurement, documentation, interpretation and mapping. We must never forget that we are speaking here about the pursuit of archaeology, in which documentation is inseparably bound up with the processes of understanding and interpretation. Graphical documentation often represents the most appropriate means of explaining and communicating the complexity of the archaeological evidence, inter-relationships or contexts being described.

These thoughts will hopefully provide a basic frame of reference within which we can consider the introduction of innovative systems for 3D recording within archaeology. That said, it is time to discuss some of the problems that have emerged in the course of the last couple of decades. In their early history the significance and the role of laser scanning and photogrammetric recording in archaeology were complicated by a number of misunderstandings and ambiguities. It may be useful to start by considering the experience of pioneering applications which placed a high emphasis on objectivity and the faithful representation of stratigraphical units, all too often ignoring the central dictum that the main challenge within any excavation – in itself a process that cannot be repeated – consists not of objective documentation of stratigraphical units but at root in the definition and interpretation of those units. In addition, excavation recording does not deal only with the relationship of stratigraphical units in terms of their volumes but also with such things as the consistency and composition of the strata and their chemical and physical characteristics. All of these are elements which can themselves change under the influence of variations in environmental conditions such as temperature, humidity and lighting, and last but not least in response to the skill and experience of the excavator. 3D documentation makes it possible to create an objective record of some aspects, such as the volume and texture of stratigraphical units, that have been defined or influenced by the necessarily subjective interpretations made by the excavator. For this very reason the adoption of 3D recording does not in itself transform the process into an objective or ‘neutral’ procedure, since the process of observation and hence of understanding cannot by its very nature be other than subjective. That said, it is undeniable that the essentially destructive and unrepeatable nature of excavation makes it imperative to employ recording systems that are as sophisticated and accurate as possible at the time concerned. In the context of the present day the most relevant techniques in this respect are undoubtedly photogrammetry and laser scanning.

Ambiguities have also emerged in the recording of historical buildings and field monuments. After the initial euphoria generated by the possibility of documenting large structures in three dimensions, in particular through laser scanning, there has been a progressive cooling of enthusiasm for this technique because of poor understanding of the functions of point-clouds, meshes and 3D models in general. In this case, too, 3D recording must be placed in the context of the process of understanding and documenting the structures concerned. The situation for the recording of buildings is in fact exactly analogous to that described above for archaeological excavations – the process relies above all on the ‘reading’ and interpretation of the structural elements and related characteristics of the monuments under survey. In a recent manual on survey and recording Bertocci and Bini (2012) noted in their introduction how “a good survey engages with the history of the building, identifying the chronological phases, pointing out variations of technique, underlining stratigraphical relationships, noting anomalies, clarifying engineering choices and summarising in the final documentation the forms, colours, state of preservation and quality of the materials used in the building’s construction”. Thus the resulting record represents a synthesis of measurement combined with ‘reading’ and interpretation of the structure and its development over time. It is evident that if one makes use only of measurement, however accurate and detailed – as in the case of the point clouds produced by laser scanning or photogrammetry – the absence of ‘reading’ and interpretation irrevocably limits the results.

Equally central to the problems that have arisen in the first application of 3D recording, whether of buildings or of excavations, lies the misleading idea that 3D recording can act as a substitute for traditional methods of documentation. It may be useful here to draw a parallel with photography. When the possibility of capturing and using photographs, whether aerial or terrestrial, in archaeological work was first proposed, the photographs did not replace traditional landscape or excavation recording but rather complemented them, adding a new form of documentation which in its turn required interpretation and sometimes graphical representation of the archaeological information present in the photographs. 3D documentation presents an innovative means of executing and representing measurements taken from archaeological sites, objects or contexts. It makes possible the acquisition of an extraordinary amount of positional data and measurements, the density of which is conditioned by the scanning density (for example one point every three mm, etc.) or the resolution of the camera sensor (in both cases the final criterion being the distance from the object). These factors must be matched to the characteristics of the object or context being documented. But as with a photograph, the point cloud produced by these methods is an intermediate document between reality and its conceptual representation (limited, of course, to those elements of reality that can be described in three dimensions). It is certainly true, however, that the aim of the recording work, which traditionally focuses on taking measurements in the field, can in the case of laser scanning and digital photogrammetry be re-allocated to a later stage, reducing the amount of time spent on this process in the field.
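The trade-off just described – sampling density governed by the scanner step or the camera resolution, with distance to the object as the common factor – can be illustrated with two textbook relations. The sketch below is illustrative only: it assumes a simple pinhole camera model and a constant angular step for the scanner, and all numeric values are hypothetical rather than taken from this chapter.

```python
# Illustrative sketch: how acquisition distance drives point/pixel spacing.
# Assumes a pinhole camera (GSD = pixel_pitch * distance / focal_length)
# and a laser scanner with a fixed angular step; values are hypothetical.
import math

def ground_sample_distance(pixel_pitch_mm: float, focal_length_mm: float,
                           distance_m: float) -> float:
    """Footprint of one pixel on the object (metres) for a pinhole camera."""
    return pixel_pitch_mm / focal_length_mm * distance_m

def scanner_point_spacing(angular_step_deg: float, distance_m: float) -> float:
    """Approximate spacing between neighbouring scan points (metres)."""
    return math.radians(angular_step_deg) * distance_m

if __name__ == "__main__":
    # A 0.005 mm pixel pitch and 50 mm lens at 10 m -> about 1 mm per pixel.
    print(f"GSD: {ground_sample_distance(0.005, 50, 10) * 1000:.1f} mm")
    # A 0.02 degree angular step at 10 m -> about 3.5 mm between points.
    print(f"Scan spacing: {scanner_point_spacing(0.02, 10) * 1000:.1f} mm")
```

Doubling the distance doubles both spacings, which is why the text treats distance from the object as the final criterion in either case.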
In this connection we need to make a distinction of scale – albeit macroscopic – for the various applications, distinguishing between sites, landscapes and individual objects (see Barcelò elsewhere in this volume). Though there are numerous 3D techniques that can be applied at the landscape scale we will limit our discussion at this stage to some comments on photogrammetry, LiDAR and spatial analysis.

Amongst the requirements of landscape archaeologists the detailed understanding of morphology occupies a role of central importance. From the very outset cartography has provided an essential underpinning for every aspect of archaeological research, from the initial planning, through the fieldwork, to the documentation of the observations made (in terms of their position) and finally to archaeological interpretation and publication. The first use of photogrammetry for archaeology was undertaken in 1956 in Italy by Castagnoli and Schmiedt in a study of the townscape of Norba (Castagnoli, Schmiedt 1957). This method of research and documentation continues to represent a fundamental instrument for the understanding and documentation of complex contexts where the required level of detail is very high and the need for precision is therefore increased. Over the years Italy has seen many aerophotogrammetry projects across a wide range of cultural contexts in terms of history, morphology and topography: one might think for example of Heraclea, Serra di Vaglio, Ugento, Vaste, Cavallino, Rocavecchia, Arpi and Veio. Amongst the needs that can be met by this form of archaeological analysis and documentation there should be mentioned the opportunity to explore the landscape in three dimensions, achieving a complex cartographic product consisting not only – as applies in typically two-dimensional GIS systems – of the previously exclusive vectorisation of archaeological features but most of all in the recording and representation of the topographical characteristics of the landscape within which they sit. In this way the landscape context is represented in its most up-to-date and systematic interpretation of the term, as an interaction between human activity and the environmental context, and not in any way as something that can be separated from the monuments, structures and the ‘connective tissue’ of sites, field systems, communication routes etc. that sit within it. Recent developments in digital photogrammetry, and an overall lowering of costs, have helped to promote access to instrumentation that was previously limited in its availability. In reality, however, this is only partially the case, since digitally-based cartographic restitution remains a highly specialised activity undertaken by a relatively small number of highly skilled specialists.

Archaeologists have long been aware that the presence of archaeological features can be revealed by relatively modest variations in the morphology of the ground surface. Ever since the beginnings of archaeological air photography researchers have made use of ‘raking’ (oblique) lighting to emphasise the shadows cast by small variations in the surface morphology. This diagnostic technique, widely used to the present day, is however subject to limitations imposed by the brevity of the ‘windows of opportunity’ within which it works effectively and by the rigidity of the resulting documentation. Digital photogrammetry, and above all LiDAR, can to a large extent offset these limitations, offering opportunities not previously available. LiDAR measures the relative elevation of the ground surface and of features such as trees and buildings upon it across large areas of landscape with a resolution and accuracy hitherto unattainable except through labour-intensive field survey or photogrammetry. At a conference in 2003 Robert Bewley, then Head of English Heritage’s Aerial Survey Unit, argued that “the introduction of LiDAR is probably the most significant development for archaeological remote sensing since the invention of photography” (Bewley 2005). Over the years since then LiDAR applications have been developed widely around Europe and particularly in the UK, Austria, France, Germany, Norway and Italy (Cowley, Opitz 2012). Currently the principal advantage of LiDAR for archaeologists is its capacity to provide a high-resolution digital elevation model (DEM) of the landscape that can reveal microtopography which is virtually indistinguishable at ground level because of erosion by ploughing or other agencies. Techniques have been developed for the digital removal of ‘modern’ elements such as trees and buildings so as to produce a digital terrain model (DTM) of the actual ground surface, complete with any remaining traces of past human activity. An extremely important characteristic of LiDAR is its ability to ‘penetrate’ woodland or forest cover so as to reveal features that are not distinguishable through traditional prospection methods or that are difficult to reach for ground-based survey (as, for instance, in work at Leitha Mountain, Austria, described in Doneus, Briese 2006). There have been other notable applications at Elverum in Norway (Risbøl et al. 2006), Rastatt in Germany (Sittler, Schellberg 2006), in the Stonehenge landscape and at other locations in the UK (Bewley et al. 2005; Devereux et al. 2005) and, returning to America, at Caracol in Belize (Weishampel et al. 2010). Currently the cutting edge of LiDAR applications in archaeology is represented by the use of a helicopter as the imaging platform, allowing slower and lower flight paths and use of the technique’s multiple-return features with ultra-high frequency, enabling much higher ground resolution. Densities of up to 60 pts/m² (about 10 cm resolution) can be obtained by these methods, permitting the recording of micro-topographic variations even where the remains of archaeological features are severely degraded. When used in combination with the multiple-return facility of the LiDAR pulse these densities can also allow effective penetration of even the most densely vegetated areas, revealing otherwise hidden archaeological features beneath the tree canopy (Shaw, Corns 2011).
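As a rough check on the density figure quoted above, the nominal spacing of an idealised uniform point distribution is the inverse square root of the density. The snippet below is a minimal illustration of that relationship and is not part of the original text; real ALS point distributions are, of course, irregular.

```python
# Minimal illustration: nominal point spacing implied by a LiDAR point density.
# Assumes an idealised uniform distribution; real swaths are irregular.
import math

def nominal_spacing_cm(points_per_m2: float) -> float:
    """Approximate spacing (cm) between points for a given density (pts/m^2)."""
    return 100.0 / math.sqrt(points_per_m2)

for density in (1, 4, 16, 60):
    print(f"{density:>3} pts/m^2 -> ~{nominal_spacing_cm(density):.0f} cm spacing")
# 60 pts/m^2 works out to roughly 13 cm, i.e. a grid resolution on the order of 10 cm.
```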
It is worth mentioning here that interest in the LiDAR technique is not limited to its potential for penetrating woodland areas but also for its contribution to the study of open landscapes dominated by pastureland or arable cultivation. In these areas, as under woodland cover, the
availability of extremely precise digital models of the ground surface makes it possible to highlight every tiny variation in level by using computer simulations to change the direction or angle of the light and/or to exaggerate the value of the z coordinate. If properly applied, the LiDAR technique could prove revolutionary in its impact on the process of archaeological mapping, making it possible to record the previously hidden archaeological resource within woodland areas and apparently-levelled landscapes. In favourable circumstances it might even be possible to uncover whole ‘fossil’ landscapes. This could have a dramatic impact on opportunities for archaeological and landscape conservation and management, as well as on scientific investigation of settlement dynamics at various times in the past.
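To make the notion of such computer simulations concrete, the sketch below implements a standard hillshade over a gridded DTM, with adjustable illumination azimuth and altitude and a vertical (z) exaggeration factor. It is an illustrative example under simplifying assumptions: the synthetic elevation array, cell size and lighting values are hypothetical and not drawn from this chapter.

```python
# Illustrative hillshade of a gridded DTM with adjustable illumination and
# vertical (z) exaggeration, following the standard slope/aspect formulation.
import numpy as np

def hillshade(dtm: np.ndarray, cell_size: float, azimuth_deg: float = 315.0,
              altitude_deg: float = 45.0, z_factor: float = 1.0) -> np.ndarray:
    """Return shaded-relief values in [0, 1] for a 2D elevation array."""
    az = np.radians(360.0 - azimuth_deg + 90.0)   # light direction, math convention
    zen = np.radians(90.0 - altitude_deg)         # zenith angle of the light source
    dz_dy, dz_dx = np.gradient(dtm * z_factor, cell_size)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shade = (np.cos(zen) * np.cos(slope)
             + np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)

if __name__ == "__main__":
    # Hypothetical 0.1 m-resolution DTM containing a faint 5 cm linear earthwork.
    y, x = np.mgrid[0:200, 0:200]
    dtm = 0.05 * np.exp(-((x - y) ** 2) / 50.0)
    # Low raking light from two directions, with 5x vertical exaggeration.
    for az in (315.0, 45.0):
        hs = hillshade(dtm, cell_size=0.1, azimuth_deg=az,
                       altitude_deg=15.0, z_factor=5.0)
        print(f"azimuth {az:>5.0f} deg: shade range {hs.min():.2f}-{hs.max():.2f}")
```

Re-computing the shading from several azimuths, as in the example, mimics the practice of re-lighting the model from different directions so that features aligned in different orientations each cast visible shadows.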
By this point we have hopefully overcome the first step in the process of understanding. It remains true, however, that the role of the archaeologist is fundamental in integrating observations made in the field with the three-dimensional model so as to advance the 'reading' and interpretation of the landscape, monument or excavation in its 3D (or better still 4D) complexity through new methods of working and the development of new hardware and software. Now, however, we must tackle another major problem: the software suites for the management of 3D and 4D data. In addressing this problem it may be useful to take a step back and to note the central role that has been played since the early 1990s by the availability of relatively low-cost and user-friendly GIS systems. Although essentially 2D in character these have permitted innovation in almost every sector of archaeological work, from landscape studies to excavation recording and consideration of the great intellectual themes of archaeology from the local to the global context (economy, production, exchange systems etc). GIS has above all provided a common language which has facilitated interaction and integration between all kinds of documents, observations or phenomena so long as they can be represented in one way or another in terms of geographical coordinates. GIS has provided a spur for innovative methods of measurement and documentation and hence for the development of methodologies and technologies which have led to the creation of a shared work-space within which data can be managed, visualised and integrated with other sources of information so as to maximise the outcome through analysis and communication within a single digital environment. GIS has produced a common mechanism in the ambit of scientific innovation, acting as a trigger in a process that has forced archaeologists to re-think and then to extend the methodological and technical underpinning of archaeological practice. In more practical terms it has made a much-needed breach in a sector of study that has tended to be conservative and traditional. Within a decade or so archaeologists have realised that, in addition to permitting the integration of almost any form of information gathered in the past by traditional means, it was necessary to adapt a whole series of present-day and future procedures through the introduction of solutions (such as GPS survey, remote sensing, mobile technology etc) that are methodologically and technologically appropriate for meeting real archaeological needs.
All of this has at the moment not come to pass for those who intend to operate in 3D (or 4D). This represents an absolutely central problem which lies at the root of many present-day limits on the diffusion of 3D working. In particular this very serious lacuna expresses itself most of all in the absence of 3D analytical instruments and therefore in the difficulty of extracting original archaeological information – not otherwise identifiable in 2D – limiting the contribution of 3D to an increase in the quality of the documentation and to subsequent elaborations focused on communication. This is a significant outcome, but altogether secondary in relation to the primary purpose for which archaeological data ought to be exploited: the production of new archaeological information. As has been said many times by Maurizio Forte, of Duke University in the USA, a fundamental need lies in the availability for archaeologists of an "OPEN-SPACE" into which it is possible to insert data acquired at various times in the past, stratifying the information and at every stage measuring and comparing the original observations, data or stratigraphical relationships but also – wherever possible – modifying and updating those data in the light of new evidence. GIS provides an open working environment which allows the management, analysis, data enhancement, processing, visualisation and sharing of hypothetical interpretations. The first change of practice that this thought ought to provoke is a move towards the acquisition and management of 3D data from the very outset of any research project rather than (as so often happens) at the end of the cognitive process: we should no longer find ourselves in the position of hearing people say "Now that we have studied everything let's make a nice reconstruction" (Forte 2008). This should lead to a reversal of the process in which the 3D model no longer constitutes the end but rather the means of achieving better understanding through analysis and simulation. It should also promote the better sharing and communication of archaeological information, ideas and concepts, in the first instance amongst our professional colleagues and then with the public at large.
Paradoxically, 3D technology tends to be more appreciated and sought after by the general public than by archaeologists. The communication and entertainment sectors are amongst the principal fields of application of 3D models of landscapes, buildings and objects. A further leap in this direction can be attributed to the development and wider availability of mobile technology in the form of smartphones and tablets etc. Techniques such as augmented reality offer wide scope for innovative development and practical applications. There remains one final aspect to be tackled, however: the reluctance of archaeologists themselves to start thinking in 3D. During the Fourth International Congress on Remote Sensing in Archaeology at Beijing in 2012, in the course of a discussion on problems associated with the spread of new technologies and methods of working within archaeology,
Professor Armin Gruen told a revealing story. After weeks of difficult fieldwork and months of processing in the laboratory he was about to present to the archaeologists a very precise and detailed digital 3D model of their site. At the presentation he showed the archaeologists a whole series of opportunities for measurement, navigation and spatial analysis within the model (sections, slopes, surface erosion, varying display methods, perspective, etc). At the end of the demonstration the first comment by the archaeologists was: "Beautiful, extraordinary, but ... can it provide us with a plan?" Clearly there is a hint of exasperation, and a degree of paradox, in this story, but perhaps the most significant aspect lies in a statement made earlier in this contribution: we have been educated to reduce reality from three dimensions to two, and thus we are in the habit of thinking in 2D. Intuitively – or in some cases 'theoretically' – we are well aware of the informative value of the third dimension, but we nevertheless find it difficult to imagine, visualise or represent objects, contexts, landscapes and phenomena from the past in three dimensions. Faced with this difficulty it is hard to imagine how complex it might be to achieve clear 3D thought processes that will permit the identification of archaeological problems and the framing of relevant questions in the search for solutions. This kind of 'short circuit' might perhaps be circumvented through a point mentioned earlier on – the need to apply, from the beginning to the end of the archaeological process, all of the technological instruments and procedures that we can call upon to help us to manage and benefit from the availability of 3D data. In a sense we need a new "magic box", an instrument which – like GIS in its own time – can act as a bridgehead for the implementation of 3D thinking in its totality, advancing from a two-dimensional to a three-dimensional vision both of the initial archaeological evidence and of the questions to which that evidence gives rise.
CONCLUSION

Finally, a brief personal reflection on those who undertake research and the ways in which research can be pursued. The writer has long been a keen supporter of the view that technological and methodological research in archaeology, and in heritage management generally, should be initiated or at least guided by the desire to answer essentially historical questions. This implies a central role for the archaeologist, one that requires him to acquire technical skills so that at the same time he can work closely and productively with engineers, physicists and other specialists. Every other approach carries with it the risk of a degenerative drift in research. However, the experience of the last few years of experimentation in 3D technology has led him to take a more flexible line, without in any sense denying the central role of the archaeologist and of inherently archaeological questions. That said, he now sees possible value in testing innovative technologies without necessarily starting from a specific archaeological question, but rather from the desire to see whether such techniques can offer advantages to archaeological research in general. This could involve, in the most extreme cases – at least in the initial stages – a central role for the technologist, though he in turn would have to acquire a basic competence in archaeology so as to work in close cooperation with archaeologists. This reversal of perspective, together with the collaboration with engineers, architects, geophysicists and information technologists etc, has made the writer reflect on the necessity of working together, of risking a degree of 'technological pollution' while at the same time conserving a properly scientific approach to innovation. Otherwise one might run the risk of another short circuit ... as Henry Ford once said: "If I had asked my customers what they wanted they would have said a faster horse."

References

BATESON, G. 1979. Mind and Nature: A Necessary Unity (Advances in Systems Theory, Complexity, and the Human Sciences). Hampton Press.
BERTOCCI, S.; BINI, M. 2012. Manuale di rilievo architettonico e urbano, Torino.
BEWLEY, R.H. 2005. Aerial Archaeology. The first century. In: Bourgeois, J.; Meganck, M. (Eds.), Aerial Photography and Archaeology 2003. A Century of Information, Academia Press, Ghent, pp. 15-30.
BEWLEY, R.H.; CRUTCHLEY, S.; SHELL, C. 2005. New light on an ancient landscape: LiDAR survey in the Stonehenge World Heritage Site. Antiquity, 79 (305), pp. 636-647.
CASTAGNOLI, F.; SCHMIEDT, G. 1957. L'antica città di Norba. L'Universo, XXXVII, pp. 125-148.
COWLEY, D.C.; OPITZ, R.O. 2012. Interpreting Archaeological Topography. 3D Data, Visualisation and Observation, Oxford.
DEVEREUX, B.J.; AMABLE, G.S.; CROW, P.; CLIFF, A.D. 2005. The potential of airborne lidar for detection of archaeological features under woodland canopies. Antiquity, 79 (305), pp. 648-660.
DONEUS, M.; BRIESE, C. 2011. Airborne Laser Scanning in forested areas – potential and limitations of an archaeological prospection technique. In: Cowley, D. (Ed.), Remote Sensing for Archaeological Heritage Management, EAC Occasional Paper 5, Reykjavík, Iceland, 25-27 March 2010, Brussels, pp. 59-76.
FORTE, M. 2008. La Villa di Livia, un percorso di ricerca di archeologia virtuale, L'Erma di Bretschneider, Roma, pp. 54-68.
RENFREW, C.; BAHN, P. 2012. Archaeology: Theories, Methods and Practice, London.
RISBØL, O.; GJERTSEN, A.K.; SKARE, K. 2006. Airborne laser scanning of cultural remains in forest: some preliminary results from a Norwegian project. In: Campana, S.; Forte, M. (Eds.), From Space to Place. 2nd International Conference on Remote Sensing in Archaeology, CNR – National Research Council, Roma, 4-7 December 2006, BAR, Oxford, pp. 107-112.
SHAW, R.; CORNS, A. 2011. High resolution LiDAR specifically for archaeology: are we fully exploiting this valuable resource? In: Cowley, D.C. (Ed.), Remote Sensing for Archaeological Heritage Management, EAC Occasional Paper 5, Reykjavík, Iceland, 25-27 March 2010, Brussels, pp. 77-86.
SITTLER, B.; SCHELLBERG, S. 2006. The potential of LIDAR in assessing elements of cultural heritage hidden under forest or overgrown by vegetation: possibilities and limits in detecting microrelief structures for archaeological surveys. In: Campana, S.; Forte, M. (Eds.), From Space to Place. 2nd International Conference on Remote Sensing in Archaeology, CNR – National Research Council, Roma, 4-7 December 2006, BAR, Oxford, pp. 117-122.
WEISHAMPEL, J.F.; CHASE, A.F.; CHASE, D.Z.; DRAKE, J.B.; SHRESTHA, R.L.; SLATTON, K.C.; AWE, J.J.; HIGHTOWER, J.; ANGELO, J. 2010. Remote sensing of ancient Maya land use features at Caracol, Belize related to tropical rainforest structure. In: Campana, S.; Forte, M.; Liuzza, C. (Eds.), Space, Time, Place: 3rd International Conference on Remote Sensing in Archaeology, Tiruchirappalli, Tamil Nadu, India, 17-21 August 2009, BAR, Oxford, pp. 45-52.
1.2 GEOMATICS AND CULTURAL HERITAGE
F. REMONDINO
Geomatics, according to the Oxford Dictionary, is "the mathematics of the earth": the science of collecting (with appropriate instruments), processing (with appropriate techniques), analysing and interpreting data related to the earth's surface. Strictly speaking, Geomatics refers to the data and the techniques, although the term Geoinformatics is also often used.
Thus Geomatics for Cultural Heritage employs techniques (photogrammetry, laser scanning, etc.) and practices for scene recording and digital modelling, possibly in three dimensions (3D), for the subsequent analysis and interpretation of such spatially related data.
Cultural heritage can be seen as tangible (physical) or intangible objects inherited from past generations. Physical heritage includes buildings and historic places, monuments, artefacts, etc., that are considered worthy of preservation for the future. These include objects significant to the archaeology, architecture, science or technology of a specific culture.

Traditional recording methods relied mainly on hand recording, e.g. by means of tape measurement, and were therefore subjective, time consuming and applicable only to small areas. Geomatics 3D recording methods, on the other hand, are modern, digital, objective, rapid and cost effective. Geomatics techniques rely on harnessing the electromagnetic spectrum and are generally classified into active (range-based) and passive (image-based) techniques.
Figure 1. Geomatics and its related techniques and applications
Figure 2. Existing Geomatics data and sensors according to the working scale and object/scene to be surveyed
Figure 3. Geomatics techniques for 3D data acquisition, shown according to the object/scene dimensions and complexity of the reconstructed digital model
1.3 3D MODELLING AND SHAPE ANALYSIS IN ARCHAEOLOGY
Juan A. BARCELÓ

Archaeology seems to be a quintessentially "visual" discipline, because visual perception makes us aware of such fundamental properties of objects as their size, orientation, shape, color, texture, spatial position and distance, all at once. Visual cues often tell us about more than just optical qualities. We "see" what we suppose are tools, rubbish generated by some past society, the remains of their houses… Are we sure that we are right? Why does this object look like a container? Why does this other seem an arrow point? Why are those stones interpreted as the remains of a house? In which way can an "activity area" within an ancient hunter-gatherer settlement be recognized as such?

Most of these questions seem out of place when using a range-scanner or a photogrammetric camera. Current uses of technology in archaeology seem addressed to simply telling us what happens now at the archaeological site. They do not tell us what happened in the past, nor why or how. What is being "seen" in the present is the consequence of human action in the past, interacting with natural processes through time. Human action exists now and existed in the past by its capacity to produce and reproduce labor, goods, capital, information, and social relationships. In this situation, the obvious purpose of what we "perceive" in the present is to be used as evidence of past actions. It is something to be explained, and not something that explains social action in the past. In that sense, production, use and distribution are the social processes which in some way have produced (cause) archaeologically observed properties (size, shape, composition, texture, place, time) (effect). Archaeological artifacts have specific physical properties because they were produced so that they had those characteristics and not others. And they were produced in that way, at least partially, because those things were intended for some given uses and not others: they were tools, or consumed waste material, or buildings, or containers, or fuel, etc. If objects appear in some locations and not in any other, it is because social actions were performed in those places and at those moments. Therefore, archaeological items have different shapes, different sizes and compositions. They also have different textures, and appear at different places and in different moments. That is to say, the changes and modifications in form, size, texture, composition and location that nature experiences as the result of human action (work) are determined somehow by the actions (production, use, distribution) that provoked their existence.

It is my view that the real value of archaeological data should come from the ability to extract useful information from them. This is only possible when all relevant information has been captured and coded. However, archaeologists usually tend to consider only very basic physical properties, like size and a subjective approximation to shape. Sometimes texture, that is, the visual appearance of a surface, is also taken into account, or the mineral/chemical composition. The problem is that in most cases such properties are not rigorously measured and coded. They are applied as subjective adjectives, expressed as verbal descriptions that prevent other people from using the description without having seen the object. If the physical description of such visual properties is vague, then the possibility of discovering the function the artifact had in the past is compromised, and we can hardly infer the object's physical structure. The insufficiency of, and lack of clear consensus on, the traditional methods of form description – mostly visual, descriptive, ambiguous, subjective and qualitative – have invariably led to ambiguous and subjective interpretations of function. It is thus strongly advisable to systematize, formalize and standardize methods and procedures that are more objective, precise, mathematical and quantitative, and wherever possible automated.
Let us consider the idea of "SHAPE". Shape is the structure of a localised field constructed "around" an object (Koenderink 1990, Leymarie 2011). In other words, the shape of an object located in some space can be defined as the geometrical description of the part of that space occupied by the object, as determined by its external boundary – abstracting from location and orientation in space, size, and other properties such as colour, content, and material composition (Rovetto 2011).

We call such boundaries of separation between two phases surfaces. A phase is a homogeneous mass of substance, solid, liquid or gas, possessing a well-defined boundary. When we have two phases in mutual contact we have an interfacial boundary. This is called an interface. The surface of a solid, kept in the atmosphere, is in fact an air-solid interface, although it is often simply referred to as a solid surface. We can also conceive of a solid-solid interface that occurs when two solids or solid particles are brought into mutual contact. By combining surfaces and discovering discontinuities between boundaries we recognize shapes in objects, so to speak, and this is how we linguistically understand shape as a property. These physical or organic shapes do not reflect the exact specifications of geometrical descriptions of the part of space occupied by each object. They approximate geometric shapes. We may treat the geometrical description of the part of space occupied by each object as if it existed independently, but common sense indicates that it is an abstraction with no exact mind-external physical manifestation, and it would be a mistake to betray that intuition. That which we consider to be shape is intimately dependent on that which has the shape. In the mind-external world, shapes, it seems, are properties of things. They [things] must have a shape, i.e. be delineated by a shape. We say that a physical object exhibits a shape. Thus, shapes must always be shapes of something in the mind-external world. Outside idealized geometric space, it does not make sense to posit the existence of an independently existing shape, a shape with no bearer. The shape cannot exist, except as an idea, without an entity that bears, exhibits, or has that shape (Rovetto 2011). Shape so delineated is a property dimension, which is quite consistent with the fact that some shapes in turn have (second-order) properties such as 'being symmetric', 'being regular', 'being polyhedral', and have mathematical properties such as 'eccentricity' (Johansson 2008). If a shape is defined as having a particular number of sides (as with polygons), a particular curvature (as with curved shapes, such as the circle and the ellipse), specific relations between sides, or otherwise, then it should be apparent that we are describing properties of properties of things. We might be inclined to say that it is the shape that has a certain number of angles and sides, rather than the object bearing the shape in question, but this is not entirely accurate (Rovetto 2011). The distinction between geometric and physical space, between ideas and ideal or cognitive constructions and material mind-external particulars, is significant.

Why is 3D so important when measuring and coding "shape" information? There are still archaeologists who think that it is just a question of realism in the representation of enhanced aesthetic qualities. As a matter of fact, visual impressions may seem adequately represented on the two-dimensional projective plane, but experience proves the value of the third dimension: the possibility of understanding movement and dynamics, that is, "use" and "function". The three-dimensionality of space is a physical fact like any other. We live in a space with three different degrees of freedom for movement. We can go to the left or to the right. We can go forward or backward. We can go up or we can go down. We are allowed no more options. However, a rigid body in three-dimensional space has six degrees of freedom: three linear coordinates defining the position of its center of mass – or any other point – and another three angles defining relative rotation around the body's center of mass. Rotations add three more closed dimensions (dimensions of orientation). We can therefore imagine a 6-D space with six intersecting lines, all mutually orthogonal: the three obvious lines resulting from the possibilities for describing movement in absolute terms (without considering the object itself), and three additional orientations resulting from a relative description of movement (considering not only the movement of the object with reference to a fixed point in the landscape, but also the movements of the object with respect to itself). Each of these coordinates represents the set of all possible orientations about some axis. Any movement we make must be some combination of these degrees of freedom. Any point in our space can be reached by combining the three possible types of motion. Up/down motions are hard for humans. We are tied to the surface of the Earth by gravity. Hence it is not hard for us to walk along the surface anywhere not obstructed by objects, but we find it difficult to soar upwards and then downwards: many archaeologists prefer paper-and-pencil drawings, or digital pictures, to summarise what they can see. But such flat representations do not allow the study of movement.

Therefore, an insistence on working with three-dimensional visual properties is not merely a hobby of technically oriented professionals. Fortunately for us, technology has produced the right tools for such a task: range-scanning and photogrammetric devices. This book discusses such technology. These devices can be considered as "instrumental observers" able to generate as output a detailed set of three-dimensional Cartesian coordinates, in a common coordinate system, representing the surfaces of the scanned object. An object's form is then expressed in terms of the resulting point cloud.
The most obvious use of range-scanning and photogrammetry is then to calculate the observation's surface model. It can be defined in terms of lines and curves defining the edges of the observed object. Each line or curve element is separately and independently constructed based on the original 3D point co-ordinates. The resulting polygon mesh is a set of connected, polygonally bounded planar surfaces. It is represented as a collection of edges, vertices and polygons connected such that each edge is shared by at most two polygons. The resulting 3D geometric models are no doubt impressive and contain all the information we would need to calculate the particular relationship between form and function. However, we should consider surface models as an intermediate step in the process of quantifying shape. Each 3D model has to be identified with a shape descriptor, providing a compact overall description of the shape. What we need is an approach towards the statistical analysis of shapes and forms. In other words, instead of the particular high-resolution details of a single pot, knife or house, it is the shape-and-form variability within a well-specified population of archaeological observables that interests us. This approach has some tradition in 2D shape analysis. Russ (2002) gives a list of some such measures:

1) Elongation. Perhaps the simplest shape factor to understand is the Aspect Ratio, i.e. length divided by breadth, which measures an aspect of the elongation of an object:

   Aspect Ratio = MaximumDiameter / MinimumDiameter

2) Roundness. It measures the degree of departure of an object's two-dimensional binary configuration from a circle. This is based not on a visual image or an estimate of shape; rather, it is based on the mathematical fact that, in a circular object with a fixed area, an increase in the length of the object causes the shape to depart from a circle:

   Roundness = 4 × Area / (π × MaximumDiameter²)

   In the equation, MaximumDiameter is the longest dimension of the object and Area is a measure of the surface of the object. The roundness calculation is constructed so that the value for a circle equals 1.0, while departures from a circle result in values less than 1.0 in direct proportion to the degree of deformation. For instance, a roundness value of 0.492 corresponds approximately to an isosceles triangle.

3) Shape Factor (or Formfactor). It is similar to Roundness, but emphasizes the configuration of the perimeter rather than the length relative to object area. It is based on the mathematical fact that a circle (shape factor also equal to 1.0), compared to all other two-dimensional shapes (regular or irregular), has the smallest perimeter relative to its area. Since every object has a perimeter length and an area, this mathematical relationship can be used to quantify the degree to which an object's perimeter departs from that of a smooth circle, resulting in a value less than 1.0. Squares are around 0.78. A thin thread-like object would have the lowest shape factor, approaching 0:

   Formfactor = 4π × Area / p²

   In the equation, p is the perimeter of the contour and Area is a measure of the surface of the object. Notice that the formfactor varies with surface irregularities, but not with overall elongation.

4) Quadrature. The degree of quadrature of a solid, where 1 is a square and approximately 0.800 an isosceles triangle. This shape is expressed by:

   Quadrature = 4 × √Area / p

   In the equation, p is the perimeter of the contour and Area is a measure of the surface of the object.
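These descriptors are straightforward to obtain once an outline has been digitised as a closed polygon of vertex coordinates. The following is a minimal sketch of how the measures quoted above from Russ (2002) might be computed, assuming a simple (non self-intersecting) outline and approximating the maximum and minimum diameters by the extents along the outline's principal axes; the rectangular example outline is purely hypothetical.

```python
import numpy as np

def shape_factors(x, y):
    """Approximate 2D shape descriptors (after Russ 2002) for a closed
    outline given as arrays of vertex coordinates.  Length and breadth
    are taken along the outline's principal axes, a common approximation
    of the maximum and minimum diameters."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # perimeter: sum of edge lengths, closing the polygon back to the start
    dx = np.diff(np.r_[x, x[0]])
    dy = np.diff(np.r_[y, y[0]])
    perimeter = np.hypot(dx, dy).sum()
    # area by the shoelace formula
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # length and breadth from the extents along the principal axes
    pts = np.column_stack([x, y]) - [x.mean(), y.mean()]
    _, _, axes = np.linalg.svd(pts, full_matrices=False)
    proj = pts @ axes.T
    length, breadth = np.ptp(proj[:, 0]), np.ptp(proj[:, 1])
    return {
        "aspect_ratio": length / breadth,                 # elongation
        "roundness":    4 * area / (np.pi * length**2),   # 1.0 for a circle
        "formfactor":   4 * np.pi * area / perimeter**2,  # 1.0 for a circle
        "quadrature":   4 * np.sqrt(area) / perimeter,    # 1.0 for a square
    }

# hypothetical outline: a 2:1 rectangle given as a polygon
print(shape_factors([0, 2, 2, 0], [0, 0, 1, 1]))
# aspect_ratio 2.0, formfactor ~0.70, quadrature ~0.94
```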
Shape indexes allow the integration of all parameters related to the 2D geometry of the objects' interfacial boundaries into a single measurement, in such a way that a statistical comparison of such parameters allows a complete description of visual variability in a population of material evidences (Barceló 2010).

Unfortunately, many of the descriptors that have been proposed as 2D shape measures cannot be directly generalized to 3D (Lian, 2010), and we have already argued the relevance of a proper 3D analysis. Up to now, just a few global form descriptors with direct meanings for 3D models have been proposed, each of which describes 3D objects in a quite different manner, thereby providing new and independent information. Compactness indices, for example, may describe:

1) The extent to which a 3D mesh is spherical (Wadell, 1935; Asahina, 2011). The sphericity, Ψ, of an observed entity (as measured using the range-scanning device) is the ratio of the surface area of a sphere with the same volume as the given entity to the surface area of the entity:

   Ψ = π^(1/3) × (6Vp)^(2/3) / Ap

   where Vp is the volume of the object or archaeological building structure and Ap is the surface area of the object. The sphericity of a sphere is 1 and, by the isoperimetric inequality, any particle which is not a sphere will have sphericity less than 1.

2) The extent to which a 3D mesh is a cube. The cubeness Cd(S) of an observed entity (as measured using the range-scanning device) is the ratio of the surface area of a cube with the same volume as the given entity to the surface area of the entity, where A(S) is the area of the enclosing surface. If the shape is subdivided into facets or voxels, then n(S) represents the number of different faces which form the shape. Cd(S) takes the highest possible value (which is 1) if and only if the measured shape is a cube.
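As a minimal sketch of how such a compactness index can be obtained from scanner output, the code below computes the Wadell sphericity of a watertight, consistently oriented triangle mesh: the surface area is summed over the facets and the enclosed volume is obtained via the divergence theorem. The cube used as test data is hypothetical; a cubeness index could be built analogously by comparing the mesh area with that of a cube of equal volume.

```python
import numpy as np

def sphericity(vertices, faces):
    """Wadell sphericity of a closed (watertight) triangle mesh:
    psi = pi**(1/3) * (6*V)**(2/3) / A, where V is the enclosed volume
    and A the surface area.  Equals 1 for a sphere, < 1 otherwise."""
    v = np.asarray(vertices, float)
    f = np.asarray(faces, int)
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    cross = np.cross(b - a, c - a)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    # signed volume via the divergence theorem (faces are assumed to be
    # consistently oriented, e.g. with outward normals)
    volume = abs(np.einsum('ij,ij->i', a, cross).sum()) / 6.0
    return np.pi**(1/3) * (6 * volume)**(2/3) / area

# hypothetical test mesh: a unit cube triangulated into 12 faces
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
faces = [(0, 1, 3), (0, 3, 2), (4, 6, 7), (4, 7, 5), (0, 4, 5), (0, 5, 1),
         (2, 3, 7), (2, 7, 6), (0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3)]
print(round(sphericity(verts, faces), 3))   # ~0.806 for a cube
```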
Similar indexes can be calculated for cylinders or ellipsoids, or even in the case of rectilinear shapes. The fundamental role of such indexes is that they correspond to the different ways archaeological observables are judged to be similar or different. Accordingly, the form of an archaeological artifact can be defined as an n-dimensional vector space, whose axes represent global shape-and-form parameters or further vector spaces denoting different domains of the same idea of "shape".

But we need much more than "shape". The surfaces of archaeological objects, artifacts, and materials are not uniform but contain many variations, some of them of a visual or tactile nature. Archaeological materials have variations in the local properties of their surfaces, like albedo and color variations, density, coarseness, roughness, regularity, hardness, brightness, bumpiness, specularity, reflectivity, transparency, etc. Texture is the name we give to the perception of these variations. What I am doing here is introducing a synonym for "perceptual variability" or "surface discontinuity". It is a kind of perceptual information complementing shape information.

Color, brightness and hue are the most obviously "visual" appearances of any entity. Too often they have been described subjectively using words: green, red, yellow… Now, digital photography, spectrometry and specialized software allow a formal quantification of color information and its relative properties. More complex is the case of surface micro-topography. We already have a vocabulary of micro-topographic values: coarseness, roughness, smoothness, polish, burnish, bumpiness, waviness, which are the result of micro-topographic irregularities. Such variation is of fundamental importance for discovering the past function of ancient objects, because the surface of solids plays a significant role in interfacial phenomena, and its actual state is the result of the physical forces that have acted on that surface.

To represent micro-variation, all we have to do is indicate the relative positions and elevations of the surface's points with differential interfacial contribution. The resolution of modern range scanners is sufficient to measure the depth of tiny details and complex micro-structures, and to measure height at well-localized points within the surface, allowing us to measure their spatial variability. A modern laser scanner captures surface data points less than 50 microns (0.05 mm) apart, producing high-density triangular meshes with an average resolution of over 1000 points per cm². As in the case of form, simple spatially invariant measurements of heights and depths at the micro-level of a single surface are not enough. Modern research in surface analysis, notably in geometry and materials science, has proposed dozens of suitable parameters of texture, like average roughness, texture aspect, texture direction, surface material volume, autocorrelation, average peak-to-valley, etc. (Table 1).

Although archaeology has been traditionally considered a quintessentially "visual" discipline (Shelley 1996), we also need non-visual features to characterize ancient objects and materials (i.e., compositional data based on mass spectrometry, chronological data based on radioactive decay measures, etc.). Once we include non-visual data we have the initial elements for beginning a true explanatory analysis of the recorded archaeological elements. Why are archaeological artifacts the way they are? A possible answer to this question would be: "because objects have a distinctive appearance for the sake of their proper functioning".
Table 1. Microtopography: 3D surface texture
References: [ASME-B46-1-2009], [VARADI_2004]; see [MASAD_2007], [WHITEHOUSE_2002]
Columns: 3D areal parameter – definition (with the 2D profile approximation, where one exists) – parameter family
Sa: average roughness
The arithmetic average deviation of the surface (the absolute values of the measured height deviations from the mean surface taken within the evaluation area).
Amplitude
Height
Sq: root mean square (rms) roughness
The root mean square average deviation of the surface (the measured height deviations from the mean surface taken within the evaluation area).
Amplitude
Height
Ssk: skewness
A measure of the asymmetry of surface heights about the mean surface.
Amplitude
Shape
Sku: kurtosis
A measure of the peakedness of the surface heights about the mean surface.
Amplitude
Shape
Amplitude
Height
Sz: ten point height of the surface(8 nearest neighbor) Sds: density of summits Str: texture aspect ratio Sal: fastest decay autocorrelation length
2D approximate:Height function, Z(x, y); and Maximum area
peak height, Sp. 2D approximate:C number of peaks.
Amplitude
Is a measure of the spatial isotropy or directionality of the surface texture.
Only to be interpreted in 3D.
Std: texture direction. It is determined by the APSD (Angular Power Spectral Density Function) and is a measure of the angular direction of the dominant lay of the surface.
The root mean square sum of the x and y derivatives of the measured topography over the evaluation area.
SΔq(θ): area root mean square directional slope, (Sdq(θ)). The root mean square average of the derivative of the measured topography along a selected direction, θ, calculated over the sampling area.
Area Spacing
Spatial
Other parameters
Spatial
–
Spatial
Other parameters
Hybrid
Other parameters
Hybrid
Other parameters –
Ssc: mean summit curvature
Evaluated for each summit and then averaged over the area. Based on a summit.
Hybrid
Sdr: developed surface area ratio
Developed Interfacial Area Ratio. 2D approximate:Lr.
Hybrid Functional – Index family (*1)
Sbi: surface bearing index
Sci: core fluid retention index. Geometrically speaking, Sci represents the value of empty volume pertaining to a sampling surface unit of the core zone, as referred to Sq. 2D approximates: Rk parameters.
It is a parameter similar to Sci. It represents the value of empty volume pertaining to a sampling surface unit of the valley zone, as referred to Sq.
Functional – Index family (*1) Functional – Index family (*1) Functional –
Sm: surface material volume
Volume from top to 10% bearing area
Volume family (*1)
Sc: core void volume
Volume enclosed 10%-80% bearing area
Functional – Volume family (*1)
Sv: valley void volume
Volume from 80% to 100% bearing area
Functional – Volume family (*1)
The square of the amplitude of the Fourier transform of the measured topography. This 3D function is used to identify the nature of periodic features of the measured topography. 2D: Single profiles through the function can be used to evaluate lay characteristics.
??
Area power spectral density function,APSD
–
Other Parameters
Area auto covariance function, AACV. This 3D function is used to determine the lateral scale of the dominant surface features present on the measured topography. 2D: the auto covariance function is a measure of similarity between two identical but laterally shifted profiles. Single profiles through the function can be used to evaluate lay characteristics.
Other Parameters
Other Parameters
Area autocorrelation function, AACF; area waviness height, SWt
??
The area peak-to-valley height of the filtered topography from which roughness and part form have been removed.
Surface bearing area ratio: the ratio of the area of intersection of the measured topography with a selected surface parallel to the mean surface to the evaluation area.
Waviness Other Parameters
average peak-to-valley roughness R and others
Is intended to include those parameters that evaluate the profile height by a method that averages the individual peak-to-valley roughness heights, each of which occur within a defined sampling length.
Additional Parameters for Surface Characterization (*2)
average spacing of roughness peaks AR
Is the average distance between peaks measured in the direction of the mean line and within the sampling length.
Additional Parameters for Surface Characterization (*2)
Swedish height of irregularities (profildjup), R or H
Is the distance between two lines parallel and equal in length to the mean line and located such that 5% of the upper line and 90% of the lower line are contained within the material side of the roughness profile.
Additional Parameters for Surface Characterization (*2)
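As a minimal sketch of the amplitude parameters listed in Table 1, the code below computes Sa, Sq, Ssk and Sku for a gridded height map, assuming that the reference surface can be taken simply as the mean height of the evaluation area; the test surface is hypothetical.

```python
import numpy as np

def amplitude_parameters(z):
    """Sa, Sq, Ssk and Sku (cf. Table 1) for a gridded height map z(x, y).
    Heights are referred to the mean surface, here simply the mean height
    of the evaluation area."""
    z = np.asarray(z, float)
    h = z - z.mean()                   # deviations from the mean surface
    sq = np.sqrt(np.mean(h**2))        # root mean square roughness
    return {
        "Sa":  np.mean(np.abs(h)),     # average roughness
        "Sq":  sq,
        "Ssk": np.mean(h**3) / sq**3,  # skewness (asymmetry of heights)
        "Sku": np.mean(h**4) / sq**4,  # kurtosis (peakedness of heights)
    }

# hypothetical micro-topography: a smooth ripple plus random noise
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
surface = 0.02 * np.sin(40 * x) + rng.normal(0, 0.005, x.shape)
print(amplitude_parameters(surface))
```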
This function would be distinguished from other non-functional (or "accidental") uses by the fact that the features that define the solid nature of the object owe their existence to this particular use. In other words, a single pot, axe, weapon, jewel, burial or house found at the archaeological site is assumed to be as it is because it performed some particular action or behaviour in the past. The object was made to do something in a particular way, and the goal it had to fulfil could only be attained when the artifact had certain determinate properties. A function is taken as an activity which can be performed by an object. The object's activity is in fact its operating mode; more generally it can be seen as an object behaviour specification.

I know that I am reducing the very meaning of "functionality" too much, but only such a crude simplification can make my point clearer. I am arguing that archaeological observables should be explained by the particular causal structure in which they are supposed to have participated. An object's use can be defined as the exertion of control over a freely manipulable external object with the specific intention of (1) altering the physical properties of another object, substance, surface or medium via a dynamic mechanical interaction, or (2) mediating the flow of information between the tool user and the environment or other organisms in the environment (St. Amant and Horton 2008; see also Beck 1980, McGrew 1993, Amant 2002, Bicici and Amant 2003). The knowledge of the function of some perceived material element should reflect the causal interactions that someone has or can potentially have with needs, goals and products in the course of using such elements. Functional analysis is then the analysis of the object's disposition to contribute causally to the output capacity of a complex containing system of social actions (Cummins 1975, 2002). Quoting Kitamura et al. (2004), "functional models represent a part of (but not all) the designer's intentions", the so-called design rationale (see also Erden et al., 2008).

It has been suggested that in many cases there is a direct conditioning and even deterministic relationship between how a prehistoric artifact looks (what the range-scanner has acquired) and its past functionality. "Design theory" is often defined as "a means of creating or adapting the forms of physical objects to meet functional needs within the context of known materials, technology, and social and economic conditions". If this approach is right, then for an archaeologist to be capable of ascribing functions to observed archaeological evidence, she would need to combine knowledge about how the designers intended to design the artifact to have the function, knowledge about how the makers determined the physical structure of that artifact on the basis of their technological abilities, and knowledge about how the artifact was determined by its physical structure to perform that function. Design theory principles assume that there are different kinds of constraints operating in the development of solutions for each problem, and that trade-offs between constraints make it unlikely that there will be any single optimal solution to a problem but, rather, a number of more or less equally acceptable solutions that can be conceptualized. Among the most powerful of these constraints are functional requirements, material properties, availability, and production costs. In other words, understanding of function needs to be connected with the understanding of the physics of forces and causation. Changing the direction of forces, torques,
and impulses and devising plans to transmit forces between parts are two main problems that arise in this framework. To solve these, we need to integrate causal and functional knowledge to see, understand, and be able to manipulate past use scenarios (Brand 1997). We should add the rules of physics that govern interactions between objects and the environment to recognize functionality. The functional outcome cannot occur until all of the conditions in the physical environment are present, namely the object(s), its material, kinematics and dynamics. Once these conditions exist, they produce and process the relevant behaviours, followed by the outcome (Barsalou 2005).
Consequently, a basic requisite for inferring the past uses of tools and other artifacts or constructions is the recognition of additional properties which determine the possibilities and limits of mechanical interaction with the real world (Goldenberg and Spatt 2009). Given that solid mechanics is the study of the behaviour of solid material under external actions such as external forces and temperature changes, the expression "mechanical properties" has been used many times to refer to these additional properties, in the sense that the value of such properties is conditioned by the physical features of the solid material involved and also affected by various parameters governing the behaviour of people with artefacts.
Therefore, the shape, texture and non-visual properties of archaeological entities (from artefacts to landscapes) should be regarded as changing not as a result of their input–output relations, but as a consequence of the effect of processes (Kitamura & Mizoguchi, 2004; Erden et al., 2008). Consequently, reasoning about the affordances of physical artifacts depends on the following factors and senses (Bicici and St. Amant 2003):

Form/Texture/Composition: For many tools, form, texture and composition are a decisive factor in their effectiveness.

Planning: Appropriate sequences of actions are key to tool use. The function of a tool usually makes it obvious what kinds of plans it takes part in.

Physics: For reasoning about a tool's interactions with other objects and measuring how it affects other physical artifacts, we need to have a basic understanding of the naive physical rules that govern the objects.

Dynamics: The motion and the dynamic relationships between the parts of tools and between the tools and their targets provide cues for proper usage.

Causality: Causal relationships between the parts of tools and their corresponding effects on other physical objects help us understand how we can use them and why they are efficient.

Work space environment: A tool needs enough work space to be effectively applied.

Design requirements: Using a tool to achieve a known task requires close interaction with the general design goal and requirements of the specific task.

Only the first category is the consequence of using range-scanning and similar technology. This list suggests that reasoning about the functionality of archaeological objects recovered at the archaeological site requires a cross-disciplinary investigation ranging from recognition techniques used in computer vision and robotics to reasoning, representation, and learning methods in artificial intelligence. To review previous work on approaches relevant to tool use and reasoning about functionality, we can divide current approaches into two main categories: systems that interact with objects and environments, and systems that do not.

Physical properties are those whose particular values can be determined without changing the identity of the substance, i.e. the chemical nature of matter. Mechanical properties are those whose value may vary as a result of the physical properties inherent to each material, describing how it will react to physical forces. The main characteristics are ELASTIC, STRENGTH and VIBRATION properties.

ELASTIC PROPERTIES: Materials that behave elastically generally do so when the applied stress is less than a yield value. When the applied stress is removed, all deformation strains are fully recoverable and the material returns to its undeformed state. The elastic modulus, or modulus of elasticity, is the ratio of linear stress to linear strain. It measures the stiffness of a given material and is measured in units of pressure, MPa or N/mm². It can be obtained from the Young's modulus, bulk modulus and shear modulus. The Poisson's ratio is the ratio of lateral strain to axial strain: when a material is compressed in one direction, it usually tends to expand in the other two directions perpendicular to the direction of compression.

STRENGTH PROPERTIES: The material's mechanical strength properties refer to its ability to withstand an applied stress without failure, measured by the extent of the material's elastic range, or elastic and plastic ranges together. Loading, which refers to the force applied to an object, can be by tension, compression, bending, shear or torsion.

VIBRATION PROPERTIES: Speed of sound and internal friction are of most importance in structural materials. Speed of sound is a function of the modulus of elasticity and density. Internal friction is the term used when a solid material is strained and some mechanical energy is dissipated as heat, i.e. damping capacity.
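As a purely hypothetical worked example of the elastic quantities just defined (the test values are invented for illustration only):

```python
# Young's modulus E = stress / strain; Poisson's ratio = -lateral / axial strain
force = 2.0e3                       # N, axial load applied in a hypothetical test
area = 1.0e-4                       # m^2, cross-section of the specimen
length, d_length = 0.050, 2.0e-5    # m, original length and measured elongation
width, d_width = 0.010, -1.2e-6     # m, original width and (negative) change

stress = force / area               # Pa
axial_strain = d_length / length
lateral_strain = d_width / width
E = stress / axial_strain           # ~50 GPa with these values
poisson = -lateral_strain / axial_strain
print(f"E = {E/1e9:.1f} GPa, Poisson's ratio = {poisson:.2f}")
```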
Each property tells us something about the reaction the artefact would have shown had prehistoric people brought it into a certain environment and used it in a certain way. I am struck by the absolute lack of such information, not only in current virtual archaeology projects, but also in cultural heritage databases. Archaeologists insist on documenting ancient artifacts, but such documentation never takes into account the physical and mechanical
properties of ancient materials. Without such information any effort in functional analysis is impossible.
Only direct interaction with real objects made of solid materials may provide new insights into the complex dynamics of certain phenomena, such as event-based motion or kinematics. However, imagine the answer of a museum director when we ask her permission to break a prehistoric object in order to discover the way it was used in the past! Given that prehistoric and ancient objects and tools cannot always be used in the present, nor even "touched", if we are to preserve their integrity, we are limited to the possibility of manipulating a virtual surrogate of the object. That is why we need a solid model; but a solid model is much more than the "surface model" we have used to represent "shape" and "form". An interpolated surface fitted to a point cloud acquired by means of a laser scan is not a solid model, because it does not give information on all the surfaces constituting the object. We need to characterize the object as a closed shape in order to use it as a proper surrogate of the real observable.

The best approach is that of a FINITE ELEMENT MODEL. The basic concept is that a body or structure may be divided into smaller elements of finite dimensions called finite elements. The original body or structure is then considered as an assemblage of these elements connected at a finite number of joints called nodes or nodal points. Nodes are assigned at a certain density throughout the solid depending on the anticipated stress levels of a particular area: regions which will receive large amounts of stress usually have a higher node density than those which experience little or no stress. Points of interest may consist of fracture points of previously tested material, fillets, corners, complex detail, and high-stress areas. Each element in the FE model is a building block of the model and defines how nodes are joined to each other. A mathematical relation between elements characterizes one nodal degree of freedom in relation to the next. This web of relations is what carries the material properties through the object, creating many elements. The properties of the elements are formulated and combined to obtain the properties of the entire body.
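A minimal sketch of the finite-element idea in its simplest form, assuming a one-dimensional bar divided into axial elements: each element contributes a small stiffness block which is assembled node by node into a global system, and solving that system yields the behaviour of the whole body. The material, section and load values are hypothetical.

```python
import numpy as np

def axial_bar_fem(E, A, lengths, load):
    """Minimal 1D finite-element model of a bar split into axial elements.
    Each element contributes k = E*A/L * [[1,-1],[-1,1]] to the global
    stiffness matrix; node 0 is fixed and `load` is applied at the free end."""
    n_elem = len(lengths)
    n_node = n_elem + 1
    K = np.zeros((n_node, n_node))
    for e, L in enumerate(lengths):
        k = E * A / L * np.array([[1.0, -1.0], [-1.0, 1.0]])
        K[e:e+2, e:e+2] += k          # assemble the element into the global matrix
    f = np.zeros(n_node)
    f[-1] = load
    u = np.zeros(n_node)
    # apply the boundary condition (node 0 fixed) and solve K u = f
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
    return u                           # nodal displacements along the bar

# hypothetical flint-like bar: E = 70 GPa, 1 cm^2 section, four 5 cm elements,
# 1 kN axial pull at the tip
u = axial_bar_fem(E=70e9, A=1e-4, lengths=[0.05] * 4, load=1e3)
print(u)   # displacements grow linearly towards the loaded end
```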
What I am suggesting here is to infer prehistoric (ancient) functionality from the knowledge of physics. The idea would be to investigate the interaction among planning and reasoning, geometric representation of the visual data, and qualitative and quantitative representations of the dynamics in the artifact world. Since the time of Galileo Galilei we have known that the "use" of an object can be reduced to the application of forces to a solid, which in response to them moves, deforms or vibrates. Mechanics is the discipline which investigates the way forces can be applied to solids, and the intensity of the consequences. Pioneering work by Johan Kamminga and Brian Cotterell shows that, given experimental knowledge of the properties of materials, many remarkable processes of shaping, holding, pressing, cutting, heating, etc. are now well understood and can be expressed mathematically in equations. Archaeological finite element analysis then implies the investigation of the changes that the finite-element structure undergoes as a result of the simulated behaviours. We can distinguish between:

Structural analysis, which consists of linear and non-linear models. Linear models use simple parameters and assume that the material is not plastically deformed. Non-linear models consist of stressing the material beyond its elastic range; the stresses in the material then vary with the amount of deformation. Fatigue analysis may help archaeologists to predict the past duration of use of an object or building by showing the effects of cyclic loading. Such analysis can show the areas where crack propagation is most likely to occur. Failure due to fatigue may also show the damage tolerance of the material. Vibrational analysis can be implemented to test a material against random vibrations, shock, and impact. Each of these incidences may act on the natural vibrational frequency of the material which, in turn, may cause resonance and subsequent failure. Vibration can be magnified as a result of load-inertia coupling or amplified by periodic forces as a result of resonance. This type of dynamic information is critical for controlling vibration and producing a design that runs smoothly. But it is equally important to study the forced-vibration characteristics of ancient or prehistoric tools and artefacts, where a time-varying load excites a different response in different components. For cases where the load is not deterministic, we should conduct a random vibration analysis, which takes a probabilistic approach to load definition.

Heat transfer analysis, which models the conductivity or thermal fluid dynamics of the material or structure. This may consist of a steady-state or transient transfer. Steady-state transfer refers to constant thermal properties in the material that yield linear heat diffusion.

Motion analysis (kinematics), which simulates the motion of an artefact or an assembly and tries to determine its past (or future) behaviour by incorporating the effects of force and friction. Such analysis allows us to understand how a series of artefacts or tools performed in the past – e.g. to analyse the force needed to activate a specific mechanism, or to exert mechanical forces in order to study phenomena and processes such as wear resistance. It can be of interest in the case of relating the use of a tool with the preserved material evidence of its performance: a lithic tool and the stone stelae the tool contributed to engrave. This kind of analysis needs additional parameters such as centre of gravity, type of contact and positional relationship between components or assemblies, and time-velocity.
CONCLUSIONS
Archaeology should not be reduced to the visualization of artefacts and buildings, but should aim at a complete simulation in which the archaeologist can modify the geometry and other characteristics, redefine parameters, assign new values and settings or any other input data, select another simulation study or run a new simulation test, in order to test the validity of the model itself. The aim is not to prove that any single visualization correctly captures all of the past, but only that the explanations are sufficiently diverse, given available knowledge, that the dynamics of a concrete historical situation should be contained within the proposed explanatory model.

Function-based reasoning can be seen as a constraint satisfaction problem where functional descriptions constrain structure or structure constrains functional possibilities. The mappings available between form and function are actually many-to-many, and recovering an object's function by matching the functionalities of previously recognized objects experiences combinatorial growth, which may mean that we infer not the actual functionality in the past but only its more probable function(s). Would the object have behaved as expected? As we have been discussing, this depends on several interrelated issues, for these determine the possible outcomes: its geometry, i.e. form; its material, physical, structural and mechanical properties; the workspace, ... but also the physics involved in its manipulation. The artefact's functioning in the past depended both on its present form or structure and on the mode and conditions of its past use. Once we know the form, the physical, structural and mechanical properties and the effective use of an object, then, by experiment (direct or virtual), the computer can simulate its functional behaviour. Consequently, the most productive way to understand artifact morphology, design, and technological organization is by analyzing each type of material evidence in its own terms, identifying the constraints and design strategies represented by each one, and then combining these strategies to understand entire assemblages. According to such assumptions, if one wants to create a specific tool meant to solve a specific problem, some of the things that people have had to consider in this design process include the size and weight of the tool; its overall form (for holding or hafting); the edge angle where cutting, scraping, or holding was important; the possibility of hafting; the duration of its use; how specialized the working parts needed to be; whether it was at all desirable to combine two or more functions in the same tool; how reliable the tool needed to be; and how easily repaired or resharpened it needed to be (Hayden 1998).
2 LASER/LIDAR
2.1 AIRBORNE LASER SCANNING FOR ARCHAEOLOGICAL PROSPECTION R. BENNET
2.1.1 INTRODUCTION

The adoption of airborne laser scanning (ALS) for archaeological landscape survey over the last decade has been a revolution in prospection that some have likened to the inception of aerial photography a century ago. Commonly referred to as LiDAR (Light Detection and Ranging)1, this survey technique records high-resolution height data that can be modelled in a number of ways to represent the macro- and micro-topography of a landscape. Arguably the most exciting aspect of this technique is the ability to "remove" vegetation to visualise the ground surface beneath a tree canopy (Crow et al., 2007; Crow, 2009), but its value has also been shown in open landscapes and as a key component of multi-sensor survey (Bennett et al., 2011, 2012).

The increased interest in and availability of ALS data has ensured its place in the tool kit of historic environment professionals. Increasingly, archaeologists are not just recipients of image data processed by environmental or hydrological specialists but are taking on the task of specifying and processing ALS data with archaeological prospection in mind from the outset. Despite this shift, the information and issues surrounding the capture, processing and visualisation of ALS data for historic environment assessment remain less well recorded than the applications themselves.

This chapter attempts to provide a balance of technical knowledge with archaeological application and to explain the benefits and disadvantages of ALS as a tool for archaeological landscape assessment. The aim here is not to overwhelm with detail (readers should look to Beraldin et al. (2010) for an excellent technical summary of ALS systems) but to provide historic environment professionals, researchers and students with the questions, and answers, that will aid effective and appropriate use of ALS data. This information is paired with and refers to the ALS case study at the end of the chapter.

1 Lidar is a broader term that can be used to describe a range of space, airborne and ground-based laser range measuring systems, while ALS relates to a particular type of airborne sensor which uses a rotating mirror to scan beneath the aircraft.

2.1.2 TECHNICAL BACKGROUND – HOW ALS DATA ARE COLLECTED AND PROCESSED

Unlike aerial photography or digital spectral imaging, ALS is an active remote sensing technique, meaning that measurements are taken using light emitted from the sensor unit rather than the reflection of natural light, thus enabling night-time collection of data. The principle of laser scanning as a survey tool relies on the ability to calculate the time taken by a beam of light to travel from the sensor to the reflecting surface and back. The sensor scans in a direction perpendicular to the direction of flight, creating a swath of points (Figure 1). Points are collected in a zig-zag or saw-tooth pattern, resulting in an uneven distribution of spot heights along the swath. In addition, as the rotating mirror reaches the edge of each oscillation it slows down, resulting in a cluster of more tightly spaced points at the edges of each flight-line.

Combining this with information about the sensor's real-time location via the Global Positioning System (GPS) and the roll, pitch and yaw of the plane via the Inertial Measurement Unit (IMU), it is possible to precisely calculate the distance of the sensor from the ground (Figure 2). Although airborne laser systems were known to be able to record height to better than 1 m accuracy in the 1970s, it was advancements in GPS and IMU technology throughout the 80s and 90s, and the removal of signal scrambling by the US military in 2000, that enabled ALS sensors to be used for topographic mapping to an accuracy typically in the order of 0.1-0.2 m (Beraldin et al., 2010:20). This resolution means that features of archaeological interest that are represented in the macro-topography of a landscape can be captured in detail by ALS.
27
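As a minimal numerical illustration of the two-way time-of-flight principle described above, the sketch below converts a pulse travel time into a sensor-to-surface range. The travel time used in the example is made up for illustration and is not a value from the chapter.

```python
# Minimal illustration of two-way time-of-flight ranging (not part of the chapter's workflow).
C = 299_792_458.0  # speed of light in m/s

def slant_range(travel_time_s: float) -> float:
    """Range from sensor to reflecting surface: half the two-way travel time times c."""
    return C * travel_time_s / 2.0

print(slant_range(6.67e-6))  # a pulse returning after ~6.67 microseconds -> ~1000 m
```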
Figure 1. Demonstration and example of the zig-zag point distribution when ALS data are collected using an oscillating sensor
Figure 2. Schematic of the key components of the ALS system that enable accurate measurement of height and location

The reflected data from the laser beam can be recorded by the sensor in one of two forms: Discrete Return or Full-Waveform (Figure 3). Discrete Return systems record individually backscattered pulses when the beam encounters an obstacle from which it partially reflects, such as vegetation, as can be seen in Figure 3. A return is only recorded when the reflection exceeds a manufacturer-defined intensity threshold, and there is typically a discrete time interval before any subsequent return can be recorded. Between four and six returns can typically be recorded, forming the basis for identifying and removing vegetation (see below).
Full-waveform sensors record the entire returning beam, allowing the user to specify the “pulse points” that they wish to use to define vegetation after the data are collected. This method has been shown to improve the accuracy of vegetation filtering (Doneus and Briese, 2010). However, this type of sensor is less common than discrete return systems, and its application is hampered by the computational power required to process the data. The initial processing steps for ALS data are most often done by the data supplier but are worth mentioning briefly here, as the techniques used may affect the ALS data supplied and input to subsequent processing steps.
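The return-based classification described above (and shown later in Figure 5) can be illustrated with a small sketch. The array names are hypothetical placeholders for per-point attributes read from an ALS point cloud; no specific file format or library API is assumed.

```python
# Toy illustration of return-based classification: keep only the last (or only) return
# of each pulse as a first, crude set of candidate ground points. Proper DTM filtering
# (see the Filtering section below) is still required afterwards.
import numpy as np

def last_return_mask(return_number: np.ndarray, number_of_returns: np.ndarray) -> np.ndarray:
    """True where a point is the last (or only) return of its pulse."""
    return return_number == number_of_returns

# ground_candidates = points[last_return_mask(rn, nr)]   # 'points', 'rn', 'nr' are placeholders
```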
Figure 3. Schematic illustrating the differences in data recorded by full-waveform and pulse echo ALS sensors
Figure 4. An example of “orange peel” patterning caused by uncorrected point heights at the edges of swaths. The overlay demonstrates uncorrected data which in the red overlap zones appears speckled and uneven compared with the same areas in the corrected (underlying) model

In addition to reconciling the data from the ALS sensor, GPS and IMU to create accurate spot heights, the processing must also correct for effects caused by the angle of the sensor at the edge of adjacent flightlines. The increased distance the laser has to travel at the edge of the flightline causes inaccuracies in the heights recorded. If left uncorrected, these inaccuracies result in an uneven “orange peel” effect of surface dimpling where flightlines overlap (see Crutchley 2010: pp. 26-27 and Figure 4). One of the most effective methods for correcting this is the Least Squares Matching (LSM) algorithm (Lichti and Skaloud, 2010:121), but this type of correction currently requires specialised software.
2.1.3 BASIC AND ADVANCED PROCESSING

ALS creates a dense point cloud of spot heights as the laser beam scans across the landscape, resulting in a zig-zag distribution of points, each with an x, y and z value. The point cloud data can then be interpolated into two categories of terrain model: Digital Surface Models (DSM), which give the surface of the topography (usually derived from the first return per laser pulse), including buildings, trees etc.; and Digital Terrain Models (DTM), which represent the bare earth surface stripped of vegetation, buildings and temporal objects such as cars (Briese, 2010) (Figure 5). For the purposes of disambiguation, both of these products can also be referred to as Digital Elevation Models (DEM) as they represent elevation in its original units of height above sea level. Another common product is the Canopy Height Model (CHM) or normalised DSM (nDSM), which is defined as the bare earth surface subtracted from the first return DSM. In terms of the historic environment, while the DSM provides environmental context for the model and should always be viewed to identify areas where the data may be affected by dense vegetation, the DTM, or filtered model, is most commonly used to view the terrain that is otherwise masked by vegetation, and as such it is worth discussing the processing required to create a DTM in more detail here.

Filtering

The removal of non-terrain points is undertaken by classification of the point cloud into points that represent terrain and points that represent all other features. The non-terrain points can be identified and removed using a variety of algorithms that have been developed to automate this procedure. Sithole and Vosselman (2004) provide a detailed evaluation of many filtering approaches with respect to their accuracy, which are found to be generally good for rural and level terrain but worse in complex urban or rough, vegetated terrain. This is because the simplest approaches apply only a local minimum height filter, which leads to systematic errors in hilly or rough terrain (Briese, 2010:127). In addition, less sophisticated filtering techniques have been noted to remove archaeological features from the terrain model and add artefacts (Crutchley, 2010). Recently more sophisticated approaches have been developed, including segmentation-based methods and the identification of breaklines such as building edges as a pre-filtering step to improve the final interpolation, though as yet no fully automated procedure has been found that can be applied universally to all landscape areas (Briese, 2010:139). This means that manual checking and editing of the model is necessary to improve the results of the automated process, though this tends to be far more intensive in urban areas with complex local surface characteristics (ibid). It is worth noting that adaptive morphological methods such as those proposed by Axelsson (2000) and Chen et al. (2007) are the filtering techniques used by Terrascan and LAStools software and so are likely to be the most common.

For full waveform data, the echo width and amplitude can be used to improve the classification and filtering process, particularly in areas of dense, low vegetation such as forest understory. Although these techniques are still in development, they have been shown to be very effective at defining ground hits from low-level vegetation based on texture (Doneus and Briese, 2010).

Figure 5. An example of classification of points based on return which forms the most basic method to filter non-terrain points from the DSM

Interpolation

Although the general morphology of a landscape is an important feature for archaeologists to observe, most individual sites and features representing past human interaction with the landscape can be observed as microtopographic changes. Such features are difficult to visualise from the point cloud itself, so while the preprocessing steps described above use the point cloud, for visualisation the survey is most often processed by interpolating the x, y, z points into a 2.5D surface, either as a raster grid or a triangulated irregular network (TIN). Although a TIN or mesh is commonly used for terrestrial and object laser scanning (see Remondino, this volume), for archaeological applications to date most ALS data are rasterised. Rasterisation is advantageous for the landscape researcher as it allows ALS data to be visualised, processed, interrogated and interpreted in a Geographical Information System (GIS) alongside a range of other geographical or archival data, such as historic mapping, aerial photographs and feature data derived from Historic Environment Records. The disadvantage of rasterisation is the loss of geometric complexity as described below.

The process of interpolating takes the data from a number of points to provide a height for a cell in the image (Figure 6). There are many methods of interpolation, from the most basic operation of taking the mean, median or modal height of the points within an area to complex weighting of points and the incorporation of breaklines to negate the impact of smoothing when interpolating over sharp changes in topography (Briese, 2010:125). Any of the common interpolation methods can be used; typically nearest neighbour, inverse distance weighting (IDW), linear functions (regularised, bicubic or bilinear spline) or kriging are the most common. In practice, determining the “best” interpolation method depends on the topography, so trialling a number of techniques on sample areas is often necessary. The accuracy of the interpolated models can best be assessed by the collection of ground-observation points via Real Time Kinematic (RTK) GPS survey. There is no ‘standard’ or ‘best’ method for the interpolation of point data to raster, but users should be aware that models created using different interpolation methods will represent microtopography differently. Consequently, if visualisation techniques are to be compared with each other they should all be based on the same interpolation technique.

Figure 6. Two examples of common interpolation techniques: IDW (left) and Bicubic Spline (right)
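To make the interpolation step concrete, the following is a minimal inverse distance weighting (IDW) gridding sketch in Python (NumPy/SciPy). The cell size, neighbour count and power are illustrative choices to be tuned per survey, not values prescribed by the chapter.

```python
# Minimal IDW gridding of scattered ground returns (x, y, z) onto a regular raster.
import numpy as np
from scipy.spatial import cKDTree

def idw_grid(x, y, z, cell=0.5, k=8, power=2.0):
    """Interpolate scattered points onto a raster; returns grid axes and the height grid."""
    xi = np.arange(x.min(), x.max(), cell)
    yi = np.arange(y.min(), y.max(), cell)
    gx, gy = np.meshgrid(xi, yi)
    tree = cKDTree(np.column_stack([x, y]))
    dist, idx = tree.query(np.column_stack([gx.ravel(), gy.ravel()]), k=k)
    dist = np.maximum(dist, 1e-12)          # avoid division by zero on coincident points
    w = 1.0 / dist**power
    zi = np.sum(w * z[idx], axis=1) / np.sum(w, axis=1)
    return xi, yi, zi.reshape(gx.shape)

# Usage with synthetic points:
# x, y = np.random.rand(2, 1000) * 100
# z = np.sin(x / 10.0) + 0.05 * np.random.randn(1000)
# xi, yi, dtm = idw_grid(x, y, z)
```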
2.1.4 INTENSITY DATA

In addition to height data, the ALS sensor also captures the intensity of the returned beam, and there has been some speculation regarding the use of the intensity measure as a means to detect archaeological features with varying reflectance properties (Coren et al., 2005; Challis et al., 2011). While the intensity has been used for a number of studies including environmental applications
such as canopy determination (e.g. Donoghue et al., 2007) and landcover classification (e.g. Yoon et al., 2008), and earth science applications such as volcanology (e.g. Spinetti et al., 2009) and glaciology (e.g. Lutz et al., 2003), there are a number of problems with its application to archaeological prospection at the present time. The foremost of these is the fact that the intensity measure is affected by many factors in addition to the reflectance properties of the target, including the range distance from the sensor to the target, the power of the laser, the angle of reflection, the optical properties of the system and attenuation through the atmosphere (see Starek et al. (2006) for a fuller discussion). Additionally, recent research has shown that standard ALS wavelengths (generally between 800 nm and 1550 nm) are generally less sensitive for archaeological feature detection than shorter NIR wavelengths and, due to the lack of calibration of most models of ALS scanner, the intensity bears little correlation to reflectance recorded at the same wavelengths by an airborne spectral sensor (Bennett, 2012).

Figure 7. Comparison of visualisation techniques mentioned in this chapter
Sensor technology is improving, with calibrated ALS / hyperspectral systems such as that developed by the Finnish Geodetic Institute (Hakala et al., 2012) soon to be available commercially. Although these have yet to be tested regarding their application for archaeological research, it is anticipated that this new generation of combined sensors will provide higher quality spectral information than the intensity data currently collected. For now at least, the use of ALS intensity data for archaeological prospection is limited in scope by the factors listed above and users may find analysis of complementary data a more profitable use of time.
2.1.5 VISUALISATION TECHNIQUES

Due to their subtle topography, archaeological features can be difficult to determine from the DTM, even when the height component is exaggerated to highlight topographic change. To map these features some form of visualisation technique is required to highlight their presence in the DTM to the viewer. This section covers the most common forms of visualisation applied to archaeological research and explains their uses, and some pitfalls, for archaeological prospection. Figure 7 gives an example of each of the visualisation techniques mentioned below.

Shaded Relief models

The creation of shaded relief models is the most common process used to visualise ALS data for archaeology (Crutchley 2010). This technique takes the elevation model and calculates shade from a given solar direction (or azimuth) and altitude (height above the horizon – see Figure 8), thus highlighting topographic features (Horn, 1981). Shaded relief models provide familiar, photogenic views of the landscape and can be used to mimic the ideal raking light conditions favoured by aerial photographic interpreters (Wilson, 2000:46).

Despite their frequent use and familiarity, shaded relief images pose some problems for the interpretation and mapping of archaeological features. Linear features that align with the direction of illumination will not be easily visible in the shaded relief model, requiring multiple angles of illumination to be calculated and inspected (Devereux et al., 2008). To mimic raking light (and so highlight micro-topography) the shaded model must also be calculated with a low solar altitude, typically 8°-15°. This means that shaded relief models work poorly in areas of substantial macro-topographic change, with deep shadows obscuring micro-topography regardless of illumination direction (Hesse, 2010).

Recent research by the author has also shown that the choice of both the azimuth and angle of light impacts feature visibility in a quantifiable way. For example, altering the angle of the light from the “standard” output of 45° to 10° improved feature detection by 6%. Additionally, when eight shaded-relief models with identical altitude of illumination but varying azimuths were assessed, a 12% difference in the number of detectable features was observed between the best and worst performing angles (see case study and Bennett, 2012). These differences result in the requirement to create and assess multiple models from a variety of illumination angles and azimuths; a serious expenditure of time for significant yet diminishing return. One of the proposed solutions to this is the statistical combination of the shaded-relief models through Principal Components Analysis (see below).

A final point to consider when using these models to plot potential archaeological features is locational inaccuracy. As the shaded-relief model is a computation of light and shade, the perceived location of features alters as the angle of illumination is changed (Figure 9), as the observer plots not the topographic feature itself but the area of light or shadow. This can lead to substantial locational inaccuracies, especially when using the low angle light required to highlight microtopography (see case study).

Figure 8. Angle and Illumination of a shaded relief model
Figure 9. Different angles of illumination highlighting different archaeological features

In all, the shaded-relief model provides an aesthetically pleasing view of the landscape for illustrative purposes but, due to the issues outlined above, it should always be combined with at least one other visualisation technique in order to map potential archaeological features.
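A small shaded-relief (hillshade) sketch in Python/NumPy illustrates the azimuth and altitude parameters discussed above. The 315°/45° defaults and the axis conventions are common GIS choices, not values prescribed by the chapter, and this is only one of several equivalent formulations.

```python
# Hillshade sketch: shade from a chosen solar azimuth and altitude.
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Return an 8-bit shaded-relief image of a DEM given as a 2D array of heights."""
    az = np.radians(360.0 - azimuth_deg + 90.0)   # compass azimuth -> mathematical angle
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cellsize)     # gradients along rows (y) and columns (x)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return (255 * np.clip(shaded, 0.0, 1.0)).astype(np.uint8)

# Lowering altitude_deg (e.g. to 10) mimics raking light and emphasises micro-topography,
# at the cost of deep shadows where the macro-topography is pronounced.
```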
Principal Components Analysis of Multiple Shaded Relief Images
Principal Components Analysis (PCA) is a multivariate statistical technique used to reduce redundancy in multi-dimensional or multi-temporal images. It has been skilfully applied by Kvamme to geophysical data (2006) and is used for minimising the number of images to be analysed. PCA has also received some attention in archaeological work (Winterbottom and Dawson, 2005; Challis et al., 2008; Devereux et al., 2008).

While the PCA transformation reduces the dimensionality of the shaded relief technique, the interpreter must still analyse a large number of shaded images to access the information content of the terrain model. Also, to ensure the most representative model of the topography, every possible angle and azimuth should be processed. At the time of writing this approach has never been undertaken; the only published method for using the technique with ALS shaded relief images used 16 angles of illumination at the same azimuth (Devereux et al., 2008). The limit on the number of input images is principally due to the relatively diminished return of new information compared with the increased costs in terms of computation and interpretation time.

PC images represent statistical variance in light levels of the shaded relief models, rather than the topographic data collected by the sensor. While this might seem a pedantic distinction to make, the visibility of archaeological features is highly dependent on the angle and azimuth of illumination. The PCA will reduce some of this directional variability but cannot account for features that were poorly represented in the original shaded relief images. The output of the PCA will therefore be highly influenced by the selection of these factors at the outset, and this could prove a limiting factor for subsequent interpretation. Consequently, the choices made in the processing of shaded relief and PC images (see above) may mask features that were present in the original ALS data. Additionally, for profiles drawn across archaeological features in a PC image, the z component of the profile will not be a logical height measurement as in the original DTM but a product of the statistical computation of varying light levels. While related to the topographic features, this light-level scale is unhelpful when trying to quantify or describe a feature as it is entirely dependent on the input parameters chosen for the shaded-relief models.

Although applying PCA to the shaded-relief images does reduce redundancy somewhat, as the first PC will typically contain 95-99% of all variation in an image or data stack, it has been shown that significant archaeological information is detectable in subsequent PCs (Bennett et al., 2012). As PCA can compute as many PC images to assess as original input images used, it is still necessary to assess multiple images to derive the full archaeological potential from this technique.

Slope and Aspect

Slope, aspect and curvature maps are commonly used for analysing topographic data in other geographic disciplines. Slope mapping produces a raster that gives slope values for each grid cell, stated in degrees of inclination from the horizontal. Aspect mapping produces a raster that indicates the direction that slopes are facing, represented by the number of degrees north of east. Although common for geographical applications, there has been limited application of slope, aspect and curvature mapping for the detection of micro-topographic change relating to archaeological features, though coarse resolution aspect and slope terrain maps are well established in predictive models of site location (Kvamme and Jochim, 1989; Challis et al., 2011). It is anticipated that topographic anomalies relating to archaeological features will be identifiable in these images; in particular the slope and aspect maps may aid pattern recognition for features such as the lynchets of a field system.

Horizon Modelling or Sky View Factor

To overcome some shortfalls of shaded relief models, specifically the issues of illumination angle and multi-dimensionality of data, the technique of horizon or sky view factor (SVF) modelling has been applied recently by researchers in Slovenia (Kokalj et al., 2011). The calculation is based on the method used to compute shadows for solar irradiation models. The algorithm begins at a low azimuth angle from a single direction and computes at what point the light from that angle ‘hits’ the terrain. The angle is increased until it reaches the angle where it is higher than any point in the landscape (on that line of sight). This procedure is then replicated for a specified number of angles, producing a number of directional files which can then be added together to produce a model that reflects the total amount of light that each pixel is exposed to as the sun angle crosses the hemisphere above it. Consequently, positive features appear brighter and negative features are darker, replicating the visual results of the shaded relief models but without the bias caused by the direction of illumination. As with all light-level techniques, the SVF does not provide a direct representation of topographic change and additionally has been noted to accentuate data artefacts more than other techniques (Bennett et al., 2012).

Local Relief Modelling (LRM)

While shaded models provide useful images, there has been much recent emphasis on developing better methods for extracting the micro-topography that represents archaeological or modern features from the landscape that surrounds them while retaining the height information as recorded by the sensor. One of these methods, Local Relief Modelling or LRM, devised by Hesse (2010) for analysing mountainous and forested terrain in Germany, has received particular attention for its robust methodology and accurate results. The technique reduces the effect of the macro-topography while retaining the integrity of the micro-topography, including archaeological features, by subtracting a low pass filtered model from the original DTM and extracting features outlined by the 0 m contour. The advantage of this technique over the others mentioned is that it allows the
creation of a model that is not only unaffected by shadow but which retains its topographic integrity allowing measurements to be calculated from it in a way that is not possible using shaded relief models, PCA or Horizon View mapping. However the extent of distortion of the micro-topographic feature extracted has yet to be quantified as the development of the model took place without any ground control data.
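A simplified local relief model can be sketched as follows: subtract a low-pass filtered copy of the DTM so that macro-topography is suppressed and local (micro-topographic) deviations remain. The kernel size is an assumption to be tuned per landscape, and Hesse's full workflow additionally re-interpolates a purged DEM from the 0 m contour, which is omitted here.

```python
# Simplified LRM sketch: DTM minus its low-pass filtered version.
import numpy as np
from scipy.ndimage import uniform_filter

def local_relief(dtm, kernel_cells=25):
    """Return the local relief (positive = local high, negative = local low)."""
    smoothed = uniform_filter(dtm.astype(float), size=kernel_cells)
    return dtm - smoothed
```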
Although developed for mountain environments, the technique has also been applied to gently undulating landscapes to highlight archaeological features (see case study). Due to the isolation of the micro-topography, the LRM model could also have the potential to be used as a base topographic layer for digital combination with other data.

Selecting a visualisation technique

Unfortunately for users of ALS data, there is no “perfect” visualisation technique for identifying archaeological topography. However, thanks to the development of techniques such as the LRM and SVF described above, archaeologists now have access to both generic and specific tools to visualise ALS data. Understanding how and when to apply these techniques is not an easy task, and until recently there was little published comparative data, meaning that users could not assess the appropriateness of any technique for their research environment. To this end readers are advised to consult the case study presented with this chapter, which focuses on the comparison of ALS visualisation techniques for a site on Salisbury Plain, Wiltshire, UK (see Bennett et al., 2012 for a further publication relating to this).

Consulting the existing comparative papers by Challis et al. (2011), Bennett et al. (2012) and Štular et al. (2012) is also a good place to start, but visualisation techniques will need to be tailored to the landscape surveyed and the type of feature to be detected. It is recommended that a pilot study is undertaken using a number of sample areas. One of these comparisons assesses a range of techniques for a wooded, mountainous environment, including colour-ramped DTMs, slope, LRM (termed there trend removal), SVF and a number of variants of solar illumination. The authors also incorporate a survey of 12 users with a range of experience in ALS data interpretation and propose a method of quantifying the efficiency of the techniques by assessing contrast between cells of the image using the median of five different standard deviations. Noise is also calculated using the standard deviations of the standard deviations, although there is little further explanation of how this enabled the authors to quantify contrast and noise.

These comparisons all conclude that there is no silver bullet, but all the papers agree on the following points:

– Although visually pleasing and the most commonly used visualisation technique, shaded-relief modelling is a poor method for identifying and accurately mapping archaeological features;
– Multi-method analysis is recommended, with LRM, SVF and slope shown to be valuable techniques;
– Users should try to familiarise themselves with the potential pitfalls of any technique prior to its application;
– The most effective and appropriate selection comes from the trial of a number of visualisation techniques for a given environment.
A = design matrix with the coefficients of the linearized collinearity equations; x = unknowns vector (exterior parameters, 3D object coordinates, eventually interior parameters); l = observation vector (i.e. the measurements).

Generally a weight matrix P is added in order to weight the observations and unknown parameters during the estimation procedure. The estimation of x and the variance factor σ₀² is usually (but not exclusively) attempted as an unbiased, minimum variance estimation, performed by means of least squares, and results in:

$\hat{x} = (A^{T}PA)^{-1}A^{T}Pl$    (3)

with the residuals v and the standard deviation a posteriori (σ₀) given as:

$v = A\hat{x} - l$    (4)

$\sigma_0^2 = \frac{v^{T}Pv}{r}$    (5)

with r the redundancy of the system (number of observations − number of unknowns). The precision of the parameter vector x is controlled by its covariance matrix $C_{xx} = \sigma_0^2 (A^{T}PA)^{-1}$.

For $(A^{T}PA)$ to be uniquely invertible, as required in (Eq. 3), the image network needs to fix an external “datum”.

A typical value of the B/D ratio in terrestrial photogrammetry should be around 0.5, even if in practical situations it is often very difficult to fulfil this requirement. Generally, the larger the baseline, the better the accuracy of the computed object coordinates, although large baselines give rise to problems in finding automatically the same correspondences in the images, due to strong perspective effects. According to Fraser (1996), the accuracy of the computed 3D object coordinates (σXYZ) depends on the image measurement precision (σxy), the image scale and geometry (e.g. the scale number S), an empirical factor q and the number of images k:

$\sigma_{XYZ} = \frac{q \, S \, \sigma_{xy}}{\sqrt{k}}$    (6)

The collinearity principle and the Gauss-Markov model of least squares are valid and employed for all those images acquired with frame sensors (e.g. an SLR camera). In the case of linear array sensors, other mathematical approaches should be employed. The description of such methods is outside the scope of this chapter.

The entire photogrammetric workflow used to derive metric and accurate 3D information of a scene from a set of images consists of (i) camera calibration and image orientation, (ii) 3D measurements, (iii) structuring and modelling, (iv) texture mapping and visualization. Compared to the active range sensors workflow, the main difference lies in the 3D point cloud derivation: while range sensors (e.g. laser scanners) deliver the 3D data directly, photogrammetry requires the mathematical processing of the image data to derive the sparse or dense 3D point clouds needed to digitally reconstruct the surveyed scene.
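The estimation summarised in Eqs. 3-6 can be sketched numerically as below. A, l and P are placeholders for a linearised photogrammetric system; building them from real image measurements (collinearity equations and tie points) is not shown here, and this is an illustrative sketch rather than the chapter's own implementation.

```python
# Weighted least-squares (Gauss-Markov) sketch plus Fraser's accuracy rule of thumb.
import numpy as np

def gauss_markov(A, l, P=None):
    """Return x_hat, residuals v, sigma0^2 and covariance Cxx for the system A x = l."""
    n, m = A.shape
    if P is None:
        P = np.eye(n)
    N = A.T @ P @ A                          # normal matrix (A^T P A)
    x_hat = np.linalg.solve(N, A.T @ P @ l)  # Eq. (3)
    v = A @ x_hat - l                        # Eq. (4): residuals
    r = n - m                                # redundancy
    sigma0_sq = float(v.T @ P @ v) / r       # Eq. (5): a-posteriori variance factor
    Cxx = sigma0_sq * np.linalg.inv(N)       # covariance of the estimated parameters
    return x_hat, v, sigma0_sq, Cxx

def sigma_xyz(q, scale_number, sigma_xy, k_images):
    """Eq. (6): expected object-space precision from image measurement precision."""
    return q * scale_number * sigma_xy / np.sqrt(k_images)
```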
Table 1: Photogrammetric procedures for calibration, orientation and point positioning

| Method | Observations | Unknowns |
| General bundle adj. | tie points, evt. datum | exterior param., 3D coordinates |
| Self-calibrating bundle adj. | tie points, evt. datum | interior and exterior param., 3D coordinates |
| Resection | tie points, 3D coordinates | interior and exterior param. |
| Intersection | tie points, interior and exterior param. | 3D coordinates |
Figure 2. A typical terrestrial image network acquired ad-hoc for a camera calibration procedure, with convergent and rotated images (a). A set of terrestrial images acquired ad-hoc for a 3D reconstruction purpose (b)
3.1.4 DIGITAL CAMERA CALIBRATION AND IMAGE ORIENTATION

Camera calibration and image orientation are procedures of fundamental importance, in particular for all those Geomatics applications which rely on the extraction of accurate 3D geometric information from images. The early theories and formulations of orientation procedures were developed many years ago and today there is a great number of procedures and algorithms available (Gruen and Huang, 2001). Sensor calibration and image orientation, although conceptually equivalent, follow different strategies according to the employed imaging sensors. The camera calibration procedure can be divided into geometric and radiometric calibration (in the following only the geometric calibration of terrestrial frame cameras is reported). A camera calibration procedure determines the interior parameters, while the exterior parameters are determined with the image orientation procedure. In photogrammetry the two procedures are very often separated, as an image network optimal for camera calibration is not optimal for image orientation (Fig. 2). Other approaches fuse the determination of interior and exterior parameters using the same set of images and procedure, but the results are normally poor and not very accurate.

3.1.5 GEOMETRIC CAMERA CALIBRATION

The geometric calibration of a camera (Remondino & Fraser, 2006) is defined as the determination of deviations of the physical reality from a geometrically ideal imaging system based on the collinearity principle: the pinhole camera. Camera calibration continues to be an area of active research within the Computer Vision community, with a perhaps unfortunate characteristic of much of the work being that it pays too little heed to previous findings from photogrammetry. Part of this might well be explained in terms of a lack of emphasis and interest in accuracy aspects and a basic premise that nothing whatever needs to be known about the camera which is to be calibrated within a linear projective rather than Euclidean scene reconstruction. In photogrammetry, a camera is considered calibrated if its focal length, principal point offset and a set of Additional Parameters (APs) are known. The camera calibration procedure is based on the collinearity model, which is extended in order to model the systematic image errors and reduce the physical reality of the sensor geometry to the perspective model. The model which has proved to be the most effective, in particular for close-range sensors, was developed by D. Brown (1971) and expresses the corrections (Δx, Δy) to the measured image coordinates (x, y) as:
$\Delta x = \Delta x_0 + \bar{x}\frac{\Delta f}{f} + \bar{x}r^{2}K_1 + \bar{x}r^{4}K_2 + \bar{x}r^{6}K_3 + P_1(r^{2} + 2\bar{x}^{2}) + 2P_2\bar{x}\bar{y} + S_x\bar{x} + a\bar{y}$    (7)

$\Delta y = \Delta y_0 + \bar{y}\frac{\Delta f}{f} + \bar{y}r^{2}K_1 + \bar{y}r^{4}K_2 + \bar{y}r^{6}K_3 + 2P_1\bar{x}\bar{y} + P_2(r^{2} + 2\bar{y}^{2})$    (8)

with:

$\bar{x} = x - x_0; \quad \bar{y} = y - y_0; \quad r^{2} = \bar{x}^{2} + \bar{y}^{2}$
Brown’s model is generally called a “physical model” as all its components can be directly attributed to physical error sources. The individual parameters represent:

Δx0, Δy0, Δf = corrections for the interior orientation elements;
Ki = parameters of radial lens distortion;
Pi = parameters of decentering distortion;
Sx = scale factor in x to compensate for possible non-square pixels;
a = shear factor for non-orthogonality and geometric deformation of the pixel.
Figure 3. Radial (a) and decentering (b) distortion profiles for a digital camera set at different focal lengths

The three APs used to model radial distortion Δr are generally expressed via the odd-order polynomial Δr = K1r³ + K2r⁵ + K3r⁷, where r is the radial distance. A typical Gaussian radial distortion profile Δr is shown in Fig. 3a, which illustrates how radial distortion can vary with focal length. The coefficients Ki are usually highly correlated, with most of the error signal generally being accounted for by the cubic term K1r³. The K2 and K3 terms are typically included for photogrammetric (low distortion) and wide-angle lenses, and in higher-accuracy vision metrology applications. The commonly encountered third-order barrel distortion seen in consumer-grade lenses is accounted for by K1.

Decentering distortion is due to a lack of centering of the lens elements along the optical axis. The decentering distortion parameters P1 and P2 are invariably strongly projectively coupled with x0 and y0. Decentering distortion is usually an order of magnitude or more smaller than radial distortion and it also varies with focus, but to a much lesser extent, as indicated by the decentering distortion profiles shown in Fig. 3b. The projective coupling between P1 and P2 and the principal point offsets (Δx0, Δy0) increases with increasing focal length and can be problematic for long focal length lenses. The extent of coupling can be diminished, during the calibration procedure, through both the use of a 3D object point array and the adoption of higher convergence angles for the images.

The solution of a self-calibrating bundle adjustment leads to the estimation of all the interior parameters and APs, starting from a set of manually or automatically measured image correspondences (tie points). Critical to the quality of the self-calibration is the overall network geometry and especially the configuration of camera stations. Some good hints and practical rules for camera calibration can be summarized as follows:
acquire a set of images of a reference object, possibly constituted of coded targets which can be automatically and accurately measured in the images;
the image network geometry should be favourable, i.e. the camera station configuration must comprise highly convergent images, acquired at different distances from the scene, with orthogonal roll angles and a large number of well distributed 3D object points;
the accuracy of the image network (and so of the calibration procedure) increases with increasing convergence angles for the imagery, the number of rays to a given object point and the number of measured points per image (although the incremental improvement is small beyond a few tens of points);
a planar object point array can be employed for camera calibration if the images are acquired with orthogonal roll angles, a high degree of convergence and, desirably, varying object distances;
orthogonal roll angles must be present to break the projective coupling between IO and EO parameters. Although it might be possible to achieve this decoupling without 90° image rotations, through provision of a strongly 3D object point array, it is always recommended to have ‘rolled’ images in the self-calibration network.
Nowadays self-calibration via the bundle adjustment is a fully automatic process requiring nothing more than images recorded in a suitable multi-station geometry, an initial guess of the focal length and image sensor characteristics (and it can be a guess) and some coded targets which form a 3D object point array. A 2D flat paper with some targets on it could be used to calibrate a camera but, due to the flatness of the scene, great care must be taken in the image acquisition.
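As a hedged illustration of how the calibration parameters described above are applied, the sketch below implements only the radial and decentering terms of Brown's model (Eqs. 7-8) as corrections to measured image coordinates; the focal-length, scale and shear terms are omitted, and the parameter values in the usage comment are arbitrary examples rather than real calibration results.

```python
# Illustrative radial + decentering corrections following Brown's physical model.
import numpy as np

def brown_correction(x, y, x0, y0, K1=0.0, K2=0.0, K3=0.0, P1=0.0, P2=0.0):
    """Return corrections (dx, dy) for radial and decentering distortion about (x0, y0)."""
    xb, yb = x - x0, y - y0
    r2 = xb**2 + yb**2
    dr = K1 * r2 + K2 * r2**2 + K3 * r2**3          # radial term expressed as a factor of r^2
    dx = xb * dr + P1 * (r2 + 2 * xb**2) + 2 * P2 * xb * yb
    dy = yb * dr + P2 * (r2 + 2 * yb**2) + 2 * P1 * xb * yb
    return dx, dy

# corrected = measured + correction, e.g.:
# dx, dy = brown_correction(1.23, -0.85, 0.01, -0.02, K1=-2.5e-4)
# x_corr, y_corr = 1.23 + dx, -0.85 + dy
```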
3.1.6 IMAGE ORIENTATION

In order to survey an object, a set of images needs to be acquired, considering that a detail can be reconstructed in 3D if it is visible in at least 2 images. The orientation procedure is then performed to determine the position and attitude (angles) from which the images were acquired in order, afterwards, using the collinearity principle, to derive 3D information. Having the images (and the calibration parameters), a set of tie points needs to be identified (manually or automatically) in the images – at least 5 – respecting the fact that the points are well distributed on the entire image format and neither coplanar nor collinear. These image observations are then used to form a system of collinearity equations (Eq. 1), iteratively solved with the Gauss-Markov model of least squares (Eq. 2 and 3) in order to derive the sought EO parameters and 3D coordinates of the measured tie points.

A typical set of images acquired for 3D reconstruction purposes forms a network which is generally not suitable for a calibration procedure. Therefore it is always better to separate the two photogrammetric steps or to adopt a set of images suitable for both procedures.

3.1.7 PHOTOGRAMMETRIC 3D POINT CLOUD GENERATION

Once the camera parameters are known, the scene measurements can be performed with manual or automated procedures. The measured 2D image correspondences are converted into unique 3D object coordinates (3D point cloud) using the collinearity principle and the known exterior and interior parameters previously recovered. According to the surveyed scene and project requirements, sparse or dense point clouds are derived (fig. 4 and 5).

Manual (interactive) measurements, performed in monocular or stereoscopic mode, derive sparse point clouds necessary to determine the main 3D geometries and discontinuities of an object. Sparse reconstructions are adequate for architectural or 3D city modelling applications, where the main corners and edges must be identified to reconstruct the 3D shapes (fig. 4a) (Gruen & Wang, 1998; El-Hakim, 2002). A relative accuracy in the range 1:5,000-20,000 is generally expected for such kinds of 3D models.

On the other hand, automated procedures (“dense image matching”) are employed when dense surface measurements and reconstructions are required, e.g. to derive a Digital Surface Model (DSM) to document detailed and complex objects like reliefs, statues, excavation areas, etc. (fig. 4b,c). The latest developments in automated image matching (Pierrot-Deseilligny & Paparoditis, 2006; Hirschmuller, 2008; Remondino et al., 2008; Hiep et al., 2009; Furukawa & Ponce, 2010) are demonstrating the great potential of the image-based 3D reconstruction method at different scales of work, comparable to point clouds derived using active range sensors and with a reasonable level of automation. Overviews of stereo and multi-image matching techniques can be found in (Scharstein & Szeliski, 2002; Seitz et al., 2006). Some commercial, open-source and web-based tools are also available to derive dense point clouds from a set of images (Photomodeler Scanner, MicMac, PMVS, etc.).

3.1.8 POLYGONAL MODEL GENERATION

Once a sparse or dense point cloud is obtained, a polygonal model (“mesh” or TIN) is generally produced for texturing purposes, better visualization and other issues. This process is logically subdivided into several sub-steps that can be completed in different orders depending on the 3D data source (Berger et al., 2011). In the case of sparse point clouds, the polygonal elements are normally created with an interactive procedure, firstly creating lines, then polygons and finally surfaces.

In the case of dense point clouds, while meshing is a fairly straightforward step for structured point clouds, for an unstructured point cloud it is not so immediate. It requires a specific process like Delaunay triangulation, involving a projection of the 3D points onto a plane or another primitive surface, a search for the shortest point-to-point connections with the generation of a set of potential triangles that are then reprojected into 3D space and topologically verified. For this reason the mesh generation from unstructured clouds may consist in: a) merging the 2.5D point clouds, reducing the amount of data in the overlapped areas and generating in this way a uniform resolution, fully 3D cloud; b) meshing with a more sophisticated procedure than a simple Delaunay. The possible approaches for this latter step are based on: (i) interpolating surfaces, which build a triangulation with more elements than needed and then prune away triangles not coherent with the surface (Amenta & Bern, 1999); (ii) approximating surfaces, where the output is often a triangulation of a best-fit function of the raw 3D points (Hoppe et al., 1992; Cazals & Giesen, 2006).

Dense image matching results generally consist of unstructured 3D point clouds that can be processed with the same approach used for the above mentioned laser scanner unstructured point clouds. No alignment phase is needed, as the photogrammetric process delivers a unique point cloud of the surveyed scene.
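The projection-based Delaunay approach described above can be sketched very compactly for the 2.5D case. This is a minimal sketch under the assumption that the point cloud is projectable onto the XY plane; real pipelines add topological verification and handle truly 3D (non-projectable) surfaces.

```python
# Minimal 2.5D meshing: triangulate in the XY plane and lift the triangles back to 3D.
import numpy as np
from scipy.spatial import Delaunay

def mesh_25d(points_xyz: np.ndarray):
    """Return (vertices, triangle index array) for a 2.5D point cloud of shape (N, 3)."""
    tri = Delaunay(points_xyz[:, :2])     # triangulation computed on the planar projection
    return points_xyz, tri.simplices      # each row of simplices indexes three vertices
```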
Figure 4. 3D reconstruction of architectural structures with manual measurements in order to generate a simple 3D model with the main geometrical features (a). Dense 3D reconstruction via automated image matching (b). Digital Surface Model (DSM) generation from satellite imagery (GeoEye stereo-pair) for 3D landscape visualization (c)
3.1.9 TEXTURE MAPPING AND VISUALIZATION

A polygonal 3D model can be visualized in wireframe, shaded or textured mode. A textured 3D geometric model is probably the most desirable 3D object documentation for most users, since it gives, at the same time, a full geometric and appearance representation and allows unrestricted interactive visualization and manipulation under a variety of lighting conditions. The photo-realistic representation of a polygonal model (or even a point cloud) is achieved by mapping a colour image onto the 3D geometric data. The 3D data can be in the form of points or triangles (mesh), according to the applications and requirements. The texturing of 3D point clouds (point-based rendering techniques, Kobbelt & Botsch, 2004) allows a faster visualization, but for detailed and complex 3D models it is not an appropriate method. In the case of meshed data, the texture is automatically mapped if the camera parameters are known (e.g. if it is a photogrammetric model and the images are oriented), otherwise an interactive procedure is required (e.g. if the model has been generated using range sensors and the texture comes from a separate imaging sensor). Indeed, homologous points between the 3D mesh and the 2D image to be mapped must be identified in order to find the alignment transformation necessary to map the colour information onto the mesh. Although some automated approaches have been proposed in the research community (Lensch et al., 2000; Corsini et al., 2009), no automated commercial solution is available and this is a bottleneck of the entire 3D modelling pipeline. Thus, in practical cases, the 2D-3D alignment is done with the well-known DLT approach (Abdel-Aziz & Karara, 1971), often referred to as the Tsai method (Tsai, 1986). Corresponding points between the 3D geometry and a 2D image to be mapped are sought to retrieve the unknown interior and exterior camera parameters. The colour information is then projected (or assigned) to the surface polygons using a colour-vertex encoding, a mesh parameterization or an external texture.

Figure 5. 3D reconstruction from images: according to the project needs and requirements, sparse or dense point clouds can be derived

In the texture mapping phase some problems can arise due to lighting variations of the images, surface specularity and camera settings. Often the images are exposed with the illumination at imaging time, but it may need to be replaced by illumination consistent with the rendering point of view and the reflectance properties (BRDF) of the object (Lensch et al., 2003). High dynamic range (HDR) images might also be acquired to recover all scene details and illumination (Reinhard et al., 2005), while colour discontinuities and aliasing effects must be removed (Debevec et al., 2004; Umeda et al., 2005).

In Computer Graphics applications, the texturing can also be performed with techniques able to graphically modify the derived 3D geometry (displacement mapping) or to simulate surface irregularities without touching the geometry (bump mapping, normal mapping, parallax mapping).

The photo-realistic 3D product finally needs to be visualized, e.g. for communication and presentation purposes. In the case of large and complex models the point-based rendering technique does not give satisfactory results and does not provide realistic visualization. The visualization of a 3D model is often the only product of interest for the external world, remaining the only possible contact with the 3D data. Therefore a realistic and accurate visualization is often required. Furthermore, the ability to easily interact with a huge 3D model is a continuing and increasing problem. Indeed, model sizes (both in geometry and texture) are increasing at a faster rate than computer hardware advances, and this limits the possibilities of interactive and real-time visualization of the 3D results. Due to the generally large amount of data and its complexity, the rendering of large 3D models is done with multi-resolution approaches displaying the large meshes with different Levels of Detail (LOD), simplification and optimization approaches (Dietrich et al., 2007).
3.1.10 OTHER IMAGE-BASED TECHNIQUES

The most well-known technique similar to photogrammetry is computer vision (Hartley and Zisserman, 2001). Even if accuracy is not the primary goal, computer vision approaches are producing very interesting results for visualization purposes, object-based navigation, location-based services, robot control, shape recognition, augmented reality, annotation transfer or image browsing purposes. The typical computer vision pipeline for scene modelling is named “structure from motion” (Pollefeys et al., 2004; Pollefeys et al., 2008; Agarwal et al., 2009) and it is getting quite common in applications where metric accuracy is not the primary aim. For photogrammetry, the greatest benefit of the recent advances in computer vision is the continuous development of new automated image analysis algorithms and 3D reconstruction methods. These have been adopted by the photogrammetric community in order to automate most of the steps of the 3D modelling pipeline. Computer vision researchers have indeed developed different image processing tools which can be used e.g. for automated 3D reconstruction purposes: ARC3D, Photosynth, Bundler, etc., just to mention some of them.
There are also some image-based techniques which allow the derivation of 3D information from a single image. These methods use object constraints (Van den Heuvel, 1998; Criminisi et al., 1999; El-Hakim, 2000) or estimating surface normals instead of image correspondences with methods like shape from shading (Horn & Brooks, 1989), shape from texture (Kender, 1978), shape from specularity (Healey and Binford, 1987), shape from contour (Meyers et al., 1992), shape from 2D edge gradients (Winkelbach & Wahl, 2001).
3.2 UAV: PLATFORMS, REGULATIONS, DATA ACQUISITION AND PROCESSING F. REMONDINO
3.2.1 INTRODUCTION

According to the UVS (Unmanned Vehicle System) International definition, an Unmanned Aerial Vehicle (UAV) is a generic aircraft designed to operate with no human pilot onboard [1]. The simple term UAV is commonly used in the Geomatics community, but other terms like Drone, Remotely Piloted Vehicle (RPV), Remotely Operated Aircraft (ROA), Unmanned Combat Aerial Vehicle (UCAV), Micro Air Vehicle (MAV), Small UAV (SUAV), Low Altitude Deep Penetration (LADP) UAV, Low Altitude Long Endurance (LALE) UAV, Medium Altitude Long Endurance (MALE) UAV, Remote Controlled (RC) Helicopter and Model Helicopter are also often used, according to the propulsion system, altitude/endurance and the level of automation in the flight execution. The term UAS (Unmanned Aerial System) comprises the whole system composed of the aerial vehicle/platform (UAV) and the Ground Control Station (GCS). [2] defines UAVs as Uninhabited Air Vehicles while [3] defines UAVs as uninhabited and reusable motorized aerial vehicles.

In the past, the development of UAV systems and platforms was primarily motivated by military goals and applications. Unmanned inspection, surveillance, reconnaissance and mapping of inimical areas were the primary military aims. For Geomatics applications, the first experiences were carried out three decades ago, but only recently have UAVs become a common platform for data acquisition in the Geomatics field (Fig. 1). UAV photogrammetry [4-5] indeed opens various new applications in the close-range aerial domain, introducing a low-cost alternative to classical manned aerial photogrammetry for large-scale topographic mapping or detailed 3D recording of ground information, and being a valid complementary solution to terrestrial acquisitions. The latest UAV success and developments can be explained by the spread of low-cost platforms combined with amateur or SLR digital cameras and GNSS/INS systems, necessary to navigate the platforms, predict the acquisition points and possibly perform direct geo-referencing.

Although conventional airborne remote sensing still has some advantages and the tremendous improvements of very high-resolution satellite imagery are closing the gap between airborne and satellite mapping applications, UAV platforms are a very important alternative and solution for studying and exploring our environment, in particular for heritage locations or rapid response applications. Private companies are now investing in and offering photogrammetric products (mainly Digital Surface Models – DSM – and orthoimages) from UAV-based aerial images, as the possibility of using flying unmanned platforms with variable dimensions, small weight and high ground resolution allows flight operations to be carried out at lower costs compared to those required by traditional aircraft. Problems and limitations still exist, but UAVs are a really capable source of imaging data for a large variety of applications. The paper reviews the most common UAV systems and applications in the Geomatics field, highlighting open problems and research issues related to regulations and data processing. The entire photogrammetric processing workflow is also reported with different examples and critical remarks.
3.2.2 UAV PLATFORMS
The primary airframe types are fixed and rotary wing, while the most common launch/take-off methods are, besides the autonomous mode, air-, hand-, car/track-, canister- or bungee cord launched. A typical UAV platform for Geomatics purposes can cost from 1,000 Euro up to 50,000 Euro, depending on the on-board instrumentation, payload, flight autonomy, type of platform and degree of automation needed for its specific applications. Low-cost solutions are not usually able to perform autonomous flights, thus they always require human assistance in the take-off and landing phases. Low-cost and open-source platforms and toolkits were presented in [6-11]. Simple hand-launched UAVs which perform flights autonomously using MEMS-based (Micro Electro-Mechanical Systems) sensors or C/A code GPS for the auto-pilot are the most inexpensive systems [12], although stability in windy areas might be a problem.

Figure 1. Available Geomatics techniques, sensors and platforms for 3D recording purposes, according to the scene’s dimensions and complexity

Bigger and more stable systems, generally based on an Internal Combustion Engine (ICE), have longer endurance with respect to electric engine UAVs and, thanks to the higher payload, they allow medium format (reflex) cameras or LiDAR or SAR instruments on-board [13-18].

The developments and improvements at hardware and platform level are made in the robotics, aeronautical and optical communities, where breakthrough solutions are sought in order to miniaturize the optical systems, enhance the payload, achieve complete autonomous navigation and improve the flying performances [19-20]. Researchers have also performed studies on flying invertebrates to understand their movement capabilities, obstacle avoidance or autonomous landing/take-off capabilities [21-22].

Based on size, weight, endurance, range and flying altitude, UVS International defines three main categories of UAVs:

– Tactical UAVs, which include micro, mini, close-, short- and medium-range, medium-range endurance, low altitude deep penetration, low altitude long endurance and medium altitude long endurance systems. The mass varies from a few kg up to 1,000 kg, the range from a few km up to 500 km, the flight altitude from a few hundred metres up to 5 km, and the endurance from some minutes to 2-3 days.
– Strategic UAVs, including high altitude long endurance, stratospheric and exo-stratospheric systems, which fly higher than 20,000 m altitude and have an endurance of 2-4 days.
– Special task UAVs, like unmanned combat autonomous vehicles, lethal and decoy systems.

UAVs for Geomatics applications can be shortly classified according to their engine/propulsion system in:

– unpowered platforms, e.g. balloon, kite, glider, paraglider;
– powered platforms, e.g. airship, glider, propeller, electric or combustion engine.

Alternatively, they could be classified according to their aerodynamic and “physical” features as:

– lighter-than-air, e.g. balloon, airship;
– rotary wing, either electric or with combustion engine, e.g. single-rotor, coaxial, quadrocopter, multi-rotor;
– fixed wing, either unpowered, electric or with combustion engine, e.g. glider or high wing.

In Table 1, pros and cons of the different UAV typologies are presented, according to the literature review and the authors’ experience: rotor and fixed wing UAVs are compared to the more traditional low-cost aerial kites and balloons.
Table 1. Evaluation of some UAV platforms employed for Geomatics applications, according to the literature and the authors’ experience. The evaluation is from 1 (low) to 5 (high)

|                  | Fixed Wing (electric) | Fixed Wing (ICE engine) | Rotary wings (electric) | Rotary wings (ICE engine) | Kite / Balloon |
| Payload          | 3 | 3 | 4 | 2 | 4 |
| Wind resistance  | 4 | 2 | 3 | 2 | 4 |
| Minimum speed    | 4 | 2 | 2 | 4 | 4 |
| Flying autonomy  | – | 3 | 5 | 2 | 4 |
| Portability      | 3 | 2 | 2 | 3 | 3 |
| Landing distance | 4 | 3 | 2 | 4 | 4 |
3.2.3 UAV APPLICATIONS

Some civilian UAV applications are mentioned in [23], while [24] reports on UAV projects, regulations, classifications and applications in the mapping domain. The application fields where UAV images and photogrammetrically derived DSMs or orthoimages are generally employed include:

Agriculture: producers can take reliable decisions to save money and time (e.g. precision farming), get quick and accurate records of damages or identify potential problems in the field [25];

Forestry: assessments of woodlots, fire surveillance, vegetation monitoring, species identification, volume computation as well as silviculture can be accurately performed [7, 26-28];

Archaeology and architecture: 3D surveying and mapping of sites and man-made structures can be performed with low-altitude image-based approaches [29-33];

Environment: quick and cheap regular flights allow the monitoring of land and water at multiple epochs [34-35]; road mapping [36], cadastral mapping [37], thermal analyses [38], excavation volume computation, volcano monitoring [39] or natural resource documentation for geological analyses are also feasible;

Emergency management: UAVs are able to quickly acquire images for early impact assessment and rescue planning [40-42]. The flight can be performed over contaminated areas without any danger for the operators and without long pre-flight operations;

Traffic monitoring: surveillance, travel time estimation, trajectories, lane occupancies and incidence response are the most required information [43].

UAV images are also often used in combination with terrestrial surveying in order to close possible 3D modelling gaps and create orthoimages [44-45].

3.2.4 HISTORICAL FRAMEWORK AND REGULATIONS

UAVs were originally developed for military applications, for flight reconnaissance over enemy areas without any risk for human pilots. The first experiences for civil and Geomatics applications were carried out at the end of the 1970s [46], and their use has greatly increased in the last decades thanks to the fast improvement of platforms, communication technologies and software, as well as to the growing number of possible applications. The use of such flying platforms in civil applications made it necessary to increase the security of UAV flights in order to avoid dangers to human beings. Thus the international community started some years ago to define security criteria for UAVs. In particular, NATO and EuroControl started their cooperation in 1999 in order to prepare regulations for UAV platforms and flights. This work has not yet led to a common international standard, especially for civil applications, but the wide diffusion and commercialization of new UAV systems has pushed several national and international associations to analyze the operational safety of UAVs. Each country has one or more authorities involved in UAV regulation, each operating independently. Due to the absence of cooperation between these authorities, it is difficult to describe the specific aims of each of them without loss of generality. The elements of UAV regulations mainly aim to increase the reliability of the platforms, underlining the need for a safety certification of each platform and ensuring public safety. These rules are in continuous progress in most countries. As they are conditioned by technical developments and safety standards, rules and certifications should be set equal to those currently applied to comparable manned aircraft, although the most important issue, UAVs being unmanned, is the citizens' security in case of an impact.

UAVs currently have different safety levels according to their dimension, weight and onboard technology. For this reason, the rules applicable to each UAV cannot be the same for all platforms and categories. For example, in the U.S. safety is defined according to the use (public or civil), while in Europe it is defined according to the weight, as this parameter is directly connected to the damage a UAV can produce when a crash occurs. Other restrictions are defined in terms of minimum and maximum altitude, maximum payload, area to be surveyed, etc. The indirect control by a pilot from the GCS may lead to increased accidents due to human errors; for this reason, in several countries UAV operators need some training and qualifications.
3.2.5 DATA ACQUISITION AND PROCESSING
A typical image-based aerial survey with a UAV platform requires a flight or mission planning and the measurement of Ground Control Points (GCPs), if not already available, for geo-referencing purposes. After the acquisition, the images can be used for stitching and mosaicking purposes [9] or they can be the input of the photogrammetric process. In this case, camera calibration and image triangulation are initially performed in order to subsequently generate a Digital Surface Model (DSM) or a Digital Terrain Model (DTM). These products can finally be used for the production of ortho-images, for 3D modelling applications or for the extraction of further metric information. In Fig. 2 the general workflow is shown, while the single steps are discussed in more detail in the following sections.

Figure 2. Typical acquisition and processing pipeline for UAV images

Flight planning and image acquisition

The mission (flight and data acquisition) is normally planned in the lab with dedicated software, starting from the knowledge of the area of interest (AOI), the required Ground Sample Distance (GSD) or footprint, and the intrinsic parameters of the on-board digital camera. The desired image scale and the camera focal length are generally fixed in order to derive the mission flying height. The camera perspective centres ("waypoints") are then computed by fixing the longitudinal and transversal overlap of the strips (e.g. 80%-60%). All these parameters vary according to the goal of the flight: missions for detailed 3D model generation usually require high overlaps and low-altitude flights in order to achieve small GSDs, while quick flights for emergency surveying and management need wider areas to be recorded in a few minutes, at a lower resolution.
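As a minimal illustration of these relationships, the sketch below derives the flying height, ground footprint and waypoint spacing from an assumed GSD, focal length, pixel size and overlaps; all numerical values are examples and are not taken from the text.

```python
# Minimal flight-planning sketch: flying height, footprint and waypoint
# spacing from a target GSD and camera parameters (all values are examples).

def flight_plan(gsd_m, focal_mm, pixel_um, img_w_px, img_h_px,
                overlap_along=0.8, overlap_across=0.6):
    pixel_mm = pixel_um / 1000.0
    # Image scale relation: GSD / pixel size = flying height / focal length
    height_m = gsd_m * focal_mm / pixel_mm
    footprint_w = gsd_m * img_w_px            # across-track ground coverage [m]
    footprint_h = gsd_m * img_h_px            # along-track ground coverage [m]
    base_along = footprint_h * (1.0 - overlap_along)       # distance between exposures
    spacing_across = footprint_w * (1.0 - overlap_across)  # distance between strips
    return height_m, footprint_w, footprint_h, base_along, spacing_across

# Example: 3 cm GSD with a 12 Mpx camera, 17 mm lens and 4.3 micron pixels
print(flight_plan(0.03, 17.0, 4.3, 4032, 3024))
```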
The flight is normally performed in manual, assisted or autonomous mode, according to the mission specifications, the platform type and the environmental conditions. The presence onboard of GNSS/INS navigation devices is usually exploited for the autonomous flight (take-off, navigation and landing) and to guide the image acquisition. The image network quality is strongly influenced by the typology of the performed flight (Fig. 3): in manual mode the image overlap and the geometry of the acquisition are usually very irregular, while the presence of GNSS/INS devices, together with a navigation system, can guide and improve the acquisition. The navigation system, generally called auto-pilot, is composed of both hardware (often in a miniaturized form) and software components. An auto-pilot allows a flight to be performed according to the planning and communicates with the platform during the mission. The small size and the reduced payload of some UAV platforms limit the transportation of high-quality navigation devices like those coupled to airborne cameras or LiDAR sensors. The cheapest solution relies on MEMS-based inertial sensors, which feature a very reduced weight but an accuracy that is not sufficient, to our knowledge, for direct geo-referencing. More advanced and expensive sensors, for example based on single/double frequency positioning or on the use of RTK, would improve the quality of positioning to the decimetre level, but they are still too expensive to be commonly used on low-cost solutions. During the flight, the autonomous platform is normally monitored with a Ground Control Station (GCS), which shows real-time flight data such as position, speed, attitude, distances, GNSS observations, battery or fuel status, rotor speed, etc. Remotely controlled systems, on the contrary, are piloted by the operator from the ground station. Most systems allow image data acquisition at the computed waypoints, while low-cost systems acquire images at a scheduled interval. The devices used (platform, auto-pilot and GCS) are fundamental for the quality and reliability of the final result: low-cost instruments can be sufficient for small extents and low-altitude flights, while more expensive devices must be used for long-endurance flights over wide areas. Generally, in the case of light-weight and low-cost platforms, a regular overlap in the image block cannot be assured, as these platforms are strongly influenced by the presence of wind, the piloting capabilities and the GNSS/INS quality, all of which randomly affect the attitude and location of the platform during the flight. Thus higher overlaps, with respect to flights performed with manned vehicles or very expensive UAVs, are usually recommended to compensate for these problems.

Figure 3. Different modalities of flight execution delivering different image block quality: a) manual mode and image acquisition with a scheduled interval; b) low-cost navigation system with possible waypoints but irregular image overlap; c) automated flying and acquisition mode achieved with a high-quality navigation system

Camera calibration and image orientation

Camera calibration and image orientation are two fundamental prerequisites for any metric reconstruction from images. In metrological applications, the separation of the two tasks in two different steps should be preferred [47]. Indeed, they require different block geometries, which can be better optimized if they are treated in separate stages. On the other hand, in many applications where lower accuracy is required, calibration and orientation can be computed at the same time by solving a self-calibrating bundle adjustment. In the case of aerial cameras, the camera calibration is generally performed in the lab, although in-flight calibrations are also performed [48], possibly with strips at different flying heights.
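The lab calibration mentioned above is often solved with a target-based procedure; purely as an illustration, the sketch below runs OpenCV's chessboard calibration to recover the interior orientation and distortion parameters. The image folder and board geometry are assumptions, and this is not the specific procedure used by the authors.

```python
# A minimal lab pre-calibration sketch using OpenCV and a chessboard target
# (file names and board size are illustrative assumptions, not from the text).
import glob
import numpy as np
import cv2

board = (9, 6)                                   # inner corners of the chessboard
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the interior orientation (camera matrix) and distortion coefficients
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error [px]:", rms)
```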
Camera calibration and image orientation tasks require the extraction of common features visible in as many images as possible (tie points), followed by a bundle adjustment, i.e. a non-linear optimization procedure that minimizes an appropriate cost function [49-51]. Procedures based on the manual identification of tie points by an expert operator, or on signalized coded markers, are well assessed and still used today. Recently, fully automated procedures for the extraction of consistent and redundant sets of tie points from markerless close-range images have been developed for photogrammetric applications [52-53]. Some efficient commercial solutions have also appeared on the market (e.g. PhotoModeler Scanner, Eos Inc.; PhotoScan, Agisoft), while commercial software for aerial applications still needs some user interaction or the availability of GNSS/INS data for automated tie point extraction. In Computer Vision, the simultaneous determination of camera (interior and exterior) parameters and 3D structure is normally called "Structure from Motion" [54-56]. Some free web-based approaches (e.g. Photosynth, 123DCatch, etc.) and open source solutions (VisualSfM [57], Bundler [58], etc.) are also available, although they are generally not reliable and accurate enough in the case of large and complex image blocks with variable baselines and image scales. The employed bundle adjustment algorithm must be reliable, able to handle possible outliers and provide statistical outputs to validate the results.
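As an illustration of automated markerless tie-point extraction between a single image pair, the sketch below uses OpenCV feature matching with a RANSAC filter on the epipolar geometry; the file names and thresholds are assumptions, and the pipelines cited above (Apero/MicMac or the commercial packages) implement their own, more complete strategies.

```python
# Illustrative automated tie-point extraction for one image pair with OpenCV
# (ORB features + ratio test + RANSAC on the fundamental matrix).
import numpy as np
import cv2

img1 = cv2.imread("uav_001.jpg", cv2.IMREAD_GRAYSCALE)   # assumed file names
img2 = cv2.imread("uav_002.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.8 * n.distance]  # ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Robust outlier rejection: keep only matches consistent with epipolar geometry
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
tie_points = np.hstack([pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]])
print(len(tie_points), "tie points retained for the bundle adjustment")
```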
The collected GNSS/INS data, if available, can help the automated tie point extraction and can allow the direct geo-referencing of the captured images. In applications with low metric quality requirements, e.g. for fast data acquisition and mapping during emergency response, the accuracy of the direct GNSS/INS observations can be enough [59-60]. If the navigation/positioning system cannot be directly used (even for the autonomous flight), because the signal is strongly degraded or not available (downtowns, rainforest areas, etc.), the orientation phase must rely on a pure image-based approach [61-64], thus requiring GCPs for scaling and geo-referencing. These two latter steps are very important in order to get metric results. To perform indirect geo-referencing, there are basically two ways to proceed:
1) Import at least three GCPs in the bundle adjustment solution, treating them as weighted observations inside the least squares minimization. This approach is the most rigorous as (i) it minimizes the possible image block deformation and possible systematic errors, (ii) it avoids instability of the bundle solution (convergence to a wrong solution) and (iii) it helps in the determination of the correct 3D shape of the surveyed scene.

2) Use a free-network approach in the bundle adjustment [65-66] and apply only at the end of the bundle a similarity (Helmert) transformation in order to bring the image network results into the desired reference coordinate system. This approach is not rigorous: the solution is sought minimizing the trace of the covariance matrix, introducing the necessary datum with some initial approximations. As no external constraint is introduced, if the bundle solution cannot determine the right 3D shape of the surveyed scene, the successive similarity transformation (from the initial relative orientation to the external one) would not improve the result.

The two approaches, in theory, are thus not equivalent and they can lead to totally different results (Fig. 4): in the first approach, the quality of the bundle is only influenced by the redundant control information and, moreover, additional check points can be used to derive some statistics of the adjustment. On the other hand, the second approach has no external shape constraints in the bundle adjustment, thus the solution is only based on the integrity and quality of the multi-ray relative orientation. The fundamental requirement is thus to have a good image network in order to achieve correct results in terms of computed object coordinates and the scene's 3D shape.
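For the second approach, the final similarity (Helmert) transformation can be estimated from a few points known in both the free-network frame and the target coordinate system. The sketch below uses a closed-form least-squares solution and synthetic coordinates; it is only an illustration, not the procedure prescribed in the text.

```python
# 3D similarity (Helmert) transform: scale s, rotation R, translation t such
# that  target ≈ s * R @ source + t,  estimated from corresponding points.
import numpy as np

def helmert_3d(source, target):
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    A, B = source - src_c, target - tgt_c
    U, S, Vt = np.linalg.svd(B.T @ A / len(source))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:            # guard against a reflection
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.mean(np.sum(A ** 2, axis=1))
    t = tgt_c - s * R @ src_c
    return s, R, t

src = np.random.rand(5, 3)                   # points in the free-network frame
tgt = 1.7 * src @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + [10, 20, 5]
s, R, t = helmert_3d(src, tgt)
print(np.allclose(s * src @ R.T + t, tgt))   # recovers the simulated transform
```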
Figure 4. Orientation results of an aerial block over a flat area of ca 10 km² (a). The derived camera poses are shown in red/green, while the coloured dots are the 3D object points on the ground. The absence of ground constraints (b) can lead to a wrong solution of the computed 3D shape (i.e. ground deformation). The more rigorous approach, based on GCPs used as observations in the bundle solution (c), delivers the correct 3D shape of the surveyed scene, i.e. a flat terrain

Surface reconstruction and orthoimage generation

Once a set of images has been oriented, the following steps in the 3D reconstruction and modelling workflow are the surface measurement, the orthophoto creation and the feature extraction. Starting from the known camera orientation parameters, a scene can be digitally reconstructed by means of interactive procedures or automated dense image matching techniques. The output is normally a sparse or a dense point cloud, describing the salient corners and features in the former case or the entire surface shape of the surveyed scene in the latter case. Dense image matching algorithms should be able to extract dense point clouds to define the object's surface and its main geometric discontinuities. Therefore the point density must be adaptively tuned to preserve edges and, possibly, avoid too many points in flat areas. At the same time, a correct matching result must be guaranteed also in regions with poor textures. The actual state-of-the-art is the multi-image matching technique [67-69], based on semi-global matching algorithms [70-71], patch-based methods [72] or optimal flow algorithms [73]. The last two methods have been implemented into open source packages named, respectively, PMVS and MicMac.
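The semi-global matching family mentioned above is also available in general-purpose libraries; as a small stand-in, the sketch below computes a disparity map for one rectified image pair with OpenCV's StereoSGBM. The file names and matching parameters are illustrative assumptions.

```python
# Semi-global matching on one rectified image pair with OpenCV's StereoSGBM.
import cv2

left = cv2.imread("rect_left.png", cv2.IMREAD_GRAYSCALE)    # assumed inputs
right = cv2.imread("rect_right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,                # smoothness penalties of the SGM cost
    P2=32 * 5 * 5,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2)

# Disparities are returned as fixed-point values scaled by 16
disparity = sgbm.compute(left, right).astype("float32") / 16.0
cv2.imwrite("disparity.png", cv2.normalize(disparity, None, 0, 255,
                                           cv2.NORM_MINMAX).astype("uint8"))
```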
The derived unstructured point clouds need to be afterwards structured and interpolated, possibly simplified, and finally textured for photo-realistic visualization. Dense point clouds are generally preferred in the case of terrain/surface reconstruction (e.g. excavations, forestry areas, archaeological areas, etc.), while sparse clouds, which are afterwards turned into simple polygonal information, can be preferred when modelling man-made scenes like buildings.

For the creation of orthoimages, a dense point cloud is mandatory in order to achieve a precise ortho-rectification and a complete removal of terrain distortions. On the other hand, in the case of low-accuracy applications (e.g. rapid response, disaster assessment, etc.), a simple image rectification method (without the need of dense image matching) can be applied, followed by a stitching operation [9].
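For the quick, non-metric alternative just mentioned, a high-level stitcher can assemble a browse mosaic from a handful of frames; the sketch below uses OpenCV's stitching module under a planar-scene assumption, with illustrative file names.

```python
# Quick, non-metric mosaicking of a few UAV frames with OpenCV's high-level
# stitcher, as an illustration of the low-accuracy alternative mentioned above.
import cv2

frames = [cv2.imread(f"uav_{i:03d}.jpg") for i in range(1, 6)]  # assumed files
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)   # planar/scan assumption
status, mosaic = stitcher.stitch(frames)
if status == 0:                                       # Stitcher::OK
    cv2.imwrite("quick_mosaic.jpg", mosaic)
else:
    print("Stitching failed with status", status)
```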
3.2.6 CASE STUDIES
As already mentioned, images acquired from flying UAV platforms give useful information for different applications, such as archaeological documentation, geological studies and monitoring, urban area modelling and monitoring, emergency assessment and so on. The typical required products are dense point clouds, polygonal models or orthoimages, which are afterwards used for mapping, volume computation, displacement analyses, visualization, city modelling, map generation, etc. In the following sections an overview of some applications is given and the achieved results are shown. The data presented in the following case studies were acquired by the authors or by some project partners and they were processed by the authors using the Apero [53]
and MicMac [73] open-source tools customized for the specific UAV applications.

Archaeological site 3D recording and modelling

The availability of accurate 3D information is very important during an excavation in order to define the state of the works/excavations at a particular epoch or to digitally reconstruct the findings that have been discovered, for documentation, digital preservation and visualization purposes.

An example of such an application is given in Fig. 5, where the Temple of Neptune in the archaeological area of Paestum (Italy) is shown. Given the shape, complexity and dimensions of the monument, a combination of terrestrial and UAV (vertical and oblique) images was employed in order to guarantee the completeness of the 3D surveying work. The employed UAV is a 4-rotor Microdrones MD4-1000 system, entirely made of carbon fibre, which can carry up to 1.0 kg of instruments with an endurance longer than 45 minutes. For the nadir images the UAV mounted an Olympus E-P1 camera (12 Megapixels, 4.3 μm pixel size) with 17 mm focal length, while for the oblique images an Olympus XZ-1 (10 Megapixels, 2 μm pixel size) with 6 mm focal length was used. For both flights, the average GSD of the images is ca 3 cm. The auto-pilot system allowed two complete flights to be performed in autonomous mode, but the stored coordinates of the projection centres were not sufficient for direct geo-referencing. For this reason, a set of reliable GCPs (measured with a total station on corners and features of the temple) was necessary to derive scaled and geo-referenced 3D results. The orientation procedure processed terrestrial and UAV images (ca 190) simultaneously in order to bring all the data into the same coordinate system. After the recovery of the camera poses, a DSM was produced for documentation and visualization purposes [74].

Figure 5. The Temple of Neptune in the archaeological area of Paestum (Italy), surveyed with combined terrestrial and UAV (nadir and oblique) images [74]

A second example is reported in Fig. 6, showing the archaeological area of Pava (ca 60 x 50 m), surveyed every year at the beginning and end of the excavation period to monitor the advance of the work, compute the exact volumes and produce multi-temporal orthoimages of the area. The flights (35 m height) were performed with a Microdrones MD4-200 in 2010 and 2011. The heritage area is quite windy, so an electric platform was probably not the most suited one. For each session, using multiple shootings for each waypoint, a reliable set of images (ca 40) was acquired, with an average GSD of 1 cm. In order to evaluate the quality of the image triangulation procedure, some circular targets, measured with a total station, were used as ground control points (GCP) and others as check points (CK). After the orientation step, the RMSE on the CK resulted 0.037 m in planimetry and 0.023 m in height. The derived DSMs (Fig. 6b, c) were used within the Pava GIS to produce vector layers and ortho-images (Fig. 6d) and to check the advances in the excavation or the excavation volumes (Fig. 6e).

Figure 6. A mosaic view of the excavation area in Pava (Siena, Italy), surveyed with UAV images for excavation volume computation and GIS applications (a). The derived DSM shown as shaded (b) and textured model (c) and the produced ortho-image (d) [75]. If multi-temporal images are available, DSM differences can be computed for volume extraction estimation (e)
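The multi-temporal DSM differencing mentioned in the caption can be sketched as a simple grid operation; in the example below the DSM arrays, the NoData handling and the 0.05 m cell size are illustrative assumptions, not values from the case study.

```python
# Volume change between two co-registered DSM epochs by grid differencing.
import numpy as np

def volume_change(dsm_old, dsm_new, cell_size_m):
    diff = dsm_new - dsm_old                  # positive = fill, negative = cut
    valid = np.isfinite(diff)                 # skip NoData cells (NaN)
    cell_area = cell_size_m ** 2
    cut = -diff[valid & (diff < 0)].sum() * cell_area    # excavated volume [m^3]
    fill = diff[valid & (diff > 0)].sum() * cell_area    # deposited volume [m^3]
    return cut, fill

dsm_2010 = np.load("dsm_2010.npy")            # gridded DSMs on the same grid
dsm_2011 = np.load("dsm_2011.npy")
print(volume_change(dsm_2010, dsm_2011, cell_size_m=0.05))
```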
Urban areas

A UAV platform can be used to survey small urban or heritage areas, when the national regulations allow doing it, for cartographic, mapping and cadastral applications. These images have a very high resolution if the flights are performed at 100-200 m height over the ground. Very high overlaps are recommended in order to reduce occluded areas and achieve a more complete and detailed DSM. A sufficient number of GCPs is mandatory in order to geo-reference the processed images within the bundle adjustment and derive point clouds: the number of GCPs varies according to the image block dimensions and the
complexity of the surveyed area. The quality of the achieved point clouds is usually very high (up to a few centimetres) and these data can thus be used for further analysis and feature extraction.

In Fig. 7, a dense urban area in Bandung (Indonesia) is shown: the area was surveyed with an electric fixed-wing RPV platform at an average height of about 150 m. Due to the weather conditions (quite strong wind) and the absence of an auto-pilot onboard, the acquired images (ca 270, with an average GSD of about 5 cm) are not perfectly aligned in strips (Fig. 7b). After the bundle block adjustment, a dense DSM was created for the estimation of the population in the surveyed area and for map production.

Figure 7. A mosaic over an urban area in Bandung, Indonesia (a). Visualization of the bundle adjustment results (b) of the large UAV block (ca 270 images) and a close view of the produced DSM over the urban area, shown as point cloud (c, d) and shaded model (e)

3.2.7 CONCLUSIONS AND FUTURE DEVELOPMENTS

The article presented an overview of existing UAV systems, problems and applications, with particular attention to the Geomatics field. The examples reported in the paper show the current state-of-the-art of photogrammetric UAV technology in different application domains. Although automation is not always demanded, the reported achievements demonstrate the high level of autonomous photogrammetric processing. UAVs have recently received a lot of attention, since they are fairly inexpensive platforms, with navigation/control devices and recording sensors, for quick digital data production. The great advantage of actual UAV systems is the ability to quickly deliver high temporal and spatial resolution information and to allow a rapid response in a number of critical situations where immediate access to 3D geo-information is crucial. Indeed they feature real-time capability for fast data acquisition, transmission and, possibly, processing. UAVs can be used in high-risk situations and inaccessible areas, although they still have some limitations, in particular regarding the payload, insurance and stability. Rotary wing UAV platforms can even take off and land vertically, thus no runway area is required, while fixed wing UAVs can cover wider areas in a few minutes. For some applications, not demanding very accurate 3D results, complete remote sensing solutions based on open hardware and software are also available. And in the case of small-scale applications, UAVs can complement or replace terrestrial acquisitions (images or range data). The derived high-resolution
images (with a GSD generally at the centimetre level) can be used, besides very dense point cloud generation, for texture mapping purposes on existing 3D data or for orthophoto production, mosaic, map and drawing generation. If compared to traditional airborne platforms, they decrease the operational costs and reduce the risk of access in harsh environments, while keeping a high accuracy potential. But the small or medium format cameras which are generally employed, in particular on low-cost and small-payload systems, enforce the acquisition of a higher number of images in order to achieve the same image coverage at a comparable resolution. In these conditions, automated and reliable orientation software is strictly recommended to reduce the processing time. Some reliable solutions are nowadays available, even in the low-cost open-source sector.
The stability of low-cost and light platforms is generally an important issue, in particular in windy areas, although camera and platform stabilizers can reduce the weather dependency. Generally the stability issue is solved by shooting many images (continuous acquisition or multiple shots from the predefined waypoints) and using, during the processing phase, only the best images. High-altitude surveying can affect gasoline and turbine engines, while the payload limitation enforces the use of low-weight GNSS/IMU devices, thus preventing direct geo-referencing solutions. New reliable navigation systems are nowadays available, but their cost has so far limited their use to very few examples. A further drawback is the system manoeuvring and transportation, which generally requires at least two persons. UAV regulations are under development in several countries all around the world, in order to propose technical specifications and the areas where these devices can be used (e.g. over urban settlements), increasing the range of their applications. At the moment, the lack of precise rule frameworks and the tedious requests for flight permissions represent the biggest limitation for UAV applications. Hopefully the incoming rules will regulate UAV applications for surveying issues with a simple letter of intent. Considering an entire UAV-based field campaign (Fig. 8) and based on the authors' experience, we can safely say that, although automation has reached a satisfactory level of performance for automated tie point extraction and DSM generation, a high percentage of the time is absorbed by the image orientation and the GCP measurements, in particular if direct geo-referencing cannot be performed. The time required for the feature extraction depends on the typology of the features to be extracted and is generally a time-consuming phase too.

Figure 8. Approximate time effort in a typical UAV-based photogrammetric workflow

The GCP measurement step represents an important issue with UAV image blocks. As the accuracy of the topographic network influences the image triangulation accuracy, and the GSD of the images often reaches the centimetre level, there might be problems in reaching sub-pixel accuracies at the end of the image triangulation process. So far, in the literature, RMSEs of 2-3 pixels are normally reported, also due to the camera performance, the image network quality, un-modelled errors, etc.

In the near future, the most feasible improvements should be related to payload, autonomy and stability issues, as well as to faster (or even real-time) data processing thanks to GPU programming [76]. High-end navigation sensors, like DGPS and inexpensive INS, would allow direct geo-referencing with accurate results. In the case of low-end navigation systems, real-time image orientation could be achieved with onboard advanced SLAM (Simultaneous Localisation And Mapping) methods [77-78]. Lab post-processing will most probably always be mandatory for applications requiring highly accurate results. On the other hand, the acquisition of image blocks with a suitable geometry for the photogrammetric process is still a critical task, especially in the case of large-scale projects and non-flat objects (e.g. buildings, towers, rock faces, etc.). While the planning of the image acquisition is quite simple when using nadir images, the same task becomes much more complex in the case of 3D objects requiring convergent images. Two or more flights can be necessary over large areas, when UAVs with reduced endurance are used, leading to images with illumination changes due to the different acquisition times, which may affect the DSM generation and the orthoimage quality. Future research has also to be addressed to developing tools for simplifying this task. Other hot research issues tied to UAV applications are related to the use of new sensors on-board, like thermal, multispectral [80] or range imaging cameras [81], just to cite some of them.
References
[1]. http://www.uvs-international.org/ (last accessed: December, 2012). [2]. SANNA, A.; PRALIO, B. Simulation and control of mini UAVs. Proc. 5th WSEAS Int. Conference on Simulation, Modelling and Optimization, 2005, pp. 135-141. [3]. VON BLYENBURG, P. UAVs-Current Situation and Considerations for the Way Forward. In: RTO-AVT Course on Development and Operation of UAVs for Military and Civil Applications, 1999.
[4]. COLOMINA, I.; BLÁZQUEZ, M.; MOLINA, P.; PARÉS, M.E.; WIS, M. Towards a new paradigm for highresolution low-cost photogrammetry and remote sensing. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008; Vol. 37 (B1), pp. 1201-1206. [5]. EISENBEISS, H. UAV photogrammetry. Diss. ETH No. 18515, Institute of Geodesy and Photogrammetry, ETH Zurich, Switzerland, Mitteilungen Nr. 105, 2009; p. 235. [6]. BENDEA, H.; BOCCARDO, P.; DEQUAL, S.; GIULIO
TONOLO, F.; MARENCHINO, D.; PIRAS, M. Low cost UAV for post-disaster assessment. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008; Vol. 37 (B1), pp. 1373-1379. [7]. GRENZDÖRFFER, G.J.; ENGEL, A.; TEICHERT, B. The photogrammetric potential of low-cost UAVs in forestry and agriculture. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 37, Part B1, Beijing, China, 2008; pp. 1207-1213. [8]. MEIER, L.; TANSKANEN, P.; FRAUNDORFER, F.; POLLEFEYS, M. The PIXHAWK open-source computer vision framework for MAVS. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 2011; Vol. 38 (1/C22). [9]. NEITZEL, F.; KLONOWSKI, J. Mobile 3D mapping with low-cost UAV system. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 2011; Vol. 38 (1/C22). [10]. NIETHAMMER, U.; ROTHMUND, S.; SCHWADERER, U.; ZEMAN, J.; JOSWIG, M. Open source image-processing tools for low-cost UAV-based landslide investigation. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 2011; Vol. 38 (1/C22). [11]. STEMPFHUBER, W.; BUCHHOLZ, M. A precise, low-cost RTK GNSS system for UAV applications. Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 2011; Vol. 38 (1/C22). [12]. VALLET, J.; PANISSOD, F.; STRECHA, C.; TRACOL, M. Photogrammetric performance of an ultra-light weight Swinglet UAV. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 2011, Vol. 38 (1/C22). [13]. NAGAI, M.; SHIBASAKI, R.; MANANDHAR, D.; ZHAO, H. Development of digital surface and feature extraction by integrating laser scanner and CCD sensor with IMU. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Science, Istanbul, Turkey, 2004; Vol. 35(B5). [14]. VIERLING, L.A.; FERSDAHL, M.; CHEN, X.; LI, Z.; ZIMMERMAN, P. The Short Wave Aerostat-Mounted Imager (SWAMI): A novel platform for acquiring
remotely sensed data from a tethered balloon. Remote Sensing of Environment, 2006, Vol. 103, pp. 255-264. [15]. WANG, W.Q.; PENG, Q.C.; CAI, J.Y. Waveform-diversity-based millimeter-wave UAV SAR remote sensing. Transactions on Geoscience and Remote Sensing, 2009, Vol. 47(3), pp. 691-700. [16]. BERNI, J.A.J.; ZARCO-TEJADA, P.J.; SUÁREZ, L.; GONZÁLEZ-DUGO, V.; FERERES, E. Remote sensing of vegetation from UAV platforms using lightweight multispectral and thermal imaging
sensors. Sensing In: Int.andArchives of Photogrammetry, Remote Spatial Information Sciences, Hannover, Germany, 2009; Vol. 38 (1-4-7/W5). [17]. KOHOUTEK, T.K.; EISENBEISS, H. Processing of UAV based range imaging data to generate detailed elevation models of complex natural structures. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012, Vol. 39(1). [18]. GRENZDOFFER, G.; NIEMEYER, F.; SCHMIDT, F. Development of four vision camera system for micro-UAV. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012, Vol. 39(1). [19]. HUCKRIDGE, D.A., EBERT, R.R. Miniature imaging devices for airborne platforms. Proc. SPIE, Vol. 7113, 71130M, 2008, http://dx.doi.org/10.1117/ 12.799635 [20]. SCHAFROTH, D., BOUABDALLAH, S., BERMES, C., SIEGWART, R.: From the test benches to the first prototype of the muFly micro helicopter. Journal of Intelligent and Robotic Systems, 2009, Vol. 54(1-3), pp. 245-260. [21]. FRANCESCHINI, N., RUFFIER, F., SERRES, J.: A bioinspired flying robot sheds light on insect piloting abilities. Current Biology, 2007, Vol. 17(4), pp. 329-335. [22]. MOORE, R.J.D., THURROWGOOD, S., SOCCOL, D., BLAND, D., SRINIVASAN, M.V.: A bio-inspired stereo vision system for guidance of autonomous aircraft. Proc. Int. Symposium on Flying Insects and Robots, Ascona, Switzerland, 2007. [23]. NIRANJAN, S.; GUPTA, G.; SHARMA, N.; MANGAL, M.; SINGH, V. Initial efforts toward mission-specific imaging surveys from aerial exploring platforms: UAV. In:CD-ROM. Map World Forum, Hyderabad, India, 2007; on [24]. EVERAERTS, J. The Use of Unmanned Aerial Vehicles (UAVS) for Remote Sensing and Mapping. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008; Vol. 37 (B1), pp. 1187-1192. [25]. NEWCOMBE, L. Green fingered UAVs. Unmanned Vehicle, 2007. [26]. MARTINEZ, J.R.; MERINO, L.; CABALLERO, F.; OLLERO, A.; VIEGAS, D.X. Experimental results of automatic fire detection and monitoring with UAVs. 84
Forest Ecology and Management, 2006, 234S (2006) S232. [27]. RÉSTAS, A. The regulation Unmanned Aerial Vehicle of the Szendro Fire Department supporting fighting against forest fires 1st in the world! Forest Ecology and Management, 2006, 234S. [28]. BERNI, J.A.J.; ZARCO-TEJADA, P.J.; SUÁREZ, L.; FERERES, E. Thermal and Narrowband Multispectral Remote Sensing for Vegetation Monitoring From an Unmanned Aerial Vehicle. Transactions on Geoscience and Remote Sensing, 2009, Vol. 47, pp.
[37]. MANYOKY, M.; THEILER, P.; STEUDLER, D.; EISENBEISS, H. Unmanned aerial vehicle in cadastral applications. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Zurich, Switzerland, 2011; Vol. 38 (1/C22). [38]. HARTMANN, W.; TILCH, H. S.; EISENBEISS, H.; SCHINDLER, K. Determination of the UAV position by automatic processing of thermal images. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012; Vol. 39 (5). [39]. SMITH, J.G.; DEHN, J.; HOBLITT, R.P.; LAHUSEN, R.G.; LOWENSTERN, J.B.; MORAN, S.C.; MCCLELLAND, L.; MCGEE, K.A.; NATHENSON, M.; OKUBO, P.G.; PALLISTER, J.S.; POLAND, M.P.; POWER, J.A.; SCHNEIDER, D.J.; SISSON, T.W. Volcano monitoring. Geological Monitoring, Geological Society of America, Young and Norby (Eds), 2009, pp. 273-305, doi: 10.1130/2009. [40]. CHOU, T.-Y.; YEH, M.-L.; CHEN, Y.C.; CHEN, Y.H. Disaster monitoring and management by the unmanned aerial vehicle technology. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vienna, Austria, 2010; Vol. 38(7B), pp. 137-142. [41]. HAARBRINK, R.B.; KOERS, E. Helicopter UAV for Photogrammetry and Rapid Response. In: Int. Archives of Photogrammetry, Remote Sensing and
722-738. [29]. ÇABUK, A.; DEVECI A.; ERGINCAN F. Improving heritage documentation. GIM International, 2007, Vol. 21 (9). [30]. LAMBERS, K.; EISENBEISS, H.; SAUERBIER, M.; KUPFERSCHMIDT, D.; GAISECKER, TH.; SOTOODEH, S., HANUSCH, Th. Combining photogrammetry and laser scanning for the recording and modelling of the late intermediate period site of Pinchango Alto, Palpa, Peru. Journal of Archaeological Science 2007, Vol. 34 (10), pp. 1702-1712. [31]. OCZIPKA, M.; BEMMAN, J.; PIEZONKA, H.; MUNKABAYAR, J.; AHRENS, B.; ACHTELIK, M. and LEHMANN, F., Small drones for geo-archaeology in the steppes: locating and documenting the archaeological heritage of the Orkhon Valley in Mongolia. Remote Sensing for Environmental Monitoring, GIS Applications and Geology IX, 2009, Vol. 7874, pp. 787406-1. [32]. CHIABRANDO F.; NEX F.; PIATTI D.; RINAUDO F. UAV And RPV Systems For Photogrammetric Surveys In Archaelogical Areas: Two Tests In The Piedmont Region (ITALY). In: Journal of Archaeological Science, 2011, Vol. 38, pp. 697-710, ISSN: 0305-4403, DOI: 10.1016/j.jas. 2010.10.022. [33]. RINAUDO, F.; CHIABRANDO, F.; LINGUA, A.; SPANÒ, A. Archaeological site monitoring: UAV photogrammetry could be an answer. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012; Vol. 39(5). [34]. THAMM, H.P.; JUDEX, M. The “Low cost drone” – An interesting tool for process monitoring in a high spatial and temporal resolution. Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Enschede, The Netherlands, 2006; Vol. 36 (7). [35]. NIETHAMMER, U.; ROTHMUND, S.; JAMES, M.R.; TRAVELETTI, J.; JOSWIG, M. UAV-based remote sensing of landslides. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Newcastle upon Tyne, UK, 2010; Vol. 38 (5), on CD-ROM. [36]. ZHANG, C. An UAV-based photogrammetric mapping system for road condition assessment. In: International Archives of Photogrammetry, Remote Sensing and Spatial Information Science, Beijing, China, 2008; Vol. 37.
Spatial Information Sciences, Antwerp, Belgium, 2006; Vol. 36 (1/W44). [42]. MOLINA, P.; COLOMINA, I.; VITORIA, T.; SILVA, P. F.; SKALOUD, J.; KORNUS, W.; PRADES, R.; AGUILERA, C. Searching lost people with UAVs: the system and results of the close-search project. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012; Vol. 39(1). [43]. PURI, A.; VALAVANIS, P.; KONTITSIS, M. Statistical profile generation for traffic monitoring using realtime UAV based video data. In: Mediterranean Conference on Control & Automation, Athens, Greece, 2007; on CD-ROM. [44]. PUESCHEL, H.; SAUERBIER, M.; EISENBEISS, H. A 3D model of Castle Landemberg (CH) from combined photogrammetric processing of terrestrial and UAV-based images. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008; Vol. 37 (B6), pp. 96-98. [45]. REMONDINO, F.; GRUEN, A.; VON SCHWERIN, J.; EISENBEISS, H.; RIZZI, A.; SAUERBIER, M.; RICHARDS-RISSETTO, H. Multi-sensors 3D documentation of the Maya site of Copan. In: Proc. of 22nd CIPA Symposium, Kyoto, Japan, 2009; on CD-ROM. [46]. PRZYBILLA, H.-J.; WESTER-EBBINGHAUS, W. Bildflug mit ferngelenktem Kleinflugzeug. Bildmessung und Luftbildwesen. Zeitschrift fuer Photogram85
metrie und Fernerkundung. Herbert Wichman Verlag, Karlsruhe, Germany, 1979. [47]. REMONDINO, F.; FRASER, C. Digital cameras calibration methods: considerations and comparisons. Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2006; Vol. 36(5), pp. 266-272. [48]. COLOMINA, I.; AIGNER, E.; AGEA, A.; PEREIRA, M.; VITORIA, T.; JARAUTA, R.; PASCUAL, J.; VENTURA, J.; SASTRE, J.; BRECHBÜHLER DE PINHO, G.; DERANI, A.; HASEGAWA, J. The uVISION project for
Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne, Australia, 2012; Vol. 39(7). [60]. ZHOU, G. Near real-time orthorectification and mosaic of small UAV video flow for time-critical event response. IEEE Trans. Geoscience and Remote Sensing, 2009; Vol. 47(3), pp. 739-747. [61]. EUGSTER, H.; NEBIKER, S.. UAV-based augmented monitoring – real-time georeferencing and integration of video imagery with virtual globes. Int. Archives of Photogrammetry, Remote Sensing and
helicopter-UAV photogrammetry remotesensing. In: Proceedings of the 7th and International Geomatic Week, Barcelona, Spain, 2007. [49]. BROWN, D.C. The bundle adjustment – progress and prospects. In: International Archives of Photogrammetry, 1976, Vol. 21 (3). [50]. TRIGGS, W.; MCLAUCHLAN, P.; HARTLEY, R.; FITZGIBBON, A. Bundle adjustment – A modern synthesis. W. Triggs, A. Zisserman, and R Szeliski (Eds), Vision Algorithms: Theory and Practice, LNCS, Springer Verlag, 2000; pp. 298-375. [51]. GRUEN A.; BEYER, H.A. System calibration through self-calibration. Calibration and Orientation of Cameras in Computer Vision, Gruen and Huang (Eds.), Springer Series in Information Sciences, 2001, Vol. 34, pp. 163-194. [52]. BARAZZETTI, L.; SCAIONI, M.; REMONDINO, F. Orientation and 3D modeling from markerless terrestrial images: combining accuracy with automation. The Photogrammetric Record, 2010, Vol. 25 (132), pp. 356-381. [53]. PIERROT-DESEILLIGNY, M.; CLERY, I. APERO, An Open Source Bundle Adjustment Software for Automatic Calibration and Orientation of Set of Images. Int. Archives of Photogrammetry, Remote Sensing and Spatia Information Sciences, 2011; Vol. 38 (5/W16), Trento, Italy (on CD-ROM). [54]. HARTLEY, R.; ZISSERMAN, A. Multiple View Geometry in Computer Vision. Cambridge University Press, 2004. [55]. SNAVELY, N.; SEITZ, S. M.; SZELISKI, R. Modeling the world from Internet photo collections. Int. Journal of Computer Vision, 2008, 80 (2), pp. 189210. [56]. D.P. ROBERTSON; R. CIPOLLA. Structure from
Spatial Information Sciences, Beijing, China, 2008; Vol. 37 (B1), pp. 1229-1235. [62]. WANG, J.; GARRATT, M.; LAMBERT, A.; WANG, J.J.; HAN, S.; SINCLAIR, D. Integration of GPS/INS/vision sensors to navigate unmanned aerial vehicles. Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008; Vol. 37 (B1), pp. 963-969. [63]. BARAZZETTI, L.; REMONDINO, F.; SCAIONI, M. Fully automated UAV image-based sensor orientation. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Calgary, Canada, 2010; Vol. 38 (1), on CD-ROM. [64]. ANAI, T.; SASAKI, T.; OSARAGI, K.; YAMADA, M.; OTOMO, F.; OTANI, H. Automatic exterior orientation procedure for low-cost UAV photogrammetry using video image tracking technique and GPSRemote information. Int. and Archives of Photogrammetry, Sensing Spatial Information Sciences, Melbourne, Australia, 2012, Vol. 39(7). [65]. GRANSHAW, S.I. Bundle adjustment method in engineering photogrammetry. In: Photogrammetric Record, 1980, Vol.10 (56), pp. 111-126. [66]. DERMANIS, A. The photogrammetric inner constraints. In: ISPRS Journal of Photogrammetry and Remote Sensing, 1994, Vol. 49 (1), pp. 2539. [67]. SEITZ, S.; CURLESS, B.; DIEBEL, J.; SCHARSTEIN, D.; SZELISKI, R., 2006. A comparison and evaluation of multi-view stereo reconstruction algorithms. In: Proc. IEEE Conf. CVPR’06, New York, 17-22 June 2006; Vol. 1, pp. 519-528. [68]. VU., H.H.; KERIVEN, R.; LABATUT, P.; PONS, J.-P. Towards high-resolution large-scale multi-view stereo. Proc. IEEE Conf. CVPR’09, 2009, pp. 14301437. [69]. ZHU, Q.; ZHANG, Y.; WU, B.; ZHANG, Y. Multiple close-range image matching based on self-adaptive triangle constraint. The Photogrammetric Record, 2010, Vol. 25 (132), pp. 437-453. [70]. GERKE, S.; MORIN, K.; DOWNEY, M.; BOEHRER, N.; FUCHS, T. Semi-global matching: an alternative to LiDAR for DSM generation? Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Calgary, Canada, 2010; Vol. 38 (1), on CD-ROM.
Motion. Practical Vision, John Wiley,Image Varga,Processing M. (eds.), and 2009.Computer [57]. WU, C. VisualSFM: A Visual Structure from Motion System. http://www.cs.washington.edu/ homes/ccwu/vsfm/, 2011. [58]. SNAVELY, S.; SEITZ, S.: M.; SZELISKI, R. Modeling the World from Internet Photo Collections. International Journal of Computer Vision, 2007, Vol. 2(80), pp. 189-210. [59]. PFEIFER, N.; GLIRA, P.; BRIESE, C. Direct georeferencing with on board navigation components of light weight UAV platforms. In: Int. 86
[71]. HIRSCHMÜLLER, H. Stereo processing by SemiGlobal Matching and Mutual Information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, Vol. 30 (2), pp. 328–341. [72]. FURUKAWA, Y.; PONCE, J. Accurate, dense and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2010, Vol. 32 (8), pp. 1362-1376. [73]. PIERROT-DESEILLIGNY, M.; PAPARODITIS N. A multiresolution and optimization-based image matching approach: an application to surface
[76]. WENDEL A, MAURER M, GRABER G, POCK, T., BISCHOF, H.: Dense reconstruction on-the-fly. In: Proc. IEEE Int. CVPR Conference, 2012, Providence, USA. [77]. KONOLIGE, K.; AGRAWAL, M. Frameslam: from bundle adjustment to realtime visual mapping. IEEE Journal of Robotics and Automation, 2008, Vol. 24 (5), pp. 1066-1077. [78]. NUECHTER, A.; Lingemann, K.; Hertzberg, J.; Surmann, H. 6D SLAM for 3D mapping outdoor environments. Journal of Field Robotics (JFR),
reconstruction from ofSPOT5-HRS stereo imagery. In: Int. Archives Photogrammetry, Remote Sensing and Spatial Information Sciences, Antalya, Turkey, 2006; Vol. 36 (1/W41), on CD-ROM. [74]. FIORILLO F, JIMENEZ FERNANDEZ-PALACIOS B, REMONDINO F, BARBA S, 2012. 3D Surveying and modeling of the archaeological area of Paestum, Italy. In: Proc. 3rd Inter. Conference Arquelogica 2.0, 2012, Sevilla, Spain. [75]. REMONDINO, F., BARAZZETTI, L., NEX, F., SCAIONI, M., SARAZZI, D: UAV photogrammetry for mapping and 3D modeling – Current status and future perspectives. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2011, Vol. 38(1/C22). ISPRS Conference UAV-g, Zurich, Switzerland.
Special Issue on Quantitative Performance Evaluation of Robotic and Intelligent Systems, 2007; Vol. 24 (8-9), pp. 699-722. [79]. STRASDAT, H.; MONTIEL, J. M. M.; DAVISON, A.J. Scale drift-aware large scale monocular SLAM. Robotics: Science and Systems, 2010. [80]. BOLTEN A, BARETH G. Introducing a low-cost MiniUAV for Thermal- and Multispectral-Imaging. In: Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Melbourne (Australia), 2012, Vol. 39(1). [81]. LANGE S, SÜNDERHAUF N, NEUBERT P, DREWS S, PROTZEL P. Autonomous corridor flight of a UAV using a low-cost and light-weight RGB-D camera. In: Advances in Autonomous Mini Robots, Proc. 6th AMiRE Symposium, 2011, pp. 183-192, ISBN 978-3-642-27481-7.
4 REMOTE SENSING
4.1 EXPLORING ARCHAEOLOGICAL LANDSCAPES WITH SATELLITE IMAGERY NIKOLAOS GALIATZATOS
4.1.1 INTRODUCTION

According to Clark et al. (1998): "Landscape archaeology is a geographical approach whereby a region is investigated in an integrated manner, studying sites and artefacts not in isolation, but as aspects of living societies that once occupied the landscape".

In landscape archaeology, the integration of data such as land cover, land use, vegetation, geology, geomorphology, and the location of major roads and hydrographical features helps to provide the context for human activity and evidence of human occupation in the landscape. The spatial context and geographical distribution are important for the interpretation and understanding of the historic landscape. For example, as part of the theoretical framework of landscape archaeology, roadways reflect the interplay among technology, environment, social structure, and the values of a culture (Trombold, 1991). And under certain circumstances, it may be possible to make inferences regarding past environments by interpreting the contemporary landscapes. Gathering such data from the ground might be possible, but is often prohibitively expensive because of the need to collect large amounts of data. Therefore, archaeologists are increasingly interested in effective, objective and cost-effective methods to gather information from sources such as aerial photography and satellite remote sensing.

Aerial photography often provides a mechanism to detect traces of the past like soil and crop marks (Scollar et al., 1990). On the other hand, satellite remote sensing provides environmental and archaeological information over large areas, such as patterns in the landscape and field systems. By allowing archaeologists to recognise patterns at different spatial and temporal scales, satellite imagery provides the means for moving from local to regional and from static to dynamic descriptions of the landscape (Kouchoukos, 2001).
The young archaeologist should never forget that landscapes are cultural products, that is, a combination of natural environment with human values, interpretations and beliefs intertwined. Remote sensing may show that two landscapes are environmentally similar, but it will never show the difference in significance of the two landscapes in terms of values and beliefs (for example the mountain Olympos in ancient Greece) (Philip, personal communication, 2003).

In the following paragraphs, the archived and existing satellite data will be discussed according to their properties. Then, there will be a discussion of the current modelling techniques used to bring the data to a form that can be processed/combined/analysed for the information extraction. This is the information that the application needs. The quality of the information depends on the properties of the selected satellite imagery, the quality of the reference data needed for the pre-processing stage, the recording of certainty, and, last but not least, on what the application needs.

4.1.2 SATELLITE IMAGES AND THEIR PROPERTIES

Photography existed long before satellite observation. L.J.M. Daguerre and J.N. Niepce developed the first commonly used form of photograph between 1835 and 1839. In 1845, the first panoramic photograph was taken, and in 1849 an exhaustive program started to prove that photography could be used for the creation of topographic maps. The same year, the first stereo-photograph was produced. In 1858, Gaspard Felix Tournachon took the first known photographs from an overhead platform, a balloon (Philipson, 1997). For the next 101 years, aerial photography was developed and widely used in military and civilian applications. The platforms changed to include kites, pigeons, balloons and airplanes (chapter 2 in Reeves, 1975). In 1957, the USSR (Union of Soviet Socialist Republics) put the first satellite, Sputnik 1, into orbit, and the era of satellite remote sensing began with the first systematic satellite observation of the Earth by the meteorological satellite
TIROS-1 (Television Infrared Observation Satellite Program) in 1960. This was a meteorological satellite, designed for weather forecasting. The era of satellite photogrammetry¹ starts in 1960 with the CORONA military reconnaissance program. The era of using satellite images for mapping and making measurements
starts in 1962 with the CORONA KH-4 satellite design. While civilian satellites evolved along the lines of the multispectral concept (Landgrebe, 1997) with the advent of ERTS-1 (Earth Resources Technology Satellite), or Landsat-1, in 1972, the military reconnaissance satellites proceeded to use higher resolution imagery and followed a very different path (Richelson, 1999). After 1972, several satellite sensor systems similar to Landsat were launched, such as SPOT HRV (Système Pour l'Observation de la Terre – High Resolution Visible) and the Indian LISS. Other highlights in the history of satellite remote sensing include the insertion of radar systems into space, the proliferation of weather satellites, a series of specialised devices dealing with environmental monitoring or with thermal and passive microwave sensors, and the more recent hyperspectral sensors. The first commercial very high resolution (VHR) satellite to be launched successfully was IKONOS-2 in 1999. It was followed by Quickbird in 2001, OrbView-3 in 2003, EROS-B1 and Resource-DK-1, and more to follow (Kompsat-2 in 2006, to name only one).

All these remote sensing satellites created (and keep creating) a large archive of imagery, to which everybody has access. Further imagery can be ordered from the currently operating remote sensing satellites. However, these data cost money and time to acquire; hence one must be careful with the choice of satellite imagery to use in the project/application. Some of the factors to take into consideration are the format (digital or analogue), the time (which year, when in the year), the spatial detail (spatial resolution), the spectral range/response that is reflected or emitted by the surface of the Earth (spectral resolution), the range and number of brightness values (radiometric resolution or dynamic range), the spatial coverage (swath width) and the cost. The general purpose of acquiring remote sensing image data is to be able to identify and assess either surface materials or their spatial properties, which can then be related to the application needs.

To illustrate the above-mentioned properties of satellite imagery, some examples will be used. In figure 1, the spatial resolution is displayed. The Landsat image has a spatial resolution of 30 m, hence features like the fences are not visible. As mentioned earlier though, it also depends on the application. For example, if we are interested simply in land cover mapping (e.g. forest, urban land, and agricultural land) then the Landsat image is enough. In the same application, the 1 m spatial resolution of IKONOS-2 will probably show the individual trees but it will miss the forest.

Figure 1. Illustration of the spatial resolution property

Figure 2 illustrates the radiometric resolution advantage. While in most satellites the radiometric resolution is 8-bit (this means 256 different levels of gray), in IKONOS-2 (illustrated in figure 2) and other modern satellites the radiometric resolution increases to 11-bit (or more), which translates to 2048 different levels of gray. This results in seeing more clearly the features that are covered by shadow or that lie on top of a very reflective area.
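The relationship between bit depth and gray levels, and the stretching usually applied to display high-bit-depth data, can be sketched in a few lines; the array below is a random placeholder standing in for an 11-bit band.

```python
# Gray levels per bit depth, and a simple percentile stretch of an 11-bit
# band to 8-bit for display (the input array is an illustrative placeholder).
import numpy as np

for bits in (8, 11, 12, 16):
    print(f"{bits}-bit data -> {2 ** bits} gray levels")

band_11bit = np.random.randint(0, 2048, size=(512, 512)).astype(np.float32)

# Linear stretch between the 2nd and 98th percentiles keeps detail in shadows
lo, hi = np.percentile(band_11bit, (2, 98))
display_8bit = (np.clip((band_11bit - lo) / (hi - lo), 0, 1) * 255).astype(np.uint8)
```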
¹ A definition of satellite photogrammetry may be found in Slama et al. (1980): "Satellite photogrammetry, as distinguished from conventional aerial photogrammetry, consists of the theory and techniques of photogrammetry where the sensor is carried on a spacecraft and the sensor's output (usually in the form of images) is utilised for the determination of coordinates on the moon or planet being investigated." From this definition, it is obvious that people were not openly aware of the military use of the same techniques towards our planet, Earth.
Figure 2. The high radiometric resolution of IKONOS-2 (11-bit) allows for better visibility in the shadows of the clouds

The spectral resolution depends on which part of the spectrum the satellites can detect, and how many portions of this part can be detected. The hyperspectral sensors can provide a smoother spectral signature of the different features, which can be easily compared and verified with laboratory measurements and hence confirm the type of feature under observation. The multispectral sensors can distinguish features in a less smooth fashion, but still well enough for many applications. The reason why the satellites look only at a particular part of the spectrum lies in the route of the light through the atmosphere, where it is reflected, diffused, and absorbed. The absorption occurs mainly because of vapours and carbon dioxide gas. This leaves open the so-called atmospheric windows for the satellite sensor to use. When the light returns to the sensor after its travel from the sun to the earth and then back to space, it transfers information about the features it met. Each feature reflects light in a
unique way, which is called a spectral signature. If we can detect more distinctive parts of this signature, then we have better chances to successfully recognise the feature. Understandably, there is no archaeological spectral signature.

In figure 4, the swath width of the satellite imagery is presented. The satellite sensor covers a particular area per scene, and this can be useful when calculating the cost. This is because the area of interest does not necessarily need a whole Landsat scene; however, the user buys it as a whole scene. For this reason, the cost is usually calculated per useful square km. On the other hand, instantaneous seamless image coverage of a large area can be very useful for many applications, and here is where a large swath width can prove useful.

Apart from the above satellite image characteristics (properties), the companies also offer a variety of different products from each satellite. These vary from raw data to over-precise data. For example, table 1 displays the different Landsat processing levels. Almost every year the American Society for Photogrammetry and Remote Sensing (ASPRS) produces a list of the existing and future remote sensing satellites.² This list is not exhaustive. According to the list there are 31 optical satellites in orbit and 27 planned, and 4 radar satellites in orbit with 9 planned. There have been efforts to classify the satellite imagery against the applications. They are not necessarily wrong, but they forget the ingenuity of humans to discover new ways to use the data.

So, after choosing an image (or many images) that suits the application with the least possible cost, it is time to proceed to its processing. As you will notice (unless you have purchased the high-precision product, which is geo-rectified but simultaneously expensive), the image will not display the real world accurately. There is a need to rectify this with the use of sensor models.
² http://www.asprs.org/Satellite-Information/Guide-to-Land-Imaging-Satellites.html (last accessed: December 2011).
Figure 3. The left part displays the spectral resolution of different satellites. The right part illustrates the spectral signature from the point of view of hyperspectral, multispectral and panchromatic images respectively
Figure 4. Illustration of the different spatial coverage or swath width (nominal values in parenthesis) (reproduced from http://www.asprs.org/a/news/satellites/ASPRS_DATABASE_021208.pdf – last accessed December 2011)

Table 1. Landsat processing levels as provided
Level 0 Reformatted (0R, RAW): pixels are neither resampled nor geometrically corrected or registered; radiometric artefacts are not removed.
Level 1 Radiometrically Corrected (1R, RADCOR): pixels are neither resampled nor geometrically corrected or registered; radiometric artefacts are removed and the data are calibrated to radiance units.
Level 1 System Corrected (1G): standard product for most users; radiometrically and geometrically corrected to a known map projection, image orientation and resampling algorithm; no atmospheric corrections are applied.
GTCE (Ground Terrain Corrected Enhanced) or L1T: rectified using the SRTM, NED, CDAD, DTED or GTOPO30 DEMs and control points; accuracy below 50 m RMSE.
4.1.3 THE SENSOR MODELS

To rectify the relationship between image and object, sensor models are required. They are separated into two categories: physical sensor models and generalised sensor models. Physical sensor models represent the physical imaging process, and they need parameters such as orbital information, sensor and ephemeris data, Earth curvature, atmospheric refraction, and lens distortion to describe the position and orientation of the sensor with respect to an object's position. These parameters are statistically uncorrelated, as each parameter has a physical significance. Physical models are rigorous, such as with collinearity equations, and they normally produce a highly accurate model.

Because they are sensor-dependent, it is not convenient for users to switch among different software packages or add new sensor models into their systems, and in some cases the physical sensor models are not available at all. Without knowing the above-mentioned parameters, it is very difficult to develop a rigorous physical sensor model.

For these reasons, generalised sensor models were developed, independent of sensor platforms and sensors. These model the transformation between image and object as some general function, without the inclusion of the physical imaging process. The function can be in several different forms such as polynomials and, since they do not require knowledge of the sensor geometry, they are applicable to different sensor types and support real-time calculations, which are used in military surveillance applications. Also, because of their independence from the physical parameters, they provide a mechanism for commercial vendors to keep information about their sensors confidential.

However, when using conventional polynomials there is a tendency to oscillation, which produces much lower accuracy than a rigorous sensor model. Thus, there was a need for the civilian and military satellite companies/agencies to develop a generalised sensor model with high accuracy and without a functional relationship to the physical parameters of the satellite. For this reason, the Rational Function Model (RFM) was developed. This model is currently used by most VHR satellite companies.

The RFM is a generic form of polynomial models. It defines the relationship between a ground point and the corresponding image point as ratios of polynomials:

\[ x = \frac{p_1(X,Y,Z)}{p_2(X,Y,Z)} = \frac{\sum_{i=0}^{m_1}\sum_{j=0}^{m_2}\sum_{k=0}^{m_3} a_{ijk}\,X^i Y^j Z^k}{\sum_{i=0}^{n_1}\sum_{j=0}^{n_2}\sum_{k=0}^{n_3} b_{ijk}\,X^i Y^j Z^k} \]

\[ y = \frac{p_3(X,Y,Z)}{p_4(X,Y,Z)} = \frac{\sum_{i=0}^{m_1}\sum_{j=0}^{m_2}\sum_{k=0}^{m_3} c_{ijk}\,X^i Y^j Z^k}{\sum_{i=0}^{n_1}\sum_{j=0}^{n_2}\sum_{k=0}^{n_3} d_{ijk}\,X^i Y^j Z^k} \]

where x, y are normalised pixel coordinates on the image; X, Y, Z are normalised 3D coordinates on the ground; and a_ijk, b_ijk, c_ijk, d_ijk are polynomial coefficients. If we limit the polynomials to the third order (0 ≤ m1 ≤ 3, 0 ≤ m2 ≤ 3, 0 ≤ m3 ≤ 3, m1+m2+m3 ≤ 3), then the above equations can be re-written as follows:

\[ \mathrm{row} = \frac{(1\;Z\;Y\;X\;\dots\;Y^3X^3)\,(a_0\;a_1\;\dots\;a_{19})^T}{(1\;Z\;Y\;X\;\dots\;Y^3X^3)\,(1\;b_1\;\dots\;b_{19})^T} \qquad \mathrm{column} = \frac{(1\;Z\;Y\;X\;\dots\;Y^3X^3)\,(c_0\;c_1\;\dots\;c_{19})^T}{(1\;Z\;Y\;X\;\dots\;Y^3X^3)\,(1\;d_1\;\dots\;d_{19})^T} \]

The superscript T denotes a vector transpose. Ratios of first-order terms represent distortions caused by the optical projection; ratios of second-order terms approximate the corrections for Earth curvature, atmospheric refraction, lens distortions and more; ratios of third-order terms can be used for the correction of other unknown distortions with high-order components (Tao et al., 2000). Grodecki (2001) offers a detailed explanation of the RFM.

The polynomial coefficients are called Rational Function Coefficients (RFCs) (Tao et al., 2000) or Rational Positioning Capability (RPC) data (Open GIS Consortium, 1999), and the imagery provider gives them to the user for the application of the model. They are also termed Rational Polynomial Coefficients (RPCs, a term used by SpaceImaging and by Fraser and Hanley, 2003), while the RFM is also termed Universal Sensor Model (Open GIS Consortium, 1999). Dowman and Dolloff (2000) separate RFM and USM, considering USM to be an extension of the RFM. Like everything new, the terminology is not universal, but varies according to who is discussing the topic.

In the case of RPCs, there are differences among the VHR satellites. For example, the RPCs for IKONOS provide very good results and a shift is enough to improve the supplied RPCs (Fraser and Hanley, 2005; Toutin, 2006), and the imaging system is free of significant non-linearities (Fraser et al., 2002). The RPCs for Quickbird perform better when corrected with higher-order polynomials (Fraser et al., 2006; Toutin, 2006), and even better when the preprocessing occurs in combination with a rigorous approach (Valadan Zoej and Sadeghian, 2003; Toutin, 2003); this is because Quickbird is provided with complete physical model metadata, while the IKONOS physical model is still not published. However, this approach is sensitive to the number and distribution of the ground control (Wolniewicz, 2004). In all cases, the relief of the ground can influence the results.
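As an illustration of how such a rational function model is evaluated, the sketch below computes normalised image coordinates from normalised ground coordinates given four sets of 20 coefficients. It is a minimal, generic implementation of the ratio-of-cubic-polynomials idea described above; the monomial ordering and the coefficient values are placeholders, not those of any specific RPC file format.

```python
import numpy as np

def cubic_terms(X, Y, Z):
    """All 20 monomials X^i * Y^j * Z^k with i + j + k <= 3 (ordering is illustrative)."""
    terms = []
    for i in range(4):
        for j in range(4 - i):
            for k in range(4 - i - j):
                terms.append((X ** i) * (Y ** j) * (Z ** k))
    return np.array(terms)  # 20 values

def rfm_project(X, Y, Z, a, b, c, d):
    """Normalised ground point -> normalised image point via the rational function model."""
    t = cubic_terms(X, Y, Z)
    x = np.dot(a, t) / np.dot(b, t)   # e.g. sample / column direction
    y = np.dot(c, t) / np.dot(d, t)   # e.g. line / row direction
    return x, y

# Placeholder coefficients (20 each); real RPCs are supplied by the imagery provider.
a = np.random.rand(20); b = np.random.rand(20)
c = np.random.rand(20); d = np.random.rand(20)
print(rfm_project(0.1, -0.2, 0.05, a, b, c, d))
```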
4.1.4 PREPROCESSING STAGE
The preparation of the data before the processing stages has become a key issue for applications using multisource digital data. The main steps include the translation of all data into digital format, and geometric and radiometric correction.
Table 2. Description of error sources (Toutin, 2004)
Acquisition system, platform-related: platform movement (altitude, velocity); platform attitude (roll, pitch, yaw)
Acquisition system, sensor-related: viewing angles; panoramic effect with the field of view; sensor mechanics (scan rate, scanning velocity, etc.)
Acquisition system, instrument-related: time variations or drift; clock synchronicity
Area of interest, atmosphere-related: refraction and turbulence
Area of interest, Earth-related: rotation, curvature, topographic relief
Area of interest, map-related: choice of coordinate system, approximation of reality
The kind of application and the level of accuracy required define the methods utilised for preprocessing. It mainly depends on the data characteristics and the nature of the application. The data preprocessing stage demands high accuracy, and so the best approach must always be sought according to the available means.

The possible error sources that need to be corrected in an image are separated into two broad categories: errors due to the acquisition system, and errors due to the observed area of interest. Some of these distortions, especially those related to instrumentation, are corrected at the ground receiving stations. Toutin's (2004) categorisation of all such errors is shown in Table 2. There are two main ways to rectify these distortions; both require models and mathematical functions to be applied. One way is the use of rigorous physical models. These models are applied in a distortion-by-distortion correction at the ground receiving station to offer different products (for example the IKONOS group of image products).

The other way is the use of generalised models, with either polynomial or rational functions. The generalised models method was tried with success and is mostly used in this research project. Polynomial functions are still in use today by many users, mainly because of their simplicity; their usage was prevalent until the 1980s, but with the increased need for accuracy other, more detailed functions replaced them. Today, polynomial models are limited to nadir-viewing images, systematically corrected images or small images on relatively flat terrain (Bannari et al., 1995), and according to De Leeuw et al. (1988) the GCPs3 have to be numerous and distributed as evenly as possible in the area of interest.

3 Chen and Lee (1992) use the term RPCs (Registration Control Points), which is more precise. However, to avoid confusion, this term will not be used here.
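As a simple illustration of the polynomial approach, the sketch below fits a first-order (affine) polynomial transformation from a handful of ground control points by least squares and applies it to map image coordinates to ground coordinates. The GCP values are invented; a real correction would use many well-distributed GCPs and, for higher orders, more coefficients.

```python
import numpy as np

# Hypothetical ground control points: (column, row) in the image and (E, N) on the ground
image_xy  = np.array([[10, 20], [400, 35], [380, 450], [25, 430], [200, 240]], dtype=float)
ground_EN = np.array([[500010, 4200020], [500400, 4200030],
                      [500390, 4200445], [500020, 4200435], [500205, 4200240]], dtype=float)

# Design matrix for a first-order polynomial: E = a0 + a1*x + a2*y, N = b0 + b1*x + b2*y
A = np.column_stack([np.ones(len(image_xy)), image_xy[:, 0], image_xy[:, 1]])
coef_E, *_ = np.linalg.lstsq(A, ground_EN[:, 0], rcond=None)
coef_N, *_ = np.linalg.lstsq(A, ground_EN[:, 1], rcond=None)

def image_to_ground(x, y):
    """Apply the fitted polynomial to one image coordinate pair."""
    v = np.array([1.0, x, y])
    return v @ coef_E, v @ coef_N

print(image_to_ground(200, 240))
residuals = A @ np.column_stack([coef_E, coef_N]) - ground_EN
print("RMSE per axis:", np.sqrt((residuals ** 2).mean(axis=0)))
```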
4.1.5 THE BASEMAP PROBLEM
An important concept in spatial integration is the spatial standard. Geographical information systems (GIS) provide tools to make two or more different spatial data sources match each other, but without reference to a common basemap standard it is difficult to go any further. A spatial basemap provides a common framework to which any data source can be registered and, once registered, all other data meeting that same standard are immediately available for comparison with the new data. A basemap also commonly includes control points, precisely located benchmark coordinates that allow the error and accuracy of positional data to be readily determined.

Thus, it is important to establish a good basemap standard where all data can be registered. The basemap should offer good and reliable control for the rest of the data. For this, one should examine the quality of the data and their suitability as a basemap. The available project data are separated into two broad categories: the satellite data and the reference data.
4.1.6 IMAGE RECTIFICATION AND RESAMPLING
At this point, there is understanding about the satellite data, all the data are in digital format, and the basemap layer has been assessed. The next step is the rectification and resampling of the data for integration under a common basemap.

Ehlers (1997) defines remote sensing image rectification (or 'geocoding') as the process of an actual pixelwise geometric transformation of an image to an absolute coordinate system. In this project, there is no absolute coordinate system. Instead, there are datasets of satellite and reference data, with an estimated error from an absolute coordinate system. For this reason, a better definition needs to be adopted for the action of integrating the project data. According to Ehlers (1997), this is defined as registration, which is the process of an actual geometric transformation of a 'slave' image to the geometry of a 'master' image or dataset.
However, one must always keep in mind that the RMSE can be a useful indicator of accurate image rectification only if another means of calibration is available to evaluate standards (Morad et al., 1996); otherwise, it is just a diagnostic of weak accuracy (McGwire, 1996). The use of independent, well-distributed test points that are not used in the image geometric transformation would give a more precise estimate of the residual error (Ehlers, 1997).

Buiten and van Putten (1997) suggest a way to assess the quality of the satellite data registration by applying tests, so that the user can gain a better insight into the quality of the image registration. For a detailed review of image registration see Zitová and Flusser (2003).

The image registration process has one more step after the application of the model: the resampling, which is part of the registration process. When the 'slave' image is transformed to the new geometry/location, the pixel positions will not be the same, so new pixel values need to be interpolated. This is called resampling (Ehlers, 1997).

The three main resampling methods are nearest neighbour, bilinear interpolation, and cubic convolution interpolation. Nearest-neighbour resampling simply assigns the brightness value of the raw pixel that is nearest to the centre of the registered pixel; thus, the raw brightness values are retained. This resampling method is mainly preferred if the registered image is to be classified. Bilinear interpolation uses three linear interpolations over the four pixels that surround the registered pixel. Cubic convolution interpolation uses the surrounding sixteen pixels. Both bilinear and cubic interpolations smooth the image, and they are mainly used for photointerpretation purposes. However, they are not suggested if the spectral detail is of any importance to the application (Richards and Jia, 1999).

According to Richards and Jia (1999), cubic convolution would be the resampling method used for photointerpretation purposes, but Philipson (1997) argues that the contrast must be preserved and not smoothed. The choice of resampling method is ultimately a personal choice of the photointerpreter.
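To make the difference between the first two methods concrete, the following sketch resamples a single output pixel from a raw image grid using either nearest-neighbour assignment or bilinear interpolation. It is a minimal illustration with an invented 2x2 neighbourhood, not a full image-warping routine.

```python
import numpy as np

raw = np.array([[10.0, 20.0],
                [30.0, 40.0]])   # brightness values of a 2x2 raw-pixel neighbourhood

def nearest_neighbour(raw, x, y):
    """Take the value of the raw pixel closest to the (x, y) position (column, row)."""
    return raw[int(round(y)), int(round(x))]

def bilinear(raw, x, y):
    """Interpolate linearly along x on the two rows, then linearly along y."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    top    = raw[y0, x0]     * (1 - dx) + raw[y0, x0 + 1]     * dx
    bottom = raw[y0 + 1, x0] * (1 - dx) + raw[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bottom * dy

# The registered pixel centre falls at (0.25, 0.75) in raw-image coordinates
print(nearest_neighbour(raw, 0.25, 0.75))  # 30.0: an original value is retained
print(bilinear(raw, 0.25, 0.75))           # 27.5: a smoothed value
```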
4.1.7 GEOGRAPHICAL INFORMATION SCIENCE
The idea of portraying different layers of data on a series of base maps, and relating things geographically, has been around much longer than computers. The best-known example is the case of Dr. John Snow, who used a map showing the locations of deaths from cholera in central London in September 1854 to track the source of the outbreak to a contaminated well. For such applications, a software system was developed that later expanded into a science: Geographical Information Systems/Science (GIS). The users of GIS span many fields, including city and regional planning, architecture, geology and geomorphology, hydrology, geography, computer science, remote sensing and surveying. Given such wide application and divergent constituencies, there is no single universally accepted definition of GIS. An early functional definition states (Calkins and Tomlinson, 1977):

"A geographical information system is an integrated software package specifically designed for use with geographic data that performs a comprehensive range of data handling tasks. These tasks include data input, storage, retrieval and output, in addition to a wide variety of descriptive and analytical programs."

A more recent definition (Goodchild et al., 1999) states the meaning of geographical information and geographical information science:

"Geographical information (GI) can be defined as information about the features and phenomena located in the vicinity of the surface of the Earth. [...] The fundamental primitive element of GI is the record where U represents some 'thing' (a class, a feature, a concept, a measurement or some variable, an activity, an organism, or any of a myriad possibilities) present at some location (x,y,z,t) in space-time."

Information science generally can be defined as the systematic study, according to scientific principles, of the nature and properties of information. From this position it is easy to define GIScience as the subset of information science that is about GI.

In every application, the inclusion of GIS as a processing tool is an approach that leads to a different perspective on the underlying problem. By using the simple topological figures of polygon, line and point, one can express everything that exists in the (x, y, z, t) space. Under one coordinate system, all real world data can be overlaid and analysed. It would take many pages to detail analytically the capabilities of GIS in data analysis. In a few words, the conceptual components of a GIS are (Jakeman et al., 1996):

The database is all data files that can be accessed by the user; data are organised via some common and controlled approach. The database manager performs all database retrieval, handling, storage and manipulation functions. The simple data summary consists of tasks needed to respond to user requests, and is preliminary to analytical processes. The data entry and cleaning are procedures for entering and editing data. The user interface is the main working environment, which has moved from the one-dimensional command line to the object-oriented one; it is the interaction space between the user and the computer. The analysis includes procedures that derive information from the data. It is the most important part of a GIS, and it incorporates a variety of analytical techniques which, when combined, answer the specific needs of the user of the system.
4.1.8 PROCESSING STAGE

After all data are integrated in a common space, it is time to proceed with either qualitative or quantitative information extraction; in other words, photointerpretation or image analysis.

Colwell (1960) defined photographic interpretation (also termed photointerpretation) as

"the process by which humans examine photographic images for the purpose of identifying objects and judging their significance".

With the advent of computer technology, the methods for photographic interpretation changed, and the new term image analysis (also termed quantitative analysis) came to complement the old term:

"Image analysis is the process by which humans and/or machines examine photographic images and/or digital data for the purpose of identifying objects and judging their significance" (Philipson, 1997).

Photointerpretation involves direct human interaction, and thus it is good for spatial assessment but not for quantitative accuracy. By contrast, image analysis requires little human interaction and is mainly based on machine computational capability; thus it has high quantitative accuracy but low spatial assessment capability.

Today, both techniques are used in very specific and complementary ways, and the approaches have their own roles. On the one hand, if digital image processing is applied beforehand to enhance the imagery, this helps the photointerpreter in his work. On the other hand, image analysis depends on information provided at key stages by an analyst, who is often using photointerpretation (Richards and Jia, 1999).

Konecny (2003) defines remote sensing and photogrammetry according to their object of study:

"Photogrammetry concerns itself with the geometric measurement of objects in analogue or digital images."

"Remote sensing can be considered as the identification of objects by indirect means using naturally existing or artificially created force fields."

Thus, photogrammetric techniques were adopted by remote sensing mainly for quantitative analysis. In its turn, remote sensing expanded the data that could aid an image analyst with the extraction of quantitative information.

All of the above terms give a specific meaning to the approaches, but the approaches complement each other when it comes to implementation. In other words, the sciences of photogrammetry and remote sensing moved from their previous independent ways of working towards a more interdisciplinary network, where, in comparison with other sciences like Geographical Information Systems, Geodesy and Cartography, they produce better results and increase the processing capability for modern day applications (figure 5).

Figure 5. Classical and modern geospatial information system (reproduced from Konecny, 2003)
4.1.9 SUMMARY

The main aim of this paper was to provide a brief overview of satellite imagery today, and of the main approaches to extract information from the image data. This is not enough, though, and the reader should get better informed through dedicated classics such as Lillesand et al. (2004), Richards and Jia (1999), Sabins (2007) and Jensen (2006), to name a few.

The satellite imagery is nothing more than data collected at a particular moment in time, from a specific area. There is an abundance of archived satellite imagery in different spatial, radiometric and spectral resolutions, and with different swath widths and prices. Furthermore, there is an abundance of active satellite sensors with different characteristics that can provide imagery. Careful selection of the imagery appropriate for the application/project is the first step.

These data (the satellite imagery) should then be spatially integrated in a common co-ordinate system with the other data of the project, e.g. ground collected data, reference data, etc. To achieve this, first the data must be in a common format (e.g. digital). Then, a base layer is needed that can accommodate all data with as good spatial certainty as possible and as close to reality as possible. There is a variety of sensor models to use so as to approximate reality, and all preprocessing steps should be recorded. This way, it is possible to provide a picture of the quality of the integrated data.

The last stage involves the extraction of useful information from the data. The approaches can provide qualitative and/or quantitative information. The qualitative information extraction has long been done through the photointerpretation approach. It is an approach highly dependent on human physiology and psychology, and it is analysed into elements and levels. The computer as a tool increases the interpreter's perception and photointerpretation ability through image enhancements in the radiometric, spectral and spatial space, and in the spatial frequency domain. On the other hand, the quantitative information extraction needs numerical preprocessing certainty for a statistically robust result. However, it still needs the human analyst, who translates the results from numbers into something more meaningful by using photointerpretation and human perception during the different stages of the process.

The computations and the imagery may display particular information, but they cannot display its significance, and they cannot know how to proceed in a process. It is always up to the human to understand the importance of the results and to adopt certainty for the final decisions/conclusions. Similarly, remote sensing may provide information about surface features, but it cannot detect the cultural significance of the landscape morphology. However, the closer to reality this information is, the larger the chance for the human to arrive at correct conclusions.

4.1.10 EXERCISES

For any practical application of remote sensing that you specify, define and critically discuss its spatial, temporal and spectral characteristics and the kind of image data required. Which aspects need to be considered to assess whether the statement "Remote sensing data acquisition is a cost effective method" is true? Which types of sensors are mostly used in your discipline or field of interest?

ABSTRACT

The era of satellites for earth observation started in the 1960s for meteorological and military applications. The multispectral concept helped earth observation to 'take off' in other applications with Landsat in 1972. Since then, a huge archive of image data has become available for almost every place on Earth, and satellite imagery has been utilised in many different specialties. Today, more and more satellites are launched with improved characteristics and special properties according to the application market targeted. However, there is no satellite tailor-made for archaeological applications. This chapter provides an insight into existing and future satellite image data, along with their properties, and how the archaeologist can make the best use of them.

References

BANNARI, A.; MORIN, D.; BÉNIÉ, G.B. and BONN, F.J. 1995. A theoretical review of different mathematical models of geometric corrections applied to remote sensing images, Remote sensing reviews, vol. 13, pp. 27-47.
BUITEN, H.J. and VAN PUTTEN, B. 1997. Quality assessment of remote sensing image registration – analysis and testing of control point residuals, ISPRS journal of photogrammetry and remote sensing, vol. 52, pp. 57-73.
CALKINS, H.W. and TOMLINSON, R.F. 1977. Geographic Information Systems: methods and equipment for land use planning. Ottawa, International Geographical Union, Commission of Geographical Data Sensing and Processing and U.S. Geological Survey.
CLARK, C.D.; GARROD, S.M. and PARKER PEARSON, M. 1998. Landscape archaeology and Remote Sensing in southern Madagascar, International journal of remote sensing, vol. 19, No. 8, pp. 1461-1477.
COLWELL, R.N. (ed.) 1960. Manual of photographic interpretation, American society of photogrammetry.
DE LEEUW, A.J.; VEUGEN, L.M.M. and VAN STOKKOM, H.T.C. 1988. Geometric correction of remotely-sensed imagery using ground control points and orthogonal polynomials, International journal of remote sensing, vol. 9, Nos. 10 and 11, pp. 1751-1759.
DOWMAN, I. and DOLLOFF, J.T. 2000. An evaluation of rational functions for photogrammetric restitution, International archives of photogrammetry and remote sensing, vol. XXXIII, part B3, Amsterdam.
EHLERS, M. 1997. Rectification and registration. In: Star, J.L.; Estes, J.E. and McGwire, K.C. (eds.), Integration of geographic information systems and remote sensing: Topics in remote sensing 5, Cambridge University Press.
FRASER, C.S.; HANLEY, H.B. and YAMAKAWA, T. 2002. Three-dimensional geopositioning accuracy of IKONOS imagery, Photogrammetric record, vol. 17, No. 99, pp. 465-479.
FRASER, C.S. and HANLEY, H.B. 2003. Bias compensation in rational functions for IKONOS satellite imagery, Photogrammetric engineering and remote sensing, vol. 69, No. 1, pp. 53-57.
FRASER, C.S. and HANLEY, H.B. 2005. Bias-compensated RPCs for sensor orientation of high-resolution satellite imagery, Photogrammetric engineering and remote sensing, vol. 71, pp. 909-915.
FRASER, C.S.; DIAL, G. and GRODECKI, J. 2006. Sensor orientation via RPCs, ISPRS journal of photogrammetry and remote sensing, vol. 60, pp. 182-194.
GOODCHILD, M.F.; EGENHOFER, M.J.; KEMP, K.K.; MARK, D.M. and SHEPPARD, E. 1999. Introduction to the Varenius project, International journal of geographical information science, vol. 13, No. 8, pp. 731-745.
GRODECKI, J. 2001. IKONOS Stereo Feature Extraction – RPC Approach, Proceedings of ASPRS 2001 conference, 23-27 April, St. Louis.
JAKEMAN, A.J.; BECK, M.B. and MCALEER, M.J. 1996. Modelling change in environmental systems, John Wiley and Sons.
JENSEN, R.J. 2006. Remote sensing of the environment: an earth resource perspective (2nd edition), Prentice Hall.
KONECNY, G. 2003. Geoinformation: Remote sensing, Photogrammetry and Geographic Information Systems, Taylor and Francis, London.
KOUCHOUKOS, N. 2001. Satellite images and Near Eastern landscapes, Near Eastern archaeology, vol. 64, No. 1-2, pp. 80-91.
LANDGREBE, D. 1997. The evolution of Landsat data analysis, Photogrammetric engineering and remote sensing, vol. 63, No. 7, pp. 859-867.
LILLESAND, T.M.; KIEFER, R.W. and CHIPMAN, J.W. 2004. Remote sensing and image interpretation, 5th edition, John Wiley & Sons.
MCGWIRE, K.C. 1996. Cross-validated assessment of geometric accuracy, Photogrammetric engineering and remote sensing, vol. 62, No. 10, pp. 1179-1187.
MORAD, M.; CHALMERS, A.I. and O'REGAN, P.R. 1996. The role of mean-square error in the geo-transformation of images in GIS, International journal of geographical information systems, vol. 10, No. 3, pp. 347-353.
OPEN GIS CONSORTIUM, 2004. The OpenGIS™ Abstract Specification, Topic 7: The Earth Imagery Case, 99107.doc, http://www.opengeospatial.org/standards/as
PHILIPSON, W.R. (ed.) 1997. Manual of photographic interpretation, Science and engineering series, American society of photogrammetry and remote sensing.
REEVES, R.G. (ed.) 1975. Manual of remote sensing, American society of photogrammetry.
RICHARDS, J.A. and JIA, X. 1999. Remote Sensing digital image analysis – An introduction, Springer-Verlag, Berlin Heidelberg.
RICHELSON, J.T. 1999. U.S. Satellite Imagery, 1960-1999, National Security Archive Electronic Briefing Book No. 13, http://www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB13/ (last accessed: December 2011).
SABINS, F.F. 2007. Remote sensing: principles and interpretation, 3rd edition, Waveland Pr. Inc.
SCOLLAR, I.; TABBAGH, A.; HESSE, A. and HERZOG, I. 1990. Archaeological prospecting and Remote Sensing, Cambridge University Press.
TAO, C.V.; HU, Y.; MERCER, J.B.; SCHNICK, S. and ZHANG, Y. 2000. Image rectification using a generic sensor model – rational function model, International archives of photogrammetry and remote sensing, vol. XXXIII, part B3, Amsterdam.
TOUTIN, T. 2003. Error tracking in IKONOS geometric processing using a 3D parametric model, Photogrammetric engineering and remote sensing, vol. 69, pp. 43-51.
TOUTIN, T. 2004. Review article: Geometric processing of remote sensing images: models, algorithms and methods, International journal of remote sensing, vol. 25, No. 10, pp. 1893-1924.
TOUTIN, T. 2006. Comparison of 3D physical and empirical models for generating DSMs for stereo HR images, Photogrammetric engineering and remote sensing, vol. 72, pp. 597-604.
TROMBOLD, C.D. 1991. Ancient road networks and settlement hierarchies in the New World, Cambridge University Press.
VALADAN ZOEJ, M.J.V. and SADEGHIAN, S. 2003. Rigorous and non-rigorous photogrammetric processing of IKONOS Geo image, Proceedings of the ISPRS joint workshop "High resolution mapping from space", Hannover, Germany.
WOLNIEWICZ, W. 2004. Assessment of geometric accuracy of VHR satellite images, Proceedings of the XXth ISPRS congress, International archives of the photogrammetry, remote sensing and spatial information sciences, 35 (part B1), Istanbul, Turkey, pp. 1-5 (CD-ROM).
ZITOVÁ, B. and FLUSSER, J. 2003. Image registration methods: a survey, Image and vision computing, vol. 21, pp. 977-1000.
5 GIS
5.1 2D & 3D GIS AND WEB-BASED VISUALIZATION Giorgio AGUGIARO
5.1.1 DEFINITIONS
In the simplest terms, a geographical information system (GIS) can be considered as the merging of cartography, statistical analysis, and database technology. A more elaborate definition is given by Clarke (1986): a geographical information system can be defined as a computer-assisted system for capturing, storing, retrieving, analysing, and displaying spatial data. Due to its general character, some authors (e.g. Cowen, 1988) have argued that such a vague definition allows the term GIS to be applied to almost any software system able to display a map (or map-like image) on a computer output device. Nevertheless, what characterises a GIS is its capability to handle spatial data that are geographically referenced to a map projection in an Earth coordinate system, and to perform spatial analyses using such data (Maguire et al., 1991). Moreover, most of today's GIS software packages allow geographical data to be projected from one coordinate system into another, so that heterogeneous data from different sources can be collected into a common database and layered together for mapping purposes and further analyses.

It must be noted that the terms geographical and spatial are often used interchangeably, for example when referring to data or describing geographical features. However, strictly speaking, "spatial" is more general and is intended for any type of information tied to a location, while "geographical" refers only to information about or near the Earth's surface. Another word often used for geographical data is geodata.

Geographical information systems find application in many different fields, ranging for example from cartography, urban planning, environmental studies, and resource management, up to archaeology, agriculture, marketing and risk assessment – not to mention the steadily growing number of web-based applications, which have been booming in the past ten years (e.g. Google Maps, OpenStreetMap, etc.).

5.1.1.1 Geodata

An information system relies on a data model, i.e. a description of how to structure and save the data to be handled (Hoberman, 2009). If real objects are going to be represented, their description can be done by means of descriptive data identifying them clearly and univocally. For example, a car can be described in terms of manufacturer, model type, engine, colour, number plate, etc. These data are generally called thematic or attribute data. Data can be retrieved by means of queries, which define the criteria for how to extract data from the archive, e.g. all red cars built in a certain year. Moreover, it is possible to model and to store relations between different objects: a person (object) can be the owner (relation) of a certain car (object). Therefore, proper queries are performed to extract data according to this relational information, e.g. all cars belonging to the same owner.

A geographical information system relies itself on a data model, the only difference being that it must deal not only with attribute data, but also with geometric data. The latter describe the geographical position, the shape, the orientation and the size of objects.

Moreover, in a GIS spatial (topological) relations among different objects can be defined and stored. Unlike geometry, which describes the absolute shape and position in space, topology describes relations between objects in terms of neighbourhood. Typical topological relations are union/disjunction, inclusion and intersection. Given two objects A and B, a spatial query is performed for example to determine whether A is inside B, or A intersects B, or to obtain the object resulting from the union of A and B.

GIS are therefore characterised by the integrated management of attributes, geometry and topology.
Figure 1. Example of relational model: two tables (here: countries and cities) are depicted schematically (top). Attribute names and data types are listed for each table. The black arrow represents the relation existing between them. Data contained in the two tables is presented in the bottom left, and the result of a possible query in the bottom right. The link between the two tables is realized by means of the country_id columns

The advantage is that queries or data analyses can be carried out using all the above-mentioned types of information at the same time. In a geographical information system, data are stored according to a so-called relational model: all attributes of homologous objects, and their relations with each other, are organised and stored by means of tables linked to the geometric features (Date, 2003). A simple example is given in Figure 1.
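As a minimal illustration of such a relational model and of attribute queries, the sketch below builds two linked tables (countries and cities, echoing Figure 1) in an in-memory SQLite database and runs a query across the relation. The table and column names and the values are invented for the example and are not taken from any particular GIS package.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Two tables linked by country_id, as in the schematic of Figure 1
cur.execute("CREATE TABLE countries (country_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE cities (city_id INTEGER PRIMARY KEY, name TEXT, "
            "population INTEGER, country_id INTEGER REFERENCES countries(country_id))")

cur.executemany("INSERT INTO countries VALUES (?, ?)", [(1, "Italy"), (2, "Austria")])
cur.executemany("INSERT INTO cities VALUES (?, ?, ?, ?)",
                [(1, "Rome", 2870000, 1), (2, "Trento", 117000, 1), (3, "Vienna", 1900000, 2)])

# Attribute query across the relation: all cities of a given country above a population threshold
cur.execute("""SELECT cities.name FROM cities
               JOIN countries ON cities.country_id = countries.country_id
               WHERE countries.name = ? AND cities.population > ?""", ("Italy", 500000))
print(cur.fetchall())   # [('Rome',)]
```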
5.1.1.2 Geometry models: rasters and vectors

As far as geometry is concerned, two broad models are mainly adopted and used to store data in a GIS, depending on whether the object to be represented is a continuous field, as in the case of elevations, or discrete, like a bridge or a house. The former is generally stored by means of raster data, the latter as vector data.

In a raster model the area of interest is regularly divided (tessellated) into elementary units, called cells and all having the same size and shape. They are conceptually analogous to pixels (picture elements) in a digital image (see Figure 2, top). Most common are square or rectangular cells. Every cell holds a numeric value, so that the data structure is conceptually comparable to a matrix. The size of the cells also determines the resolution of the data. Raster data models are best used to represent continuously varying spatial phenomena like terrain elevation, amount of rainfall, population density, soil classes, etc. The cells can store values directly (e.g. elevation values), or values which represent keys to an externally linked table, itself containing attributes (e.g. soil classes). Thanks to their regularly gridded structure, operations can be carried out by means of so-called map algebra: different maps are layered upon each other and functions then combine the values of each raster's matrix according to some criteria. Maps can, for example, be overlaid to identify and compute overlapping areas, or statistical analyses can be carried out on the cell values.
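A minimal sketch of the map-algebra idea: two co-registered rasters, represented here as small numpy arrays with invented values, are combined cell by cell to flag and measure an overlapping zone.

```python
import numpy as np

cell_area = 25.0  # m^2 per cell, assuming a 5 m x 5 m grid

# Two co-registered layers with invented values: elevation (m) and land-cover class codes
elevation  = np.array([[410, 455, 520],
                       [430, 470, 540],
                       [415, 460, 505]], dtype=float)
land_cover = np.array([[1, 2, 2],
                       [1, 2, 3],
                       [1, 1, 3]])   # e.g. 1 = meadow, 2 = forest, 3 = rock

# Map algebra: combine the layers cell by cell with Boolean operators
forest_above_450 = (land_cover == 2) & (elevation > 450)

print(forest_above_450.astype(int))          # binary result map
print(forest_above_450.sum() * cell_area)    # overlapping area in m^2
```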
In the vector model, objects are geometrically described using primitives (points, lines and polygons), which can be further aggregated into ordered or unordered groups. A point is stored in terms of its coordinates in a given reference system; a line is defined through its start and end points. Multiple ordered lines define a polyline, in that the end point of a line is at the same time the start point of the successive line. The surface delimited by a planar and closed polyline defines a polygon.

If geometric primitives are grouped, more complex objects, like a river or a building, can be represented (see Figure 2, bottom). A group of homologous geometries results in a multi-geometry. For example, a multipoint geometry is formed by several points, while a multipolygon is formed by a group of polygons. If heterogeneous geometries are grouped, e.g. points, polylines and polygons together, then a geometry collection is created.
Figure 2. Raster (top) and vector (bottom) representation of point, line and polygon features in a GIS

Another vector data structure frequently used in GIS to represent surfaces are TINs (Triangulated Irregular Networks): they consist of irregularly distributed nodes and lines with 2D or 3D coordinates that are organized in a network of non-overlapping triangles. TINs are often used to represent terrain models, surfaces, or any other continuous features. Unlike rasters, different point densities are allowed in the same TIN model; thus higher resolutions allow for a more precise representation of details (e.g. in the terrain), while a lower resolution can be used in areas that are less variable or less interesting.

Every object in a vector representation is given a unique key linking it to a table containing the attributes. A single object can therefore be characterised by multiple attributes, each one stored in a separate table column. Unlike map algebra with rasters, operations with vector data like calculations of areas or intersections are more complicated; however, most GIS packages nowadays offer tools to perform the most common operations (e.g. buffering, overlay, geostatistics, etc.).

Regarding dimensions, several are the possibilities offered by GIS to represent two- or three-dimensional geographical features. Due to their structure, raster data can best represent 2D objects like the surface of a lake. However, as in the case of an elevation model, height values stored in the cells provide some information about the third dimension. Given that for each raster cell only a single value can be stored, such models are defined 2.5D, to indicate an "intermediate" status between 2D and 3D models. The height is a function of the xy planar coordinates, so no multiple height values can exist at the same position in a raster cell. This leads to the impossibility of representing vertical features like walls, or objects like ridges or caves in a mountain. The reason is that for a cave, for instance, more values are needed at the same planimetric position: one for the cave floor, one for the cave ceiling, and finally one for the mountain surface on top of the cave.

A possibility to store 3D features in a raster-like fashion is offered by voxels (volumetric pixel or Volumetric Picture Element): each voxel represents the smallest subdivision unit of a volumetric domain and typically has the form of a cube or of a cuboid, with a given height, width and depth. Similarly to a raster cell, a voxel can store a numeric value, or a key to an externally linked table, itself containing attributes. Voxels are typically used to represent three-dimensional continuous features, like for example geological or atmospheric strata and their characteristics. Voxels can also be used in certain applications to overcome the limitations of rasters and represent terrain features, i.e. overhangs, caves, arches, or other 3D features.

Compared to the raster approach, vector data offer more chances when it comes to multiple dimensions. The coordinates of a point can be stored either in 2D (e.g. x, y) or in 3D (x, y, z). It follows that lines, polylines, polygons and the resulting aggregations can also be stored in a three-dimensional space. Moreover, vector data are not subject to the raster's limitations: for any given xy position, multiple z values can be stored in a vector-based model.

Sometimes also the time variable can be added to the three dimensions in space, resulting in a 4D representation. This is achieved, for example, in that every object is given a timestamp defining the object's properties at a certain moment. This enables to explore
data not only by means of spatial queries, but also to include time and investigate how a spatial feature has evolved or changed over the course of time.
Geodata can be saved in different file formats or into different databases. In practice, every commercial GIS-software producer tends to define and implement its own formats. This has led in the past to a plethora of proprietary formats, both for raster and for vector data, although in the last decade much more effort has been put into defining standards to facilitate data interoperability.

If ESRI's shapefiles have become de facto an accepted file-based exchange format for vector data, the same cannot be stated for raster data, although (geo)tiff files are generally widespread. Moreover, there is a gradual shift in storage strategies, in that more and more GIS packages offer the choice to use a spatially-enabled database management system (DBMS) to store and retrieve data, instead of relying on local file-system solutions.

Spatially-enabled DBMS differ from "standard" databases in that they are able to handle not only the usual attribute data, but also spatial data, both in terms of vectors and rasters. Commercial examples are IBM's DB2 with its Spatial Extender or Oracle Spatial, while the PostgreSQL DBMS with its spatial extension PostGIS, or SQLite coupled with its spatial extension Spatialite, are free and open-source alternatives.

Nevertheless, any GIS offers several functions to convert geodata between different standards and proprietary formats, whilst geometrically transforming the data in the process. These are generally called spatial ETL (Extract, Transform, Load) tools.

With growing amounts of stored data, it becomes vital that metadata are collected and properly managed along with the geodata. Metadata give, among others, information about the origin, quality, accuracy, owner and structure of the data they refer to. Most geographical information systems allow metadata to be edited and managed to some extent, or to be retrieved by means of external software products or on-line services.

Finally, most modern GIS allow a direct connection to already existing on-line data repositories, which publish geodata by means of web mapping services, thus facilitating data reuse for different purposes.

5.1.2 GIS FUNCTIONS

Compared to "classical" maps, a geographical information system allows the number of possible uses to be greatly extended with regard to the type and amount of data it stores and manages. Several functionalities are offered by most GIS environments, from data capture and maintenance to visualisation and geodata analysis tools.

5.1.2.1 Data capture and management

Data capture, i.e. the process of entering information into the system and storing it digitally, is a fundamental activity which applies to any GIS. Several methods exist according to the type and nature of the data to be imported. For existing "older" data sources, like maps printed on paper or similar supports, an intermediate digitalisation process is required. Printed maps are scanned and transformed into raster maps. This is the case, for example, of older analogue photos (often aerial imagery), which are scanned and imported as raster data. Alternatively, by means of a digitiser, older maps or images are used as reference to trace points, lines and polygons that are later stored as thematic vector data and enriched with attributes.

The digitisation process generally requires proper editing tools to correct errors or to further process and enhance the data being created. Raster maps may have to be corrected for flecks of dirt found on the original scanned paper. In the case of vector maps, errors in the digitising process may result in incorrect geometries or invalid topology, e.g. two adjacent polygons actually intersecting. Particular care must also be taken when assigning attributes.

Another possibility consists in acquiring and importing data directly from surveying instruments. For example, modern portable GNSS receivers offer up to sub-decimetre accuracy and can be directly interfaced with a GIS environment to download the measured features, sometimes even directly in the field.

Nowadays, nearly all surveying devices produce digital data, making their integration in a GIS environment more straightforward, since the intermediate digitalisation step can be skipped. Point clouds from laser scanning (both aerial and terrestrial), digital imagery from close range cameras (e.g. UAV), up to satellite multispectral or radar data are also typical products for GIS data collection. In particular, satellite-based imagery plays an important role due to the high frequency of data collection and the possibility to process the different bands to identify objects and classes of interest, such as land cover.

5.1.2.2 Data processing

As geodata are collected (and stored) in various ways, a GIS must be able not only to convert data between different formats, but also between different data structure models, e.g. from vector data to raster data and vice-versa. If a vector is to be transformed into a raster, this operation is called rasterisation. The user must set the cell dimension and which attribute field is to be converted into the values contained in the raster cells. Since the rasterisation process introduces a fixed discretisation in space, it is crucial that the raster resolution be set properly and according to the intended application. As a general rule, a raster cell is assigned a
value if the majority of its surface is covered by the input vector feature. If no vector features (points, lines or polygons) fall within or intersect a cell, its value remains set to null.

If raster data are to be converted into a vector, this operation is called vectorisation. In case vector points are to be created, the coordinates of the raster cell centre are generally used for the geometry and the raster cell value is stored as an attribute. In the case of linear or areal features, lines and polygons are created by grouping neighbouring raster cells with the same value and assigning it to the attribute table.

Another common GIS feature is georeferencing. This operation consists in defining the location of a dataset in terms of map projections or coordinate systems, i.e. its relation (reference) to the physical space (geo). For example, with vector point data one can assign longitude and latitude (and height) values to each point. A positioning device like a GNSS receiver can be used for this purpose. In this way, the position of a point is univocally set on the Earth's surface. If raster-based imagery is to be georeferenced, then a common approach consists in identifying control points on the image, assigning known geographic coordinates to them, choosing the coordinate system and the projection parameters, and eventually performing the actual coordinate transformation by means of intermediate operations (data interpolation, reduction of distortions, adjustment to the chosen coordinate system).

Whenever geodata are given in different coordinate systems, it is of primary relevance to transform them to a common one, in order to facilitate data comparison and analysis. Once the input and the output coordinate systems are known, a GIS can perform the coordinate transformation on the fly (e.g. when visualising heterogeneous data), or create an actual copy of the data in the new coordinate system. Common operations are map translations and/or rotations, transformations from geographic coordinates to any map projection (e.g. UTM, Universal Transverse Mercator) and vice-versa, up to more complex transformations from one national reference system to another.
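As an illustration of such an on-the-fly coordinate transformation, the short sketch below converts a point from geographic coordinates (WGS84) to UTM using the pyproj library; this assumes pyproj is installed, and the coordinates and the UTM zone are chosen purely as an example.

```python
from pyproj import Transformer

# Geographic (longitude/latitude, WGS84) to UTM zone 32N (EPSG:32632), example values
transformer = Transformer.from_crs("EPSG:4326", "EPSG:32632", always_xy=True)

lon, lat = 11.12, 46.07          # somewhere near Trento, purely illustrative
easting, northing = transformer.transform(lon, lat)
print(round(easting, 1), round(northing, 1))

# The inverse transformation recovers the geographic coordinates
inverse = Transformer.from_crs("EPSG:32632", "EPSG:4326", always_xy=True)
print(inverse.transform(easting, northing))
```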
5.1.2.3 Spatial analysis

The term spatial analysis refers to a vast range of operations that can be performed in most common GIS packages, although at different levels of complexity. In general terms, spatial analysis is defined as a set of processes to obtain useful information from raw data in order to facilitate decision-making. The goal is to discover and explore patterns and/or relations between objects, where these relations may not be immediately visible.

Given the vast range of existing spatial analysis techniques, this subject can be covered only to a limited extent in this chapter, so only the most common ones will be presented. A comprehensive review can be found, for example, at the Geospatial Analysis website (www.spatialanalysisonline.com).

Queries according to some criteria are the first and most immediate type of spatial analysis. Data are selected and extracted from a larger dataset, e.g. for immediate visualisation or possibly for further use. Selection criteria can be according to "standard" attributes (e.g. "How many provinces belong to a given Italian region?") or can be of a spatial nature (e.g. "Select all cities touched by the Danube river").

Whenever a buffer is created, a closed area around the input object (a point, a line or a polygon) is created. Its boundaries identify a delimited portion of space which is no farther than a certain distance from the input object, which itself remains unchanged. Buffers can sometimes be used to perform analyses by means of overlay operations. The term overlay refers to a series of operations where a set of two or more thematic maps is combined to create a new map. This process is conceptually similar to overlaying Venn diagrams and performing operations like union, intersection or difference. In a union overlay, the geometric features, and the accompanying attributes, of two or more input maps are combined, in that all features and attributes from both maps are merged. In the case of an intersection overlay, only overlapping features are kept. With rasters, map overlay operations can be accomplished in the framework of map algebra by means of Boolean operators.
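A minimal sketch of buffering and overlay on vector data, using the shapely library (assuming it is available); the site location, river course and buffer distances are invented for the example.

```python
from shapely.geometry import Point, LineString

# Invented features in a projected coordinate system (metres)
site  = Point(1000, 2000)                       # e.g. an archaeological site
river = LineString([(0, 1500), (1200, 2100), (2500, 1900)])

# Buffer: closed area no farther than 300 m from the site; the input point is unchanged
site_buffer = site.buffer(300)

# Overlay-style operations between the buffer and a 50 m corridor around the river
river_corridor = river.buffer(50)
overlap = site_buffer.intersection(river_corridor)
merged  = site_buffer.union(river_corridor)

print(round(overlap.area, 1))                    # overlapping area in m^2
print(site_buffer.contains(Point(1100, 2050)))   # simple spatial query: is the point inside?
```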
Terrain analysis tools are generally widely available in most modern GIS packages. Starting from a terrain model (e.g. a DTM or a DSM), generally provided as a raster, typical products are maps of slope, aspect or surface curvature. They are all obtained by using the value of a cell and its neighbouring cells. A slope map gives information about the tangent to the terrain surface at a certain position, while the aspect map refers to the horizontal direction which the slope faces. In other words, if north is taken as the origin for the aspect, a valley stretching west to east has its northern side facing south (thus an aspect value of 180°) and its southern side facing north (thus an aspect value of 0°). The northern side of the valley is therefore the one which generally receives more sunlight (in the northern hemisphere). Other functions allow contour lines to be created from a terrain model or, vice-versa, a continuous raster DTM/DSM to be obtained from contour lines (see interpolation, later on).

Since water always flows down a slope, slope and aspect maps are a prerequisite for hydrological analysis tools. Given a DTM, watersheds and drainage basins are computed automatically. The latter correspond to areas of land where surface water from rain and melting snow or ice converges to a single point, usually the exit of the basin (e.g. a sea, a lake, another river), while the former are the lines separating neighbouring drainage basins.
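The following sketch shows how slope and aspect can be derived from a small raster DTM using finite differences over neighbouring cells; the elevation values and the 10 m cell size are invented, the row index is assumed to increase northwards, and real GIS tools use more refined neighbourhood operators.

```python
import numpy as np

cell_size = 10.0   # metres, assumed grid spacing
dtm = np.array([[100.0, 102.0, 105.0],
                [101.0, 104.0, 108.0],
                [103.0, 107.0, 112.0]])   # invented elevations

# Finite differences of elevation along rows (assumed northwards) and columns (eastwards)
dz_dy, dz_dx = np.gradient(dtm, cell_size)

# Slope: angle between the terrain surface tangent and the horizontal plane
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Aspect: azimuth of the downslope direction, measured clockwise from north
aspect_deg = (np.degrees(np.arctan2(-dz_dx, -dz_dy)) + 360.0) % 360.0

print(np.round(slope_deg, 1))
print(np.round(aspect_deg, 1))
```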
Visibility analyses allow the identification of all areas visible from a given position, or from a given set of positions; in this case viewshed maps are created. Astronomical analyses can also be performed, in that the amount of sunlight reaching a certain position can be computed, as well as the shadowing effect of a hilly/mountainous region.

Figure 3. Qualitative examples of different interpolation algorithms starting from the same input (left). Surface interpolated using an Inverse Distance Weighting interpolator (center) and a Spline with Tension interpolator (right)

Often spatial data are acquired or can be sampled only at certain positions; however, it may be necessary to predict values of a certain variable at unsampled locations within the area of interest. This is, in general terms, what interpolation consists of (Burrough and McDonnell, 1998). Interpolation is to be differentiated from extrapolation, which deals with the prediction of values of a certain variable outside the sampling area. Usually, the goal of interpolation is to convert point data to surface data. All GIS packages offer several interpolation tools, which implement different interpolation algorithms. The fundamental idea behind interpolation is that "near" points are more related (or "similar") than distant points and, therefore, near points generally receive higher weights than far away points. The obtained surface can pass through the measured points or not; on this basis interpolation methods are classified into exact and inexact. In the case of an exact interpolator, the predicted value at a sampled location coincides with the measured value at the same location; otherwise it is the case of an inexact interpolator: predictions are different from the measured values at sampled locations, and their differences are used to give a statement about the model quality. The very large number of existing interpolation models allows different classification criteria to be defined, according to their characteristics.

A distinction can be made between deterministic and geostatistical interpolation methods. The first are based on mathematical functions that calculate the values at unknown locations according either to the degree of similarity or to the degree of smoothing in relation with neighbouring data. Typical examples of this interpolation family are Inverse Distance Weighting (IDW) or radial basis functions (e.g. thin-plate spline, spline with tension). An example is given in Figure 3.

Geostatistical interpolation methods use both mathematical and statistical methods in order to predict values and the probabilistic estimates of the quality of the interpolation. These estimates are obtained using the spatial autocorrelation among the data points.

In addition, interpolation methods can be classified into global or local with regard to whether they use all the available sample points to generate predictions for the whole area of interest, or only a subset of them, respectively. Some algorithms with global behaviour include kriging, polynomial trend analyses, spline interpolation and the finite element method (FEM), and these methods can be used to evaluate and separate trends in the data. In the case of a local approach, the predicted value is obtained instead only from known points within a certain distance, where the concept of distance does not refer strictly to the Euclidean one only, but more generally to neighbourhood. Algorithms belonging to this class include, for example, nearest neighbour and natural neighbour interpolation.
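As a concrete example of a deterministic, local method, the sketch below implements a basic Inverse Distance Weighting predictor for one unsampled location; the sample points, values and power parameter are invented for the illustration.

```python
import numpy as np

# Invented sample points (x, y) and measured values (e.g. elevations)
points = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
values = np.array([100.0, 110.0, 105.0, 120.0])

def idw(target, points, values, power=2.0):
    """Inverse Distance Weighting: nearer samples receive higher weights."""
    d = np.linalg.norm(points - np.asarray(target), axis=1)
    if np.any(d == 0):                 # exact interpolator at sampled locations
        return values[d == 0][0]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

print(round(idw((5.0, 5.0), points, values), 2))   # value predicted between the samples
print(idw((10.0, 10.0), points, values))           # returns the measured 120.0 exactly
```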
Another typical GIS application is represented by network analysis, which is based on graph theory. A graph is a mathematical structure where relations between objects are modelled pairwise. Topologically, a graph consists of nodes (also called vertices) and edges connecting pairs of vertices. In a GIS, it is best implemented in the vector model, where points represent the nodes and lines represent the edges of a graph. Once, for example, a street network is modelled according to these criteria, problems can be solved such as the computation of the shortest path between two nodes, or, given for example a list of cities (nodes) and their pairwise distances (edges), the computation of the shortest route that visits each city exactly once (this is also called the "travelling salesman problem"). An example is given in Figure 4.
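To illustrate the graph-based view of a road network, the following sketch computes the shortest path between two nodes with a plain Dijkstra search; the nodes, edges and distances are invented and stand in for road junctions and segment lengths.

```python
import heapq

# Invented road network: node -> {neighbour: segment length in km}
roads = {
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "C": 1, "D": 5},
    "C": {"A": 2, "B": 1, "D": 8},
    "D": {"B": 5, "C": 8},
}

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: returns (total length, list of nodes on the path)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, length in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + length, neighbour, path + [neighbour]))
    return float("inf"), []

print(shortest_path(roads, "A", "D"))   # (8, ['A', 'C', 'B', 'D'])
```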
5.1.2.4 Data visualisation and map generalisation

One of the fields where geographical information systems have always found great application is cartography, i.e. the process of designing and producing maps to visually represent spatial data and to help explore and understand the results of analyses. Geodata are generally stacked as thematic layers, and each layer is formatted using styles that define the appearance of the data in terms of colours, symbols, etc. Raster and vector data can be represented at the same time or selectively. Moreover, legends, scale bars and north-arrows can also be added.
Figure 4. Examples of network analyses. A road network (upper left), in which 5 possible destinations are represented by black dots, can be represented according to the average speed typical for each roadway (upper right), where decreasing average speeds are represented in dark green, light green, yellow, orange and red, respectively. The shortest route, considering distance, connecting all 5 destinations is depicted in blue (bottom left), while the shortest route, in terms of time, is depicted in violet (bottom right). These examples are based on the Spearfish dataset available for Grass GIS
directly on screen. In the latter case, more information can be added, e.g. as images, graphs, or any other multimedia objects linked to the thematic data being displayed.
is therefore conceived to reduce the complexity of the real world by dropping ancillary and unnecessary details by means of a proper (geo)data selection. Other common map generalisation strategies consist in simplification (e.g. the shapes of significant features are retained but their geometries are altered to reduce complexity and increase visibility), combination (e.g. two adjacent building footprints are merged into a single one, as the gap between them is negligible at the chosen scale), smoothing (e.g. the polyline representing a road is smoothed to appear more natural) or enhancement, whereby some peculiar but significant details are added to the map in order to help readability (e.g. symbols
Every GIS package allows some kind of data exploration. Queries on attribute data can be performed and the results can be visualised as geometric features or, vice versa, a feature (or a group of them) can be selected and the attributes retrieved and presented on screen. Most GIS data can be visualised on screen as standard 2D maps, or in 3D, in that thematic maps (rasters or vectors) are draped on top of an elevation map, whose cell values are used to create a 2.5D surface. According to the viewer capabilities, 3D vector data (sometimes with textures) can also be visualised. In Figure 5 some examples of 2D and 3D data visualisation are presented.
hinting at a particularly steep climb in a hiking map). While the cartographic generalisation process has traditionally been carried out manually by expert cartographers, who were given license to adjust the content of a map according to its purpose using the appropriate strategies, the emergence of GIS has led to automated generalisation and to the need to develop and establish algorithms for the automatic production of maps according to purpose and scale.
A fundamental aspect tied to cartography is the selection and the representation on a map of data in a way that adapts to the scale of the display medium. Not all geographical or cartographic details necessarily need to be preserved, as they might hinder the readability of the map, or they might be unsuitable for the purpose the map has been created for. Map generalisation
Several conceptual models for automated generalisation have been proposed in the course of time (Brassel and
Figure 5. Examples of visualisation of GIS data. A raster image (orthophoto) and a vector dataset (building footprints) are visualised in 2D (left). A 3D visualisation of the extruded buildings draped onto the DTM
Weibel, 1988; McMaster and Shea, 1992; Li, 2006). Two main approaches for automated generalisation exist: one deals with the actual process of generalisation, the other focuses on the representation of data at different scales. The latter is therefore in tight relation with the framework of multi-scale databases, where two main methods have been established. The first consists in a stepwise generalisation, in which each derived dataset is based on the one at the next larger scale; with the second
Services (WMS), Web Feature Services (WFS) or Web Coverage Services (WCS). Today these specifications are defined and maintained by the Open Geospatial Consortium (OGC), an international standards organisation, which encourages development and implementation of open standards for geospatial content and services, GIS data processing and data sharing.
method, the derived datasets at all scales are obtained from a single large-scale one. Automated generalisation is, however, still a subject of current research, as no definitive answers have been given, also due to the continuously expanding number of applications and devices using and displaying heterogeneous geodata at multiple scales.
to be delivered (served) over the Internet in form of georeferenced images. These images correspond to maps generated by a map server, which retrieves data, for example, from a spatial database and sends them to the client application for visualisation. During pan and zoom operations, WMS requests generate map images by means of a variety of raster rendering processes, the most common being generally called resampling, interpolation and down-sampling. WMS is a widely supported open standard for map and GIS data accessed via the Internet and loaded into client-side GIS software; however, its limitations consist mainly in the impossibility for the user to edit or spatially analyse the served images.
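To make the exchange concrete: a WMS client essentially issues an HTTP GetMap request carrying standard OGC parameters (layer, bounding box, image size, format) and receives a rendered map image in return. The following Python sketch uses the requests library; the server URL, layer name and bounding box are placeholders invented for the example, not a real endpoint.

import requests

WMS_URL = "https://example.org/geoserver/wms"  # hypothetical WMS endpoint

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "site:excavation_orthophoto",  # hypothetical layer name
    "CRS": "EPSG:4326",
    "BBOX": "37.66,32.82,37.68,32.84",  # invented extent (lat/lon order in WMS 1.3.0)
    "WIDTH": 800,
    "HEIGHT": 600,
    "FORMAT": "image/png",
}

response = requests.get(WMS_URL, params=params, timeout=30)
with open("map.png", "wb") as f:
    f.write(response.content)  # a static, georeferenced map image served "as is"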
5.1.2.5 Web-based geodata publication
In the past decade, web-based mapping applications have experienced a steady growth in terms of diffusion and popularity. Examples are Google Maps, Bing Maps by Microsoft, or the community-driven OpenStreetMap, which have made available to the public large amounts of spatial data. In general,
A Web Map Service is implemented whenever geodata is
In the case of a Web Feature Service, geodata are instead served encoded in the XML-based GML (Geography Markup Language) format (but other formats like shapefiles can be employed), which allows every single geographic feature to be transmitted independently,
web mapping services facilitate the distribution of generated maps through web browsers, following a classical client-server structure, according to which the user performs a query on certain data (spatial or non-spatial) from his client application, generally running within the web browser, and the results are provided by a remote server to the web browser (generally) over the Internet. This allows the user to explore data dynamically and interactively, as well as to combine different data to create new maps according to certain criteria given by the user.
queried and analysed. Essentially, GML passes data back and forth between a Web Feature Server and a client. While a WMS serves a static map image "as is", a WFS can be thought of as serving the "source code" of the map. A Web Coverage Service is implemented whenever a web-based retrieval of coverages is needed. The term coverage refers to any digital geospatial information representing space- or time-varying phenomena. Therefore, similarly to WMS and WFS service instances, a WCS allows clients to query portions of remotely stored geodata according to certain criteria. However, there are some differences to WMS and WFS. Unlike "static"
Web-based geodata publication can be performed in different ways, although the most common strategies rely upon the adoption of standard protocols such as Web Map
Figure 6. Examples of web-based geodata publication in 3D: by means of virtual globes, as in Google Earth, or in the case of the Heidelberg 3D project (http://www.heidelberg-3d.de)
images served by WMS, WCS provides data (and their metadata) so that they can be interpreted and analysed (and not just visualised). With regards to WFS, which serves only discrete data, the main difference consists in the possibility for the WCS to provide spatial data in form of coverages, i.e. representations of phenomena that relate a spatio-temporal domain to a (potentially)
has been experiencing a steady development in the past decade. Until recently, web mapping applications with some 3D GIS capability could be delivered only by means of plugins, mostly available for the VRML and X3D formats (the latter being the successor of VRML) and able to support 3D vector graphics and virtual reality models in a web browser. These plugins were not widely
multidimensional range of properties.
adopted and suffered from performance issues when applied to large volumes of data, typical of GIS applications. Even if some workarounds and alternative implementations were proposed and adopted, performance limitations continued to hinder their extensive adoption.
If the above-mentioned standard protocols are nowadays definitely established and allow for data publishing in a two-dimensional way, a new spectrum of possibilities is increasingly being offered thanks to the advances in geoinformation technologies like 3D virtual environments, 3D analytical visualisation and 3D data formats (Isikdag and Zlatanova, 2010). Three-dimensional data exploration offers in fact several advantages in terms of representation of geographical data, as well as more effective possibilities to access and analyse data.
Support for 3D from the major commercial web map servers has also been limited, though improving. Today, if 3D support is provided, then the focus is mostly on delivering data with the third dimension and on their 3D visualisation, but rarely on offering support for 3D queries and advanced 3D analyses. Within the OGC's standards framework, support for 3D is similarly and gradually reaching maturity. WFS and WCS services provide support for 3D vectors and coverages, respectively. In addition, the Geography Markup Language (GML) and Keyhole Markup Language (KML) standards support z values, too. However, similar to the
Today, georeferenced data can be visualised using so-called "virtual globes": such technologies permit a three-dimensional exploration of the Earth's surface, on top of which satellite imagery, digital elevation models, as well as other geographic raster and vector data (e.g. textured 3D models of buildings and landmarks) are mapped by direct streaming through the Internet.
commercial counterparts, the focus is still mostly on delivering and visualising data with z values obtained by means of 2D queries.
Several virtual globes exist, both as closed and open source solutions. The most popular closed source technologies are Google Earth (Figure 6, left) and Microsoft Bing Maps 3D. These platforms have made 3D visualisation of geographical features known and accessible to everyone; however, in the open source community similar solutions exist, e.g. NASA World Wind, ossimPlanet, and osgEarth.
Some of the above-mentioned problems might soon be resolved with the introduction of HTML5 and the rapid adoption of modern (i.e. released after 2010) browsers. HTML5 includes standardised support for WebGL, which brings plugin-free and hardware-accelerated 3D to the web, implemented right into the browser. All major web browsers like Safari, Chrome, Firefox, Opera and Internet Explorer (only from version 11, released in 2013) already support it.
When it comes to accessing and visualising 3D geodata in a web browser, the move from desktop GIS to the web
CLARKE, K.C. 1986. Advances in geographic information systems. Computers, environment and urban systems, 10(3-4), pp. 175-184.
Despite the several approaches for the web-based 3D data management and visualisation presented in the past years, there is still a lack of “definitive” solutions, as no unique, reliable, flexible and widely accepted package or implementation is available yet, in particular with regards to large and complex 3D models.
COWEN, D.J. 1988. GIS versus CAD versus DBMS: What Are the Differences?. Photogrammetric Engineering and Remote Sensing 54(11), November 1988, pp. 1551-1555.
What is clear is that, in terms of geodata access, the web has evolved from showing just static documents (text and images) to a more elaborate platform for running complex 3D applications. This paradigm shift has been dramatic and has led to new issues and challenges, which are currently the subject of research. Section XXX of this chapter contains further details and examples concerning this topic.
DATE, C.J. 2003. Introduction to Database Systems. 8th edition. Addison-Wesley. ISBN 0-321-19784-4.

HOBERMAN, S. 2009. Data Modeling Made Simple: A Practical Guide for Business and IT Professionals, 2nd Edition. Technics Publications. ISBN 9780977140060.

ISIKDAG, U.; ZLATANOVA, S. 2010. Interactive modelling of buildings in Google Earth: A 3D tool for Urban Planning. In: (T. Neutens, P. Maeyer, eds.) Developments in 3D Geo-Information Sciences, Springer, pp. 52-70.
References
BRASSEL, K.E.; WEIBEL, R. 1988. A Review and Framework of Automated Map Generalization. Int. Journal of Geographical Information Systems, 2(3), pp. 229-244.
LI, Z. 2006. Algorithmic Foundations of Multi-Scale Spatial Representation. Boca Raton, CRC Press.

MCMASTER, R.B.; SHEA, K.S. 1992. Generalization in Digital Cartography. Washington, DC, Association of American Geographers.
BURROUGH, P.A.; MCDONNELL, R.A. 1998. Principles of geographical information systems. Oxford University Press, Oxford.
MAGUIRE, D.J.; GOODCHILD, M.F.; RHIND, D.W. 1991. Geographical Information Systems, Longman.
6 VIRTUAL REALITY & CYBERARCHAEOLOGY
6.1 VIRTUAL REALITY, CYBERARCHAEOLOGY, TELEIMMERSIVE ARCHAEOLOGY Maurizio FORTE
environment: the mind embodied in the environment. Knowledge created by enaction is constructed on motor skills (real or virtual), which in virtual worlds can derive from gestures, haptic interfaces, 1st or 3rd person navigation, multisensorial and multimodal immersion and so on.
6.1.1 VIRTUAL REALITIES
We live in a cyber era: social networks, virtual communities, human avatars, 3D worlds, digital applications, immersive and collaborative games are able to change our perception of the world and, first of all, the capacity to record, share and transmit information. Terabytes, petabytes, exabytes and zettabytes of digital data are constructing the human knowledge of future societies and changing the access to the past. If the human knowledge
Digital interactive activities used in our daily life play an essential role in managing and distributing information at personal and social level. We could say that humans
is rapidly migrating into digital domains and virtual worlds, what happens to the past? Can we imagine the interpretation process of the past as a digital hermeneutic circle (fig. 1)? The idea that a digital simulation process one day could remake the past has stimulated dreams and fantasies of many archaeologists. We know that this is impossible, but new cybernetic ways to approach the interpretation process in archaeology are particularly challenging since they open multiple perspectives of research otherwise not identifiable.
typically interact with different "virtual realities", whether by personal choice, by necessity, or given the fact that there is a consistent amount of information digitally born and available only in digital format. In the 90s many writers, artists and scholars (including the author of this article; Forte 2000) discussed at length the definition of virtual reality (VR, immersive, semi-immersive, off line, etc.), mostly in relation with archaeology and cultural heritage. Nowadays the term is quite blurred, hybrid and elusive: virtual realities represent many social diversifications of the digital Real and are an essential part of human life. It is possible to recognize and classify them by technology, brand, purpose, functionality; but all of them are VR, open domains for users, players and developers. The evolution of software and digital information in cloud computing is a good example of distributed virtual realities where all the performance runs online in a network and does not require end-user knowledge of the physical location and
In digital archaeology the "cybernetic" factor is measurable in terms of interaction and feedback, in a word a "trigger" allowing the creation and exploration of virtual worlds. The "trigger" can be considered a metaphor of our embodiment in the cyber world: clicking, triggering, interacting is the way to involve our minds in the digital universe. Any environment, digital or real, could be studied in a similar perceptual way (of course with some limitations): analyzing the relations between humans and ecosystems. A remarkable factor in the evolution of cyber worlds is identifiable in the informational capacities of digital worlds to generate new knowledge (fig. 2), an autopoiesis1 (Maturana and Varela 1980) of models, data and metadata, which co-evolve in the digital environment. Data and models generate new data and meanings by interaction and, for example, by collaborative activities. The core of this process is the enaction (Maturana and Varela 1980), as information achieved by perception-action interaction with the
configuration of the system that delivers the services. It is likely unnecessary to describe VR at this point, because there are too many VRs and all of them follow different principles of embodiment and digital engagement: everything could be VR. In the past decades, for example, VR was mainly recognizable by the degree of immersion and the real time interaction (at least 30/60 frames per second), but nowadays the majority of applications are in real time and full immersion is just an option (and sometimes not really relevant). What really changes our capacities of digital/virtual perception is the experience, a cultural presence in a situated environment
1 Capacity to generate new meanings.
Figure 1. Digital Hermeneutic Circle
Figure 2. Domains of digital knowledge (Champion 2011)

According to Subhasish DasGupta (Dasgupta 2006), cultural presence can be defined as “a feeling in a virtual environment that people with a different cultural perspective occupy or have occupied that virtual environment as a place. Such a definition suggests cultural presence is not just a feeling of “being there” but of being in a “there and then”, not the cultural rules of the “here and now”. To have a sense of a cultural presence when one visits a real site requires the suggestion of social agency, the feeling that what one is visiting is an artifact, created and modified by conscious
reconstruction with a wrong code can increase the distance between present and past, disorienting the observer or the interactor and making the models less “authentic”. The issue of the authenticity of virtual worlds is quite complex and it is strongly linked with our cultural presence, knowledge and perception of the past. If for instance we perceive the virtual model as fake or too artificial, it is because it does not match our cultural presence. In theory people with different cultural backgrounds can have a different cultural presence, with a diverse perception of the past, so that also the vision of
human intention” (Dasgupta 2006). Cultural presence is the cybernetic code, the map necessary for interpreting the past in relation with space and time (for Gregory Bateson the “map is not the territory”; Bateson 1972). In the second cybernetics the study of codes was aimed at understanding the relation between mind and information, between objects and environment. This ecological approach is helpful also in the evaluation of a virtual reconstruction, since a cyber world has to be considered a digital environment with related rules, affordances and features. Ultimately we have to study these relations for a correct comprehension of a virtual reconstruction or simulation. In fact a virtual
the past becomes extremely relative. This argument unfortunately risks pushing the interpretation to a certain level of relativism, because of all the components involved in the interpretation, simulation and reconstruction. For instance, the sense of photorealism in a model could be more convincing than a “scientific” non-realistic reconstruction because of the aesthetic engagement or the embodiment of the observer (for example in the case of interaction with avatars and other artificial organisms). Cultural presences, experience, perception and narrative of the digital space create the hermeneutic circle of a cyber environment. The
level of embodiment of any application can determine the amount of information acquired by a user or an observer during the exploration of the digital space. For example, a third-person walkthrough across an empty space, without adequate feedback from the system, cannot produce a high level of embodiment, since the engagement is very low. Human presence in virtual spaces also determines the scale of the application and other spatial relations.
3D devices such as the Kinect©, used as interfaces, open new perspectives in the domain of cyber/haptic worlds and simulation environments. The interaction does not come from a mouse, trackball, data glove or head-mounted display, but simply from human gestures. In other words all the interaction is based on natural gestures and not on a device: the camera and the software recognize an action and this is immediately operative within the digital world. This kind of kinesthetic technology is able to cancel the computational frame separating users and software, interaction and feedback; in short the focus is not on the display of the computer but on 3D software interactions. One day the interpenetration of real and virtual will create a sort of hybrid reality able to combine real and virtual objects in the same environment.
If we analyze for example the first virtual reconstructions in archaeology in the 90s, they reproduced mainly empty architectural spaces, without any further social implication or visible life in the space: they were just models. The past was represented as a snapshot of 3D artificial models deprived of a multivocal and dynamic sense of time. Yet, as Dasgupta writes: “So in this sense, cultural presence is a perspective of a past culture to a user, a perspective normally only deduced by trained archaeologists and anthropologists from material remains of fossils, pottery shards, ruins, and so forth” (Dasgupta, p. 97). Actually cultural presence should not be a perspective deduced only by archaeologists and anthropologists, but it should be transparent and multivocal.
Understanding the social and technological context of these virtual realities is a necessary premise for introducing cyberarchaeology and the problem of the digital reconstruction of the past.

6.1.2 CYBERARCHAEOLOGY
In a recent book, “Cyberarchaeology” (Forte 2010), I have discussed the term in the light of the last two decades of theory and practice of digital archaeology. More specifically, in the 90s “Virtual Archaeology” (Forte 1997) defined the reconstructive process for communication and interpretation of the past. This digital archaeology was mainly “reconstructive” because of a deep involvement of computer graphics and high-resolution renderings in the generation of virtual worlds. The first 3D models of Rome, Tenochtitlan, Beijing, Catalhuyuk were generally based on evocative reconstructions rather than on a meticulous process of documentation, validation and scientific analysis (Forte 1997). The main outcome was a static, photorealistic model, displayed on a screen or in a video but not interactive (Barceló, Forte et al., 2000). The photorealism of the scene was the core of the process, with a special emphasis on computer graphics and rendering rather than on scene interaction. It is interesting to note that an extreme photorealism was a way to validate the models as “authentic”, even if the term can be disputable in the domain of virtuality (Bentkowska-Kafel, Denard et al., 2011).
If in the 80s and 90s the term Virtual Reality was very common and identified a very specific, advanced and new digital technology (Forte 2000), now it is more appropriate to classify this domain as “virtual realities”, where interaction is the core but the modalities of engagement, embodiment, interfaces and devices are diverse and multitasking. In a retrospective view, VR could be considered a missing revolution, in the sense that it did not have a relevant social and technological impact, with very few outstanding results in the last two decades. Internet for example was a big revolution, not VR. Nowadays an interesting example is represented by 3D games: very sophisticated virtual environments, with a superb graphic capacity to engage players in a continuous participatory and co-evolving interaction, collaborative communication and digital storytelling. They can expand the digital territory they occupy according to participatory interaction. The ultimate scope of a game in fact is the creation of a digital land to explore and settle. In the game context the role of simple users is transformed into that of “active players”: the players themselves contribute
In addition, every model was static and without any interrelation with human activities or social behaviors. For
to the construction and evolution of the game. These new trends of co-active embodiment and engagement have radically changed the traditional definition of virtual environment/virtual reality as a visualization space peopled by predetermined models and actions. The game is an open collaborative performance with specific goals, roles, communication styles and progressive levels of engagement. The narrative of the game can produce the highest level of engagement, a “gamification” of the user (Kapp 2012).
example, in the 90s the virtual models of Rome and Pompeii were just empty architectural spaces without any trace of human activity (Cameron and Kenderdine 2010): a sort of 3D temporal snapshot of the main buildings of the city. At that time digital reconstructions paid scarce attention to reproducing dynamic models and to including human life or activities in virtual worlds. Virtual worlds were magnificent, realistic and empty digital spaces. It is interesting to point out that all these reconstructions were made by collecting and translating archaeological data from analogue format to digital: for example from
Serious games, cyber games, haptic systems, are changing the rules of engagement: the use for example of
paper maps, drawings, iconographic comparisons, books and so on. Here the process of reconstruction mediates from different data sources of different formats and shapes. At the dawning of virtual archaeology all the applications were model-centered and without a consistent validation process able to prove the result of the reconstruction. The effect of “reconstructing the past” was dominant and very attractive: several corporations and international companies invested in the 90s in the creation of digital archaeological models, but for most of them the work was focused much more on “advertising the past” rather than reconstructing it. In addition, at the beginning virtual archaeology was not easily accepted in the academic world as a scientific field but was considered mainly a tool for a didactic and spectacular communication of the past. Not enough attention was given to new research questions coming up from the virtual reconstruction process or to the importance of new software and devices in archaeological research. In this climate virtual archaeology was looking for great effects, digital dreams able to open new perspectives in the interpretation and communication process. Most of the first applications were more technologically oriented than aimed at explaining the multidisciplinary effort of interpretation behind the graphic scene. The general outcome of the first digital revolution of virtual archaeology was a certain skepticism. A big issue was to recognize in such effective and astonishing models a precise, transparent and validated reconstruction of the past: but which past? The scientific evaluation of many virtual reconstructions is not possible because of the lack of transparency in the workflow of the data used. Moreover the majority of graphic reconstructions seemed too artificial, with graphic renderings more oriented to show the capabilities of the software than a correct interpretation of the data.
beyond a textual description. Visual interactions and graphic simulations stimulate a deeper perceptual approach to the analysis of data. For example, a very detailed textual description of a site, a monument or an artifact can suggest multiple hypotheses, but none of them translated into a visual code. In addition the archaeological language is often cryptic, difficult and not easily understandable. Virtual Archaeology started to use complex visual codes able to create a specific digital grammar and to communicate much more information than a traditional input. Unfortunately, this great potential was not systematically exploited at the beginning, partly because of the low involvement of the communities of archaeologists at an interdisciplinary level (and their generally limited digital skills), and partly because of the difficulty of managing such diverse information sources (most of them analogue) in a single digital environment. Below is a schematic distinction between the digital workflows generated by virtual archaeology and by cyberarchaeology:

Virtual Archaeology workflow:
Data capturing (analog)
Data processing (analog)
Digitalization from analog sources (analog-digital)
Digital outcome: 3D static or pre-registered rendering

CyberArchaeology workflow:

Data capturing (digital)
Data processing (digital)
Digital input (from digital to digital)
Digital outcome: virtual reality and interactive environments (enactive process)
Even with several limitations and issues, however, the first digital “big bang” in virtual archaeology represented the beginning of a new era for the methodology of
It is important to consider that cyberarchaeology elaborates data already born-digital: for example from laser scanners, remote sensing, digital photogrammetry, computer vision, high-resolution or stereo cameras. “Cyber Archaeology can represent today a research path of simulation and communication, whose ecological-cybernetic relations organism-environment and informative-communicative feedback constitute the core. The cyber process creates affordances and through them we are able to generate virtual worlds by interactions and inter-connections” (Forte 2010). The workflow of data generated by cyber-archaeology is totally digital and can make reversible the interpretation and reconstruction
research in archaeology (Forte 2009). With some constraints, a virtual reconstruction is actually potentially able to advance different research questions and hypotheses, or can direct the researcher to try unexplored ways of interpretation and communication. However, this process works only if the virtual reconstruction is the product of a complex digital workflow where the interpretation is the result of a multivocal scientific analysis (data entry, documentation, simulation, comparative studies, metadata). Questions like – how, how much, which material, textures, structures, which phase, etc. – stimulate new and more advanced discussions about the interpretation because they push the researchers to go
process: from the fieldwork to virtual realities. More in detail, cyberarchaeology elaborates the spatial data during fieldwork, or generally in any bottom-up phase, and reprocesses them in simulation environments where it is possible to compare bottom-up and top-down interpretation phases. The integration of bottom-up (documentation) and top-down (reconstruction) hermeneutic phases is the necessary approach for the digital interpretation within the same spatial domain. In short the cyber process involves a long digital workflow, which crosses all the data in different formulations and simulations in a continuous feedback between existing information (data input), produced information (for
In a recent article (Forte 2010) I have named this period the “wow era”, because the excitement about the production of models was in many cases much bigger than the adequate scientific and cultural discussion. This was and still is a “side effect” of the use of digital technologies in archaeology: a strong technological determinism where the technology is the core and the basis of any application.
example reconstructed models) and potential information (what is generated by simulation). Potentiality of the information is the core of the cyber process: different potential interpretations coexist in the same virtual environment and the simulation itself is able to create new and possibly more advanced interpretations. The key is the capacity to generate comparable and interactive models in sharable domains integrating bottom-up and top-down data. In fact during a virtual simulation it is possible to change and improve several factors, and different operators/users can obtain diverse interpretations and ways to proceed. Cyberarchaeology does not look for “the Interpretation” but for achieving possible consistent interpretations and research questions: “how” is more important than “what” according to a digital hermeneutic approach.
the past cannot be reconstructed but simulated. Cyberarchaeology is aimed at the simulation of the past and not at its reconstruction: the simulation is the core of the process. For this reason it is better to think about a potential past, “a co-evolving subject in the human evolution generated by cyber-interaction between worlds” (Forte 2010). In short cyberarchaeology studies the process of simulation of the past and its relations with present societies. Is this a revolutionary change in theoretical archaeology? Perhaps a new methodological phase after processualism and post-processualism? Is cyber archaeology a change in methodology, a change in paradigm, or a reflection of a broader change? (Zubrow 2010). According to Ezra Zubrow (Zubrow 2011) both processual and post-processual are now integrated into something new. Cyber archaeology bridges the gap between “scientific” and “interpretational” archaeology for it provides testable in the sense of adequacy material representations of either “interpretations” or “scientific hypotheses or discoveries.” (Zubrow 2010). And further: “if postprocessual archaeology will continue to exist it will exist through cyber archaeology. It is in cyberarchaeology where the interesting issues of cognition, memory, individual difference, education etc are actually being researched and actually being used.” (Zubrow 2011).
For example, in the case of the digital project of the Roman Villa of Livia (Forte 2007) it was possible to create a complex hermeneutic circle, starting with the 3D documentation of the site by laser scanning and then proceeding with the potential reconstruction/simulation of different phases of the monument, integrated also with the reconstruction of some social activities displayed by the use of digital avatars (Livia, Augustus and other characters). In this project NPCs (non-player characters) and PCs (player characters) have been used in order to populate the virtual world with actions, events and behaviors. NPCs and PCs interact with each other, stimulating a dialogue between users and digital environments and designing new digital affordances (a digital affordance identifies the properties of a virtual object).
6.1.3 TELEIMMERSIVE ARCHAEOLOGY

6.1.3.1 Introduction
One of the key problems in archaeology is that the production of data from the fieldwork to the publication, communication and transmission is unbalanced: no matter if data are digital or not, a low percentage of them is used and distributed. In the long pipeline involving digging, data recording, documentation, archiving and publication there is a relevant dispersion of information, and the interpretation process is too much influenced by authorships and scholarships and not by a real multivocal critical perspective. The archaeologist alone arguing in front of his/her data is not just a stereotype: the circulation of data before the publication is very limited and it does not involve a deep and interactive analysis with all the information available (from the fieldwork or other sources). In short it is difficult to make the entire pipeline of archaeological data available and transparent and to share them adequately in the right context. For example an artifact or a stratigraphic deposit could be
The Virtual Villa of Livia is a good example of the use of digital affordances: any virtual model is accompanied by multiple properties that describe and validate its creation. For example frescos and paintings show which iconographic comparisons and data sources were used for the reconstruction; in the case of architectural elements the affordances display maps and reliefs of other sites and monuments studied and analyzed for validating the process of reconstruction. The more potential simulations there are, the more it is possible to have multiple interpretations. The coexistence of different interpretations is one of the key features of the digital domain of virtual realities and in this way it is possible to create new knowledge. How can this knowledge be distributed through virtual realities, and which virtual realities? (fig. 2).
interpreted otherwise if it is possible to compare in 3D its contextualization on site and its original functionality and properties. Documentation and interpretation are often separated and not overlapping in the same spatial domain. In fact the usual result is that the interpretation is segmented in different domains, often not mutually interacting, and with enormous difficulties in making the research a collaborative work. In archaeology collaborative activities start in the field and sometimes continue in the laboratory, but with limited capacities of data integration, data sharing and reversibility of the interpretation process. More specifically in digital archaeology it is difficult to integrate for example 2D and
How is it possible to approach the problem of authenticity in a process of virtual reconstruction? How is it possible to manage the link between data in situ and the reconstruction of the original past? The validation of a digital process can show the consistency of the simulation/reconstruction: in other words the digital workflow has to be transparent (Bentkowska-Kafel, Denard et al., 2011). The most important distinction between virtual and cyber archaeology is in the relation data entry – feedback/simulation: the interactive factor. From this point of view
Figure 3. 3D-Digging Project at Çatalhöyük

3D data, shape files and 3D models, old and new data. It is also very difficult to mitigate the destructive impact of archaeological digging and to make reversible the virtual
challenge, teamwork is essential, as well as the quality and amount of information to study and test. The creation of very advanced digital labs is not easy in the
recomposition of layers and units, after the excavation.
humanities and, in addition, it is very expensive and time consuming. Working in isolation does not pay off: it is important to work in a network, to share resources and, first of all, to multiply the faculty of interpretation worldwide.
6.1.3.2 TeleArch: a Collaborative Approach
Discussions and arguments around virtual and cyberarchaeology should help to understand the controversial relationships between digital technologies and archaeology: risks, trends, potentialities, problems; but what comes next? What comes after we have digitally recorded and simulated archaeological excavations and reconstructed hypothetical models of the past, integrating documentation and interpretation processes? How can we imagine the future after virtual-cyber archaeology?
Teleimmersive Archaeology can be considered an advanced evolution of 3D visualization and simulation in archaeology: not a simple visualization tool but a virtual collaborative space for research, teaching and education (fig. 3); a network of virtual labs and models able to generate and to transmit virtual knowledge. It is named “Teleimmersive” because it can involve the use of stereo cameras or kinect haptic systems in order to represent the users as human avatars and to visualize 3D models in immersive remote participatory sessions. Teleimmersive Archaeology tries to integrate different data sources and to provide real-time interaction tools for the remote collaboration of geographically distributed scholars.
Collaborative research represents nowadays one of the most important challenges in any scientific field. Minds at work simultaneously, with continuous feedback and interaction and able to share data in real time, can co-create new knowledge and come up with different research perspectives. Networking and collaborative activities can change the methodological asset of archaeological research and communication. The intensive interactive use of 3D models in archaeology at different levels of immersion has not been monitored and analyzed: actually we do not know how much impact this can have on the generation of new digital and unexplored hermeneutic circles.
I would consider Teleimmersive Archaeology a simulation tool for the interpretation and communication of archaeological data. The tools allow for data decimation, analysis, visualization, archiving, and contextualization of any 3D dbase in a collaborative space. This kind of activity can start in the field during the excavation and can continue in the lab in the phase of post-processing and interpretation. Fieldwork archaeologists for example could discuss with experts of pottery, geoarchaeologists, physical anthropologists, conservation experts, geophysicists and so on: the interpretation of an object, a site or a landscape is always the result of teamwork. At the end the
Any significant progress, any new discovery, can depend on the capacity of scientific communities to share their knowledge and to analyze the state of the art of a specific research topic in a very effective manner. In this
Figure 4. Teleimmersion System in Archaeology (UC Merced, UC Berkeley)
Figure 5. Video capturing system for teleimmersive archaeology
most important outcome in Teleimmersive archaeology is the kinesthetic learning. In other words the transmission of knowledge comes from the interactive embodied activity in virtual environments and through virtual models, while traditional learning comes through linear systems, such as books, texts, reports.
and Kurillo 2010) aimed at creating a 3D immersive collaborative environment for research and education in archaeology, named TeleArch (Teleimmersive Archaeology, figs. 4-6). TeleArch is a teleimmersive system able to connect remote users in a 3D cyberspace by stereo cameras, kinect cameras and motion tracking sensors (fig. 4). The system is able to provide: immersive visualization, data integration, real-time interaction and remote presence. The software is based on the OpenGL-based open source Vrui VR Toolkit developed at the University of California, Davis. Last tests say that it allows a real time
6.1.3.3 The System
In 2010 UC Merced (M. Forte) and UC Berkeley (G. Kurillo, R. Bajcsy) started a new research project (Forte
interface and content (figs. 3, 6). As standalone it can elaborate all the models in 3D including GIS layers, metadata and dbases (fig. 7). The digital workflow of TeleArch is able to integrate all the data in 3D from the fieldwork to the collaborative system with the following sequence:
Figure 6. A Teleimmersive work session
rendering of 1 million triangles with a frame rate of 60 FPS (frames per second) on an NVidia GeForce GTX 8800 (typically 20/30 objects per scene). In the virtual environment, users can load, delete, scale, move objects or attach them to different parent nodes. 3D layers combine several 3D objects that share geometrical and contextual properties but are used as a single entity in the environment.
Archaeological data can be recorded in 3D format by laser scanners, digital photogrammetry, computer vision, image modeling.
The 3D models have to be decimated and optimized for real time simulations (see the sketch after this list).
The 3D models have to be exported in obj format.
They are optimized in Meshlab and uploaded to TeleArch.
Ultimately, different geographically distributed users start to work simultaneously through a 3D network.
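In the TeleArch workflow the decimation and OBJ export listed above are carried out in Meshlab; purely to illustrate what that step involves, here is an equivalent sketch using the Open3D Python library instead (the file names and the target triangle count are invented for the example).

import open3d as o3d

# Load a dense scan mesh (hypothetical file exported from the scanner software).
mesh = o3d.io.read_triangle_mesh("building_89_scan.ply")

# Quadric-error decimation: reduce the triangle count so that the model
# can be rendered in real time in the collaborative environment.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=100000)
simplified.compute_vertex_normals()

# Export to OBJ, the format expected by the collaborative system.
o3d.io.write_triangle_mesh("building_89_decimated.obj", simplified)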
6.1.3.4 3D Interaction
TeleArch supports different kinds of 3D interaction: human avatars (1st person interaction), 3rd person and standalone. In 1st person operability the user can interact as in the real world within the space mapped by the stereo cameras: he/she operates like a human avatar, since the system reconstructs the body motion in real time (figs. 5-6). In this case users can see each other using natural interfaces and body language. In 3rd person the user interacts collaboratively with data and models but without stereo cameras. Ultimately TeleArch works also as standalone software, so that the user can interact individually with models and data in stereo vision.
The framework supports the Meshlab project format (ALN), which defines object filenames and their relative geometric relationship. Using a slider in the properties dialog, one can easily uncover different stratigraphic layers associated with the corresponding units. TeleArch works as network or standalone software. In a network it can develop all the properties of the TeleImmersion, with the ability to connect remote users sharing the same
Figure 7. Building 77 at Çatalhöyük: the teleimmersive session shows the spatial integration of shape files (layers, units and artifacts) in the 3D model recorded by laser scanning
The following tools are currently implemented:
navigation tools: for navigation through 3D space;
graphic user interface tools: for interaction with menus and other on-screen objects;
measurement tools: for acquiring object geometry (e.g. dimensional and angular measurements);
flashlight tool: for relighting parts of the 3D scene or pointing at salient features;
annotation and pointing tools: for marking and communicating important or interesting features to other remote users;
draggers: for picking up, moving and rotating objects;
screen locators: for rendering mode manipulation (e.g. mesh, texture, point cloud);
object selectors: for selecting objects to perform different actions related to the local functionality, such as changing object rendering style (e.g. texture, no texture, mesh only), retrieving object metadata, focusing the current view to the object principal plane, etc. (Forte and Kurillo 2010).

Figure 8. 3D Interaction with Wii in the teleimmersive system: building 77, Çatalhöyük
6.1.4 CASE STUDY: 3D ARCHAEOLOGY AT CATALHUYUK

6.1.4.1 Introduction
The project “3D Archaeology at Catalhuyuk” (fig. 3) started in 2010 thanks to the collaboration with Stanford University (Archaeological Center) and UC Merced, with the scope to record, document (with different digital technologies) and visualize in virtual reality all the phases of archaeological excavation. Phase I (2010) of the project was mainly oriented to test different technologies during the excavation (time of flight and optical laser scanners). In phase II (2011) the UC Merced team started from scratch the excavation of a Neolithic house (building 89), recording all the layers by time of phase scanners (fig. 9), optical scanners (fig. 12) and computer vision techniques (image modeling, figs. 10-11). In phase III (2012) the plan is to document the entire site (East Mound) with the integration of different technologies (scanners, computer vision, stereo cameras) and to continue the digital recording of the Neolithic house, focusing on the micro-deposits which backfill the floor.
Figure 9. Clouds of points by time of phase scanner (Trimble FX) at Çatalhöyük: building 77
percentage of the entire area has been excavated. The digital archaeological project aims to virtually reproduce the entire archaeological process of excavation using 3D technologies (laser scanners, 3D photogrammetry) on site and 3D Virtual Reality of the deposits of Catalhoyuk as they are excavated (fig. 8). In this way it is possible to make the excavation process virtually reversible, reproducing in the lab all the phases of digging, layer-by-layer, unit-by-unit (fig. 7). Unlike traditional 2D
The final aim is to virtually musealize the entire archaeological site for the local visitor center and for TeleArch, the Teleimmersive system for archaeology at UC Merced and UC Berkeley (fig. 6).
technology, 3D reconstruction of deposits allows the archaeologist to develop a more complex understanding and analysis of the deposits and artifacts excavated. Digging is a destructive technique: how can we re-analyze and interpret what we excavate? The interpretation phase uses two approaches. One approach involves the interpretation and documentation during the excavation; the other approach is related to the reconstruction process after the excavation. Both phases are typically separate and not contextualized in one single research workflow. The documentation process of excavation is segmented in different reports, pictures, meta-data and archives; the interpretation comes from
Çatalhöyük is considered for many reasons ideal for addressing complex research methodological questions. More than thirty years of studies, archaeological fieldwork and research have been devoted to investigating the ideology, religion, social status, architectural structures, art, environment and landscape of the site, producing several publications, books and other media (http://www.catalhoyuk.com/), but just a small
The site rapidly became famous internationally due to the large size and dense occupation of the settlement, as well as the spectacular wall paintings and other art that was uncovered inside the houses. Another distinguishing feature of Çatalhöyük was the nature of the houses: they are complex units involving ritual and domestic activities in the same space. In particular, the diachronic architectural development of the site is still very controversial and needs more studies and analyses in relation with the landscape and the symbolic, ritual and social use of the buildings. Since February 2009 the site has been inscribed in the tentative list of UNESCO World Heritage Sites. The specific critical conditions of the houses (mud-brick dwellings, earth floors, artifacts, etc.) and the difficulties of preserving all the structures in situ urge the digital documentation of all the structures before they collapse or disappear.
Figure 10. Image modeling of the building 89 at Çatalhöyük
6.1.4.3 Research Questions
The project can open new perspectives at the level of methodology of research in archaeology, generating a more advanced digital pipeline from the fieldwork to a more holistic interpretation process in the use of integrated spatial datasets in three dimensions. More specifically, it should be able to define a new digital hermeneutics of the archaeological research and new research questions. One of the key points of the project in fact is the migration of 3D data from the digital documentation in the field to a simulation environment and, one day, to an installation in a public visitor center.
Çatalhöyük lies on the Konya plain on the southern edge
In fact, in this case the 3D documentation of the new excavation areas could be linked and georeferenced with layers and datasets recorded in the past, reconstructing at the end a complete 3D map of the site and of the entire stratigraphic context (figs. 12-13). In that way, it will be possible to redesign the relative chronology of the site and the several phases of settlement. In fact the reconstruction of the Neolithic site across thousands of years of continuous occupation and use is still very difficult and controversial. In addition, the 3D recontextualization of artifacts in the virtual excavation is likewise important for the interpretation of different areas of any single house or for studying possible social activities perpetuated within the site.
of the Anatolian Plateau at an elevation of just over 1000 m above sea level. The site is made up of two mounds: Çatalhöyük East and Çatalhöyük West (Hodder 2006). Çatalhöyük East consists of Neolithic deposits dating from 7400-6000 B.C., while Çatalhöyük West is almost exclusively Chalcolithic (6000-5500 B.C.). Çatalhöyük was discovered in the 1950s by the British archaeologist James Mellaart (Hodder 2000) and it was the largest known Neolithic site in the Near East at that time. From 1993 up to today the site has been excavated by Ian Hodder with the collaboration of several international teams, experimenting with multivocality and reflexivity methods in archaeology (Hodder 2000).
Other important research questions regard the sequence and re-composition of wall art paintings and, in general, the decoration of buildings with scenes of social life, symbols or geometrical shapes. For example, in building 77 it was possible to recompose the entire sequence of paintings after four years of excavation, but this entire sequence is not visible on site anymore, since the paintings are very fragile and cannot be preserved in situ (figs. 13-16). In short, the only way to study them is in a virtual environment with all the links to their metadata and stratigraphic contexts (figs. 7, 12, 13).
Figure 11. Image modeling of the building 77 at Çatalhöyük

comparative studies and analyses of all the documentation recorded in different files and archives. TeleArch aims at the integration of both phases of documentation (bottom-up) and reconstruction (top-down) in the same session of work and interpretation.

6.1.4.2 The site
Figure 12. 3D layers and microstratigraphy in the teleimmersive system (accuracy < 1 mm): midden layers at Çatalhöyük. This area was recorded by optical scanner (Minolta 910)
Figure 15. Building 77 after the removal of the painted calf’s head. The 3D recording by image modeling allows to reconstruct the entire sequence of decoration (by different layers)
Figure 13. Virtual stratigraphy of the building 89, Çatalhöyük: all the layers recorded by time of phase laser scanner (Trimble FX)
Figure 16. Building 77: all the 3D layers with paintings visualized in transparency (processed in Meshlab)
Figure 14. Building 77 reconstructed by image modeling (Photoscan). In detail, hand wall painting and painted calf's head above the niche
The combined use of the 3D stereo camera and the stereo video projector has allowed the visualization of 3D archaeological data and models day by day, stimulating a debate on site about the possible interpretations of buildings, objects and stratigraphy.
6.1.4.4 Collaborative Research at Catalhuyuk
Since the system is still a prototype it is too early for a significant analysis of the performance and for discussing deeply the first results. Most of the time was invested in the implementation, testing, optimization of data and the creation of a new beta version of the software running also as standalone version. A bottle-neck is the number of users/operators the system can involve simultaneously: current experiments were tested with the connection of two campuses. The expandability of the system is crucial for a long-term collaborative research and also for getting adequate results in terms of interpretation and validation of models and digital processes. In fact in Teleimmersive archaeology the interpretation is the result of an embodied participatory activity engaging multiple users/actors in real time interaction in the same space. The participation of human avatars in teleimmersion has the scope to augment the embodiment of the operators, to use natural interfaces during the interaction and to perceive all the models on scale. This cyberspace augments then the possibilities to interpret, measure, analyze, compare, illuminate, and simulate digital models according to different research perspectives while sharing models and data in the same space.
With the time of flight scanner, Buildings 80, 77, 96 and all the general areas of excavation in the North and South shelters were recorded and documented. With the optical scanner Nextengine, 35 objects were recorded in 3D involving different categories: figurines, ceramics and stone. Finally all these models were exported for 3D sessions in TeleArch.

6.1.4.6 Fieldwork 2011
The experience acquired in 2010 made it possible to redirect the strategy of data recording in 2011. In fact in 2010 timing was a very critical factor in laser scanning during the archaeological excavation, and the use of optical scanners (Minolta 910) was not appropriate for capturing stratigraphy and layers (optical scanners have trouble working outdoors). In addition the accuracy produced by the use of the Minolta scanner, even if very valuable, was even excessive (a range of a few microns) for the representation of stratigraphic layers (fig. 12). The Minolta 910 in fact, as many other optical scanners, does not work properly in the sunlight, and because of that its use in 2010 was
In the case of Catalhuyuk, the Teleimmersive system is aimed to recreate virtually all the archaeological process of excavation. Therefore all the data are recorded srcinally by time-of-flight and optical scanners and then spatially linked with 3D dbases, alphanumeric and GIS data. Two fieldwork seasons, 2010 and 2011 were scaled and implemented for TeleArch with all the 3D layers and stratigraphies integrated with dbases and GIS data (figs. 7, 8, 13). All the 3D models have to be aligned and scaled first in Meshlab and then exported in TeleArch.
limited under smallmodels surfaceproduced of 1 sq mtinunder dark very tent. However the afinal 2010 awere interesting because of the very detailed features represented in the sequence of stratigraphic units and in relation with the sequence of midden layers. Therefore in 2011 we have opted for an integrated system able to shorten dramatically the phases of post-processing and to allow a daily reconstruction in 3D of all the trench of excavation. It is important in fact to highlight that timing is a crucial factor in relation with the daily need to discuss the results of 3D elaboration and the strategy of excavation.
6.1.4.5 Fieldwork 2010
The fieldwork activity had the twofold scope of excavating a multistratified deposit such as a “midden area” (East mound, Building 86, Space 344, 329, 445) and to document all the excavation by 3D laser scanners, computer vision and 3D stereoscopy. For this scope we have used a triangulation scanner for the microstratigraphy (Minolta 910), an optical scanner for the artifacts (Nextengine) and a time of flight/phase scanner for the buildings and the largest areas of
Differently from 2010, we have adopted two new systems working simultaneously: a new time of phase scanner (Trimble FX) and a combination of camera based software of computer vision and image modeling (Photoscan, stereoscan, Meshlab). The Trimble FX is a
excavation (Trimble CX). The use of different technologies was necessary for applying a multiscale approach to the documentation process. In fact, scanners at different accuracy are able to produce different kinds of 3D datasets with various levels of accuracy. More specifically a special procedure was adopted for the data recording of the stratigraphic units: every single phase and surface of excavation was recorded by the triangulation scanner after cleaning and the traditional manual archaeological drawing. The contemporaneous use of both methodologies was fundamental in order to overlap the logic units of the stratigraphic sequence (and related perimeter) on their 3D models.
time of phase shift able to generate 216000 pt/sec and with a 360 x 270* field of view; it is a very fast and effective scanner with the capacity to generate meshes during the data recording, so that to save time in the phase of post processing. The strategy in the documentation process was to record simultaneously all the layers/units in the sequence of excavation using laser scanning and computer vision. At the end of the season we have generated 8 different models of the phases of excavation by computer vision (3D camera image modeling) and as well by laser scanning. The scheme below shows the principal features and differences between the two systems; laser scanning requires a longer 126
VIRTUAL REALITY & CYBERARCHAEOLOGY
Table 1.
post-processing but it produces higher quality of data. Computer vision allows to have immediate results and to follow the excavation process in 3D day by day (but not with the same geometrical evidence of the laser scanner). The digital workflow used during the excavation was the following:
integrated with all the 2D maps, GIS layers and archeological data.
Digital photo-recording for computer vision
Digital photo recording for laser scanning
Ultimately and differently from 2010, the post processing phase was very quick and effective for laser scanning and computer vision. In fact the models recorded with the above mentioned technologies were ready and available for a 3D visualization a few hours after data capturing. The speed of this process has allowed a daily discussion on the interpretation of the archaeological stratigraphy and on 3D spatial relations between layers, structures and phases of excavation. The excavation of an entire building (B89) has allowed testing the system in one single context so that to produce a 3D multilayered model of stratigraphy related to an entire building. In addition a 3D model of the painted wall of Building 80 was created in 3D computer vision in order to study the relations
Laser scanning
between micro-layers of frescos and the surface of the wall.
Identification of archaeological layers and recognition of shapes and edges. Cleaning of the surface (in the case of computer vision applications). Registration of targets by total station (so that all the models can be georeferenced with the excavation’s grid).
The digital workflow for the computer vision processing is based on 1) photos alignment; 2) construction of the geometry (meshes) 3) texturing and ortophoto generation. The accuracy by computer vision measured in 2011 models was around 5 mm.
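The photo-alignment, meshing and texturing steps described above are routinely scripted in batch. As a minimal illustrative sketch only — assuming the Python module of current Agisoft Metashape, the successor of the Photoscan version used on site, and not the actual tool chain of the project; method names and defaults vary between releases — a daily processing run for one stratigraphic unit might look like this:

```python
# Hypothetical batch script: align photos, build a mesh, texture it and export.
# Assumes Agisoft Metashape's Python API; not the exact workflow used at Çatalhöyük.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()

# Photos of one excavation layer/unit, shot after cleaning the surface.
chunk.addPhotos(glob.glob("layers/unit_4067/*.JPG"))

chunk.matchPhotos()        # 1) photo alignment: feature matching between images
chunk.alignCameras()       #    camera poses + sparse point cloud
chunk.buildDepthMaps()     # 2) dense reconstruction
chunk.buildModel()         #    triangulated mesh from the depth maps
chunk.buildUV()            # 3) texturing
chunk.buildTexture()

chunk.exportModel("layers/unit_4067.obj")   # e.g. for cleaning/alignment in Meshlab
doc.save("layers/unit_4067.psx")
```

Targets surveyed by total station can then be used to bring each exported model into the excavation grid (sketched further below).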
The use of georeferenced targets on site enabled the automatic georeferencing of the 3D models within the excavation grid. In this way all the 3D information recorded during the excavation is correctly oriented and integrated with all the 2D maps, GIS layers and archaeological data.

The last part of the work was the 3D stereo implementation of the models for the OgreMax viewer and for Unity 3D, in order to display them in stereo projection. For this purpose we used a DLP projector (Acer H5360) in association with the NVIDIA 3D Vision kit and a set of active stereo glasses. Buildings B77 and B89 were implemented, during the excavation, for real-time stereo visualization (walkthrough, flythrough, rotation, zooming and panning). Thanks to the portability of this system, the stereo projection was available in the seminar room for the whole duration of the excavation.
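The text does not specify how the target coordinates were applied, so the following is only a generic sketch of the underlying operation: estimating the rigid transformation that maps the targets identified in a scan or photogrammetric model onto the same targets measured by total station in the excavation grid (the classical SVD/Kabsch solution), and applying it to the whole model.

```python
# Generic target-based georeferencing sketch (NumPy only); illustrative,
# not the actual routine used on site.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of matching target coordinates
    (model space vs. total-station / excavation-grid space)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Example (invented numbers): three or more targets seen in the model and surveyed on the grid.
targets_model = np.array([[0.12, 0.40, 0.03], [1.85, 0.42, 0.05], [0.95, 2.10, 0.02]])
targets_grid  = np.array([[1012.4, 2987.1, 1005.3], [1014.1, 2987.3, 1005.3], [1013.1, 2988.9, 1005.3]])

R, t = rigid_transform(targets_model, targets_grid)
vertices = np.loadtxt("unit_4067_vertices.xyz")   # hypothetical export of the mesh vertices
vertices_grid = vertices @ R.T + t                # whole model moved into the excavation grid
```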
6.1.5 CONCLUSIONS

This new digital phase of research and communication makes it possible to review the entire digital workflow, from data capturing to the final documentation and reconstruction process. The integrated use of different technologies of data capturing and post-processing generates a more sophisticated pipeline of digital interpretation, thanks to the comparison among models, meshes, geometries and point clouds. In addition, the speed of the whole digital process increases the capacity for interpretation during the excavation and, more specifically, makes it possible to simulate the entire excavation in 3D.

If at the beginning of virtual archaeology the goal was to reconstruct the past (mainly in computer graphics), at present the past can be simulated in virtual environments, re-elaborated on the Internet and transmitted by different social media. This last, born-digital phase is completely different: the bottom-up phase during the fieldwork, the documentation process and the 3D modeling produce an enormous quantity of data, of which only a low percentage is really used and shared. Instruments, tools and software for data capturing and real-time rendering have substantially increased the capacity of digital recording, but there are not yet adequate instruments for interpretation and communication. The interpretation is often hidden somewhere in or behind the models, and we do not have the key for discovering or extracting it from the digital universe. The research work of the last two decades has concentrated more on recording tools and data entry than on accurate analyses and interpretations. The result is that too much information and too little information have a similar effect: there is no way to interpret it correctly.

Humans, as visual animals, have constructed their hermeneutic skills throughout several generations of genetic and cultural evolution. Digital materiality is a new hermeneutic domain, with different rules, spaces and contexts. The informative content of a complex digital representation can be more than authentic: it is hyper-real. This hyper-real archaeology ultimately elaborates much more data and information than in the past; this new digital materiality therefore has to be studied with a different hermeneutic approach.

One more thing to consider in this new dimension of virtual interaction in archaeology is digital materiality. The cyber world is now populated by digital artifacts and affordances: they create networks of a new, totally digital material culture. The multiplication of affordances in a virtual environment depends on interaction design and on the digital usability of the models. There are therefore new material contexts to analyze: shall we create specific taxonomic approaches for this domain? New classes and categories of digital materiality? When we analyze, for example, a 3D model of a statue or a potsherd and we compare it with the original, we assume that the 3D model is a detailed copy of a real artifact. Is that true? Actually it is not: a digital artifact is a representation of an object simulated by different lights, shadows and contexts, and measurable at scale; in other words, it is a simulated model, not a copy or a replica. Of course there are several similarities between the digital artifact and the real one, but we cannot use the same analytical tools. Hands-on experience of real artifacts produces a different feedback compared with the digital ones; in some circumstances the virtual object carries “denser” information and is comprehensible from perspectives not necessarily reproducible in the real world.

The future of digital archaeology lies in the interactive kinesthetic process: enactive embodiments of data, models, users and human avatars, a continuous work in progress. If in the past the attention was focused on the validation of models and environments, the future of archaeological information lies in the digital performance between operators in shared environments and cyber worlds. We could say “performing the past” rather than “reconstructing” it. The virtual performance represents a new digital frame within which the archaeological interpretation can be generated and transmitted.

Ultimately, Teleimmersive archaeology is still at an embryonic stage of development, but collaborative minds at work simultaneously in the same immersive cyberspace can potentially generate new interpretations and simulation scenarios never explored before.

Acknowledgements

The Teleimmersive archaeology project was supported by the Center for Information Technology Research in the Interest of Society (CITRIS) at the University of California, Berkeley. We also acknowledge financial support from NSF grants 0703787 and 0724681, HP Labs, and The European Aeronautic Defence and Space Company (EADS) for the implementation of the teleimmersion software. We thank Ram Vasudevan and Edgar Lobaton for the stereo reconstruction work at the University of California, Berkeley. We also thank Tony Bernardin and Oliver Kreylos from the University of California, Davis for the implementation of the 3D video rendering. For the project 3D Archaeology at Çatalhöyük, special thanks to all the students and participants involved in the fieldwork and lab post-processing, and in particular Fabrizio Galeazzi (2010 season), Justine Issavi (2010-11), Nicola Lercari (2011) and Llonel Onsurez (2010-11).

Bibliography
BARCELÓ, J.A.; FORTE, M. et al. 2000. Virtual reality in archaeology. Oxford, ArchaeoPress (BAR International Series 843).
BATESON, G. 1972. Steps to an ecology of mind. New York, Ballantine Books.
BENTKOWSKA-KAFEL, A.; DENARD, H. et al. 2011. Paradata and transparency in virtual heritage. Farnham, Surrey; Burlington, VT, Ashgate.
CAMERON, F. and KENDERDINE, S. 2010. Theorizing digital cultural heritage: a critical discourse. Cambridge, Mass.; London, MIT Press.
CHAMPION, E. 2011. Playing with the past. Human-computer interaction series. London; New York, Springer.
DASGUPTA, S. 2006. Encyclopedia of virtual communities and technologies. Hershey, PA, Idea Group Reference.
FORTE, M. 1997. Virtual archaeology: re-creating ancient worlds. New York, NY, Abrams.
FORTE, M. 2000. About virtual archaeology: disorders, cognitive interactions and virtuality. In Barceló, J.; Forte, M.; Sanders, D. (eds.), Virtual reality in archaeology. Oxford, ArchaeoPress (BAR International Series 843): 247-263.
FORTE, M. 2007. La villa di Livia: un percorso di ricerca di archeologia virtuale. Roma, “L’Erma” di Bretschneider.
FORTE, M. 2009. “Virtual Archaeology. Communication in 3D and ecological thinking.” In Frischer, B. and Dakouri-Hild, A. (eds.), Beyond Illustration: 2D and 3D Digital Technologies as Tools for Discovery in Archaeology. Oxford, Archaeopress: 31-45.
FORTE, M. 2010. Cyber-archaeology. Oxford, Archaeopress.
FORTE, M. and KURILLO, G. 2010. “Cyberarchaeology – Experimenting Teleimmersive Archaeology.” 16th International Conference on Virtual Systems and Multimedia (VSMM 2010), Oct 20-23, 2010: 155-162.
HODDER, I. 2000. Towards reflexive method in archaeology: the example at Çatalhöyük. BIAA monograph no. 28. Cambridge, McDonald Institute for Archaeological Research, University of Cambridge.
HODDER, I. 2006. Çatalhöyük: the leopard’s tale: revealing the mysteries of Turkey’s ancient town. London, Thames & Hudson.
KAPP, K.M. 2012. The gamification of learning and instruction: game-based methods and strategies for training and education. San Francisco, Calif.; Chichester, Jossey-Bass; John Wiley [distributor].
MATURANA, H.R. and VARELA, F.J. 1980. Autopoiesis and cognition: the realization of the living. Dordrecht, Holland; Boston, D. Reidel Pub. Co.
ZUBROW, E. 2010. From Archaeology to I-archaeology: Cyberarchaeology, paradigms, and the end of the twentieth century. In Forte, M. (ed.), Cyber-archaeology. Oxford, Archaeopress: 1-7.
ZUBROW, E.B.W. 2011. The Magdalenian household: unraveling domesticity. Albany, N.Y., State University of New York Press.
6.2 VIRTUAL REALITY & CYBERARCHAEOLOGY – VIRTUAL MUSEUMS
Sofia PESCARIN

In this chapter the author will focus on virtual museums, an application area related to virtual heritage. She will analyse what a virtual museum is, together with its characteristics and categories. The section closes with four examples of Virtual Museums.

6.2.1 MUSEUMS AND VIRTUAL MUSEUMS

The term Virtual Museum has become more and more used in the last 10 years, but it has also been adopted in very different ways, referring to on-line museums, 3D reconstructions, interactive applications, etc. As F. Antinucci was writing in 2007, “this fact immediately becomes apparent when we observe the various entities that are called by this name and realize that we are dealing with a wide variety of very different things, often without any theory or concept in common” [Antinucci, 2007: 79].

Virtual Museum is made of two terms: “virtual” and “museum”. The definition of “museum” is widely accepted and approved. ICOM’s updated definition refers to a “museum” as “a non-profit, permanent institution in the service of society and its development, open to the public, which acquires, conserves, researches, communicates and exhibits the tangible and intangible heritage of humanity and its environment for the purposes of education, study and enjoyment” (http://icom.museum/who-we-are/the-vision/museum-definition.html). On the other side, the term “virtual” is the real cause of the non-univocal definition of virtual museum, since it is used in the ICT community as connected to interactive real-time 3D, while in the Cultural Heritage community it is used in a broader and more epistemological way, often including any reconstruction, independently of the presentation layer.

So, what is a “Virtual Museum”? Although the ‘Encyclopædia Britannica’ refers to a virtual museum as a “collection of digitally recorded images, sound files, text documents, and other data of historical, scientific, or cultural interest that are accessed through electronic media. A virtual museum does not house actual objects and therefore lacks the permanence and unique qualities of a museum in the institutional definition of the term. In fact, most virtual museums are sponsored by institutional museums and are directly dependent upon their existing collections”, an interesting point is made by Antinucci in the previously cited paper. He states that there is an easy exercise that can be done, defining what “is not” a Virtual Museum. He proposes that it “is not the real museum transposed to the web (or to any electronic form)”, nor “an archive of, or a database of, or an electronic complement to the real museum”, since these are not meant for communication, and finally nor “what is missing from the real museum”. He finally underlines that “the visual narrative is the best means to effectively communicate about objects in a museum to the ordinary visitor” (Antinucci, 2007: 80-81).

Coming back to the ICOM definition, there are five interesting characteristics common to a museum and a virtual museum: 1) there is often an institution behind the museum; 2) heritage (tangible and intangible) forms the collections; 3) it always has a communication system; 4) it is created to be accessed by a public; 5) it is built following one or more scopes (education, study, enjoyment).

Virtual Museums are aimed at creating a connection to the remains of our past and to their knowledge, with a fundamental focus on users. They are communication media developed on top of different technologies, whose goal is to build a bridge between Heritage and People: to make users experience the future of their past. Virtual Museums are the application domain of several different research fields: content-related research, cognitive sciences, ICT and, more specifically, interactive digital media, Technology Enhanced Learning (TEL), serious and educational games. They are aggregations of digital contents (various kinds of multimedia assets: 3D models, audio, video, texts) built on top of a narrative or descriptive story, with a presentation layer which defines the specific ICT solution and the behaviours.

This definition is a work in progress within the activities of V-MUST.NET (www.v-must.net), the network of excellence financed by the European Commission on Virtual Museums.
6.2.2 CATEGORIES OF VIRTUAL MUSEUMS

As we have seen, the definition of virtual museum is quite wide. This is the reason why there are several types of virtual museums. They can be defined according to their: 1) content; 2) interaction technology; 3) duration; 4) communication; 5) level of immersion; 6) distribution; 7) scope; or 8) sustainability level. These eight categories have a direct implication on the technical and digital asset development.

1) Content: There are several types of virtual museums if we consider their content, such as archaeology, history, art, ethnography, natural history, technology and design virtual museums.

2) Interaction: If we consider interaction technology, there are two main types: interactive virtual museums, which use either device-based interaction (such as mouse or joystick) or natural interaction (speech- or gesture-based interaction), and non-interactive virtual museums, which provide the user with passive engagement.

3) Duration: A virtual museum can be installed and be accessible continuously, on line or inside a museum (permanent virtual museum), or it may be playable not continuously, but only for a limited time (temporary or periodic virtual museums). These two cases have different needs and requirements, especially related to their maintenance and sustainability.

4) Communication: An interesting distinction in virtual museums regards the communication style. Although there are several types of narratives, a basic distinction can be made among exposition, description and narration. A narration implies a sequence of events which are reported to a “receiver” in a subjective way. In exposition or description the concepts are defined and interpreted so as to inform.

5) Level of immersion: Following Carrozzino and Bergamasco [Carrozzino & Bergamasco, 2010] there are three main categories related to the level of immersion: high immersion, low immersion and non-immersive. While for non-immersive virtual museums the concept is quite clear, a distinction should be made between high and low immersion. In the first case, we are dealing with virtual reality systems where both the visual and the audio systems are designed to immerse the users deeply in the digital environment, through 3D stereo projected on a large screen and multichannel sound (such as in a CAVE). In the second case a lower level is guaranteed, such as in the case of an immersive workbench, where either sound or vision is involved. This is the case of head-mounted displays, wearable haptics or retinal displays, where either sight or hearing is engaged.

6) Distribution: A further category regards how the virtual museum is distributed. It might be not distributed at all, as in the case of an on-site installation inside a museum not connected to the Internet, or distributed.

7) Scope: An important distinction regards the aim, the scope for which a virtual museum has been developed. This issue in fact has an impact on the application itself. In the recent analysis within the V-MUST project we have distinguished six possible scopes: educational, edutainment, entertainment, research, enhancement of the visitor experience and promotion. In an educational virtual museum the main focus is addressed to specific instructional purposes, while in edutainment – following Chen and Michael [Chen, Michael 2006] – the concept is related to serious games, where fun and entertainment are strictly related to the transmission of specific information and to fostering learning. In entertainment, in fact, the focus lies in the fun and enjoyment which are at the base of the development, while research purposes are addressed when testing or analysing specific aspects of interest to a restricted scientific community. Virtual Museums may also be developed to enhance the visitor experience of a site or a museum, or just to promote or advertise a specific cultural heritage.

8) Sustainability: There is finally a category that is more and more perceived as important to be defined, and that regards the level of sustainability of a project, meaning the capacity to be persistent and durable in the future. In fact the life-cycle of many installations is still very limited in time. Furthermore, important long-lasting projects are today completely lost and inaccessible, due also to the lack of preservation policies. An entire digital patrimony is in danger due to the lack of a shared methodology for preserving content. This danger is felt more and more when this digital patrimony is the only testimony of heritage that has disappeared or is in danger (e.g. the Lascaux Caves). In these cases, the need to pair the real artefact or the real site with Virtual Museum installations is particularly evident. This characteristic, in the case of virtual museums, might be verified through their level of re-usability or exchangeability (in terms of software, hardware or digital multimedia assets), and it has a connection with the approach followed (open source, open formats).

In the following section, examples of various types of virtual museums are described. In order to simplify the possible interconnections between the categories, we have defined four main types of virtual museums, representing a cross-selection:
On site Virtual Museum
On line Virtual Museum
Mobile Virtual Museum or Micro Museum
Not interactive Virtual Museum
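The bracketed category strings that open each example below can be read as values along these eight axes. Purely as an illustrative sketch — the chapter itself proposes no software representation — such a classification could be encoded for cataloguing purposes roughly as follows, using the Scrovegni Chapel annotation from the next subsection:

```python
# Hypothetical encoding of the eight classification axes; field values follow the
# bracketed annotation of the Scrovegni Chapel example in the next subsection.
from dataclasses import dataclass

@dataclass
class VirtualMuseum:
    title: str
    content: str          # 1) archaeology, history, art, ...
    interaction: str      # 2) interactive (device/natural) or non-interactive
    duration: str         # 3) permanent, temporary, periodic
    communication: str    # 4) exposition, description, narration
    immersion: str        # 5) high, low, non-immersive
    distribution: str     # 6) distributed (on line, mobile) or not distributed (on site)
    scope: str            # 7) education, edutainment, entertainment, research, ...
    sustainability: str   # 8) sustainable, partially sustainable, ...

scrovegni = VirtualMuseum(
    title="Virtual museum of the Scrovegni Chapel (Padova, 2003-)",
    content="art history",
    interaction="interactive VR",
    duration="permanent",
    communication="descriptive",
    immersion="non-immersive",
    distribution="not distributed (on site)",
    scope="enhancement of visitor experience",
    sustainability="partially sustainable",
)
```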
6.2.2.1 Scrovegni Chapel (2003)

[On site virtual museum. Categories: art history, interactive VR, descriptive, permanent, not-immersive, partially sustainable, not distributed (on site), enhancement of visitor experience.]

In 1971, only 8 years after the last restoration of the Scrovegni Chapel in Padova (Italy), the superintendent announced rapidly growing damage, already visible on Giotto’s 13th-century frescos, due mainly to pollution. After the earthquake of 1976 further damage was registered, and therefore a new restoration programme was started. In 2002 the Chapel became accessible to the public again, but with limitations on the time spent inside the monument (15 minutes maximum) and on the number of visitors per day (groups of maximum 15 people each time). For this reason, in 2000 a multimedia and virtual reality project was started by Padova city council in cooperation with CNR ITABC. The project, finished in 2003 with the official opening of the new multimedia room (Sala Wiegand) inside the Eremitani Museum, is still accessible today. It includes a non-interactive movie, which follows a narrative style, two multimedia interactive applications and one main desktop-based virtual reality (DVR) application, where the users can freely explore the monument with its frescos and its history in 3D and in real time. A cyber map was created describing the conceptual structure of the application [Forte et alii 2011].

Figure 1. The virtual museum of the Scrovegni chapel (Padova, IT, 2003-): the VR installation at the Civic Museum and the cybermap which is part of the VR application (courtesy of CNR ITABC [M. Forte, E. Pietroni, C. Rufa] and Padova city council [D. Banzato])

6.2.2.2 Aquae Patavinae VR (2010-2011)

[On line virtual museum. Categories: archaeology, interactive VR, descriptive, permanent, not-immersive, sustainable, on line, research.]

Aquae Patavinae VR is an on-line virtual museum developed by CNR ITABC in cooperation with the University of Padova and funded by MIUR and the Veneto Region. It is part of a wider project whose goal is to study, reconstruct and promote the thermal landscape typical of the Euganean Hills, around Montegrotto Terme (Padova, Italy: www.aquaepatavinae.it). The known elements of this territory are diverse and spread over a wide area: there is only one primary tourist site; some are still under archaeological excavation and cannot yet be seen, while others are recognizable only through small scattered evidences identified through archaeological and geological surveys, historical studies and remote sensing of the whole area. The project is directed mainly at non-expert users. An on-line interactive application has been set up and integrated into the main website. A plug-in approach was used to enable full integration inside the browser of the 3D interaction with the complex scenes (DEM, geo-images, 3D models obtained with RBM and IBM techniques and with non-reality-based modelling, vegetation, water effects, etc.), based on the OpenSceneGraph library (www.openscenegraph.org) and the OSG4WEB project [Fanini et al., 2011; Calori et al., 2009]. The user interface and the interaction of the plug-in were improved: the navigation system is personalized according to the type of exploration, the scale of the landscape and the visualisation device (fly, walk; touch-screen, natural interaction); a specific user interface was developed through which the user can upload different models (interpreted – simulated – reconstructed) onto the landscape, in some cases as transparent volumes above the original 3D archaeological remains; a loading system for information in 3D was added, together with a dynamic real-time system to place plants directly in the scene, useful in the case of garden reconstructions. Although OSG4WEB is a still ongoing open source project, it remains a good solution, at least in the open source panorama, for creating an on-line virtual museum made of complex scenes, based on geographical datasets and complex interactions. Nevertheless, a plug-in-free approach would be preferable, especially in the case of non-expert users, and it is desirable that in the near future WebGL will offer concrete solutions (http://www.khronos.org/webgl/).

Figure 2. Aquae Patavinae VR presented at Archeovirtual 2011 (www.archeovirtual.it): natural interaction through the web

6.2.2.3 Matera: tales of a city (2009-2011)

[Mobile virtual museum. Categories: history, interactive, narrative, permanent, not-immersive, partially re-usable, distributed: mobile, edutainment.]

Another type of virtual museum is one that can be accessed through a mobile device, such as a smartphone or a tablet. An example is “Matera: tales of a city” [Pietroni et al., 2011], a project developed by CNR in cooperation with the Basilicata Region. The project is aimed at creating a digital platform able to help tourists and visitors before and during the visit of Matera. The cultural content includes audio-visuals, texts and 3D models, among which a complete reconstruction of the landscape and of the city of Matera in nine historical periods, from the Pliocene up to the present day (fig. 3). The result is an application that integrates different digital assets, used and combined following a narrative approach: from a more traditional multimedia approach to a 3D real-time navigation system for iPad.

Figure 3. 3D reconstruction parts of the project “Matera: tales of a city” with a view of the same place in different historical periods (courtesy of S. Borghini, R. Carlani)

6.2.2.4 Apa (2009-2011)

[Not interactive virtual museum. Categories: history, not-interactive, narrative, permanent, immersive, sustainable, not-distributed (on site), edutainment.]

The last example is an open project where Blender, the 3D modelling software (www.blender.org), was used as the main tool to develop a stereoscopic computer-animation movie about the story of the city of Bologna. This software was also selected so as to become a real training environment, to which several students have contributed. A rendering farm, also based on Blender, and a subversioning … were developed to simplify the production and help content developers, directors, artists, experts, etc. For the movie, the city of Bologna has been completely reconstructed in its Etruscan, Roman and Medieval phases up to the present day, re-using digital assets and knowledge previously created by the main research institutions (University, CNR, ENEA, City Council, IBC). The result is now accessible inside the new museum of the city, in an immersive room (fig. 4) [Guidazzoli et al., 2011]. The entire digital asset is soon going to be shared under a Creative Commons licence and to serve as a training example for the development of similar projects.

Figure 4. Immersive room with the Apa stereo movie inside the new museum of the city of Bologna (courtesy of CINECA)

6.2.3 FUTURE PERSPECTIVES

If we look at the Virtual Museum field as an interdisciplinary domain, as it is, then it becomes more and more clear that technology is just a part of the question. In fact it should be selected strictly in accordance with other relevant categories such as the communication style and the scope (following also the 2nd Principle of the London Charter: “A computer-based visualisation method should normally be used only when it is the most appropriate available method for that purpose”; www.londoncharter.org). Moreover, there are several problems that are still open and will need to be faced in the next years, such as the duration of the life-cycle of a virtual museum. Projects such as V-MUST (www.v-must.net) are more and more focused on improving the level of sustainability of such projects.

Interesting perspectives for the future of the domain include: detail enhancement and support to reconstruction based on Artificial Intelligence, Neural Networks, Genetic Art and Procedural Modeling; collaboration in inclusive environments; simulation in real-time environments; 3D collaborative environments; multi-user or serious games; multi-user virtual museums.

References

ANTINUCCI, F. 2007. The virtual museum. In: Virtual Museums and Archaeology. The contribution of the Italian National Research Council, ed. P. Moscati, Archeologia e Calcolatori, Suppl. 1, 2007: 79-86.
CALORI, Luigi; CAMPORESI, Carlo; PESCARIN, Sofia 2009. “Virtual Rome: a FOSS approach to Web3D.” Proceedings of the 14th International Conference on 3D Web Technology (Web3D 09), 2009.
CARROZZINO, M. and BERGAMASCO, M. 2010. Beyond virtual museums: Experiencing immersive virtual reality in real museums. Journal of Cultural Heritage, 11: 452-458.
CHEN, Sande and MICHAEL, David 2006. Serious games: Games that educate, train, and inform. Boston, MA, Thomson Course Technology.
FANINI, Bruno; CALORI, L.; FERDANI, D.; PESCARIN, S. 2011. “Interactive 3D Landscapes Online.” Proceedings of the 3D Virtual Reconstruction and Visualization of Complex Architectures Conference (3D-ARCH 2011), 2-5 March 2011, Trento, Italy.
FORTE, M.; PESCARIN, S.; PIETRONI, E.; RUFA, C.; ACILIERI, D.; BORRA, D. 2003. The multimedia room B of the Scrovegni chapel: a virtual heritage project. In “Enter the Past. The E-way into the four dimensions of Cultural Heritage”, Vienna, Apr. 2003, BAR International Series 1227, Oxford 2004: 529-532.
GUIDAZZOLI, A.; CALORI, L.; DELLI PONTI, F.; DIAMANTI, T.; IMBODEN, S.; MAURI, A.; NEGRI, A.; BOETTO COHEN, G.; PESCARIN, S.; LIGUORI, M.C. 2011. “Apa the Etruscan and 2700 years of 3D Bologna history.” SIGGRAPH Asia 2011 Posters, Hong Kong, China, 2011.
PIETRONI, Eva; BORGHINI, Stefano; CARLANI, Raffaele; RUFA, Claudio 2011. Matera città narrata project: an integrated guide for mobile systems. In ISPRS Archives Volume XXXVIII-5/W16, ISPRS Workshop ‘3D-ARCH 2011’ “3D Virtual Reconstruction and Visualization of Complex Architectures”, 2-4 March 2011, Trento, Italy, eds. Fabio Remondino, Sabry El-Hakim.
WebGL 2009. Khronos Group. WebGL – OpenGL ES 2.0 for the Web. http://www.khronos.org/webgl.
7 CASE STUDIES
7.1 3D DATA CAPTURE, RESTORATION AND ONLINE PUBLICATION OF SCULPTURE Bernard FRISCHER
7.1.1 INTRODUCTION

Homo sapiens is an animal symbolicum (Cassirer 1953: 44). Sculpture is a constitutive form of human artistic expression. Indeed, the earliest preserved examples of modern human symbol-making are not (as one might think) the famous 2D cave paintings from Chauvet in France (ca. 33,000 BP) but 3D sculpted statuettes from the Hohle Fels Cave near Ulm, Germany dating to ca. 40,000 BP (Conard 2009). Practitioners of digital archaeology must thus be prepared to handle works of sculpture, which constitute an important class of archaeological monument and are often essential components of virtual environments such as temples, houses, and settlements. The challenges of doing so relate to two typical characteristics of ancient sculpture: its form tends to be organic; its condition usually leaves something to be desired.

The organic nature of sculpture means that accurate, realistic digital representation requires digital models that are generally more curvilinear and hence more data-intensive than equivalent models of built structures. For example, whereas our lab’s Rome Reborn 3D model could describe the entire 25 sq. km city of late-antique Rome in 9 million triangles (version 1.0),1 the pioneering Stanford Digital Michelangelo Project needed 2 billion triangles to describe the geometry of a single statue—Michelangelo’s “David”—resulting in a file size of 32 GB (Levoy 2003).

“David” is, in archaeological terms, a relatively recent creation and is in good condition. In comparison, the statue typically found on an archaeological site is degraded: its surface has been randomly patinated by components of the soil; its polychromy has faded or disappeared; it is probably covered with many dents and scratches; and it may well be lacking important projecting parts such as limbs, noses, etc. Thus, in the digital representation of sculpture, the issue of restoration is often encountered.

The purpose of this contribution is to discuss the process of how we gather, restore, and publish online the 3D data of sculpture. Given space limitations, the goal is not to be comprehensive but to focus on specific examples handled in recent years by The Virtual World Heritage Laboratory through its Digital Sculpture Project (hereafter: DSP), the goal of which is to explore how the new 3D technologies can be used in promoting new visualizations of and insights about ancient sculpture.2

1 This version was reduced to 3 million triangles by Google engineers for inclusion in the Gallery of Google Earth, where it is known as “Ancient Rome 3D.” The current version (2.2) of Rome Reborn has over 700 million triangles; see www.romereborn.virginia.edu.
2 See www.digitalsculpture.org.

7.1.2 TERMINOLOGY AND METHODOLOGY

In any project of digital representation of sculpture it is essential to define at the outset the goal of the final product. Generally, one is attempting to create: (a) a digital representation of the current state of the statue, (b) a digital restoration of the original statue, or (c) both. In case (a) we speak of a state model; in (b) of a restoration model. Of course, other categories, or subcategories, are possible, including restoration models that show the work of art at different phases of its existence. Finally, there is (d) the reconstruction model, which takes as its point of departure not the physical evidence of the actual statue (or an ancient copy of it), which no longer exists, but representations of the statue in other media such as descriptions in texts or 2D representations on coins, reliefs, etc. (see, in general, Frischer and Stinson 2007). In the case of restoration and reconstruction models, one must reckon with the problem of uncertainty (see Zuk 2008) and the resulting need to offer alternative hypotheses, since our restorations or reconstructions are rarely so securely attested that there is no room for doubt or different solutions. It is essential in scientific 3D modeling to define one’s goal clearly and, when appropriate, to deliver a final product that exposes, whether through words or images, areas of uncertainty and includes relevant alternative solutions.

In developing restorations and alternative solutions, it is important that the technicians who typically are responsible for data capture and modeling work closely with subject experts. Generally the project director will initiate a project by putting together a team of 3D technicians, experts on the statue of interest, and one or more restorers familiar with the work of art. The work of the team is almost always iterative: the technical team produces a model, which is then reviewed by the subject experts and restorers. On the basis of their feedback, a revised model is made. This back-and-forth can continue many times until all members of the team are satisfied that the best possible results have been obtained. It is advisable for the project director to ask all members of the team to sign a statement expressing their approval of the final product and granting their permission to publish it with their names listed as consultants or, if appropriate, co-authors.3
7.1.3 3D DATA CAPTURE
In making a state or restoration model, the first step is to capture the 3D data of the surviving statue, including any fragments. The instrument traditionally used for this is the 3D scanner. Types of scanners include Triangulation, Time-of-Flight, Structured Light and Coherent Frequency Modulated Continuous-Wave Radar.4 The instruments are either fixed, mounted on arms, or movable. Normally, triangulation scanners are not appropriate for campaigns of 3D data capture of sculpture (Guidi, Frischer et al., 2005: 121). In approaching data capture, it is usually important to take into account the material properties of the object at hand. Sculpture has been produced in a variety of materials, including terracotta, stone, and metal. Casts of sculpture have typically been made with Plaster of Paris or resin. The data capture device used should be appropriate to the material of the object. For example, Beraldin, Godin et al., 2001 showed that because of its crystalline surface, marble is not as receptive to 3D scanning as other materials. Our own research (see Appendix I) has shown that Plaster of Paris is highly receptive to 3D scanning. Thus, one might obtain more reliable results from scanning a good first-generation cast of a statue rather than the original, if the original is in marble. Statues whose material is a dark metal are also problematic. In our project to reconstruct the lost portrait-statue of the Hellenistic philosopher Epicurus,5 the DSP used the marble torso in the National Archaeological Museum of Florence and, for the head, the bronze bust from Herculaneum now in the National Archaeological Museum in Naples. For 3D data capture of the bust, the LR200 manufactured by Metric Vision Inc. of Virginia was used because it employs the principle of Coherent Frequency Modulated Continuous-Wave Radar, which (unlike TOF or Structured Light instruments) operates independently of the surface color.

Whether the scanner is fixed, mounted on an arm, or hand-held can also be a relevant consideration. For objects with many occluded parts (e.g., intertwined, projecting limbs such as are found on the “Laocoon” statue group in the Vatican Museums) a fixed scanner may not be as effective in data capture as one mounted on an arm or hand-held, because the latter instruments allow one to move the instrument around and behind the occlusions, something not possible with a fixed scanner. The “Laocoon” was indeed mentioned by Levoy as a statue group that “may be forever unscannable.”6 The DSP succeeded in capturing the data of the “Laocoon”7 despite using a fixed scanner because, to handle the occlusions, it was possible to scan casts of the six individual blocks making up the statue group and to combine the scan data from these with those from the original itself.

Given the range of problems presented by 3D data capture, few university or museum laboratories will be able to purchase the full range of scanners that would ideally be needed to deal with any situation. Moreover, each scanner has its own peculiarities and requires a certain amount of expertise if it is to be deployed to its best advantage. Thus it is often practical to hire a service provider specializing in 3D data capture using the appropriate instrument for the job at hand. The DSP has worked with a number of companies on its projects, including Direct Dimensions,8 Breuckmann GmbH,9 FARO,10 and Leica.11 On other occasions, it has partnered with university laboratories such as the TECHLab12 at the University of Florence and the INDACO Lab at the Politecnico di Milano.13

Thus far, we have discussed traditional devices for data capture. A promising new approach based on “structure from motion” (cf. Dellaert, Seitz et al., 2000) is being developed by Autodesk14 and the EU-sponsored Arc3D project.15 With these free solutions, one can upload digital photographs taken all around and on top of the statue and receive back a digital model. To date, no studies have been undertaken to compare the accuracy of models made by these software packages with those made by traditional data capture. What is clear is that, to date, the resulting models are smaller in terms of polygons and hence lower in terms of resolution. But not all digital archaeological applications require high-resolution models. For example, our lab has had good success with Autodesk’s 123D Catch to produce the small to medium-size models used in virtual worlds. We find that to obtain the best results from 123D Catch, it is important to observe a few simple rules that are designed to get the best performance from the digital camera:

If possible, prepare the statue to be shot by placing 10-20 small post-it notes randomly distributed over the surface. Each post-it should have an “X” written on it to provide a reference point later in the process.

Put a scale with color bars in front of the statue. This will allow you to give your model accurate, absolute spatial values.

Shoot all around the object while keeping the view centered on it. Shoot in a gyric pattern from the bottom to the top around the statue several times, ending with shots all around the top.

Make sure that shots overlap.

Keep the lighting constant.

Keep the object fixed: do not rotate it.

Depth of field should be as big as possible so that the maximum area is in focus.

ISO should be set to 100 or 200.

A tripod and shutter remote control should be used.

Format should be RAW.

3 For an example of one way to handle team credits, see: www.digitalsculpture.org/credits.html.
4 For a basic introduction see “3D scanner,” in Wikipedia at http://en.wikipedia.org/wiki/3D_scanner (seen February 1, 2012).
5 See http://www.digitalsculpture.org/epicurus/index.htm.
6 http://graphics.stanford.edu/talks/3Dscanning-3dpvt02/3Dscanning-3dpvt02_files/v3_document.htm.
7 For the results, see http://www.digitalsculpture.org/laocoon/index.html.
8 www.dirdim.com/.
9 www.breuckmann.com/.
10 www.faro.com.
11 www.leica-geosystems.us.
12 www.techlab.unifi.it/.
13 http://vprm.indaco.polimi.it/.
14 See www.123dapp.com/catch.
15 See www.arc3d.be/.
7.1.4 MODEL TYPES AND FORMATS

The 3D data are captured as a collection of points with X, Y, Z spatial coordinates.16 One can either leave the model in point-cloud format or join the points to make a mesh defined by triangles (also known as polygons; hence the term polygonal model).17 Generally, the latter is preferable, since a point cloud only looks like the object represented from a distance but dissolves into its constituent points when one zooms in during visualization. Other advantages of a mesh are that it reduces the data to the bare essentials, the vertices of the polygons can be painted, and the faces can be overlaid with textures. Finally, unlike a point cloud, a polygonal model can be the basis of a restoration, which is made by generating new polygons and adding them to the existing mesh.

As mentioned, reconstruction models are not based on scans of the existing statue but on verbal descriptions or 2D views in media such as coins or reliefs. The DSP uses a process of hand-modeling with the software 3D Studio Max to create reconstruction models in mesh format. As an example, the model of the statue group showing Marsyas, the olea, ficus, et vitis in the Roman Forum can be cited (figure 1). Nothing survives of the group, but it is illustrated on two sculpted reliefs dating to ca. 110 CE (see Frischer forthcoming). Besides 3D Studio Max, published by Autodesk, other hand-modeling packages that are commonly used include Autodesk’s Maya18 and Blender, which is free and open source.19

Figure 1. View of the DSP’s reconstruction of the statue group of Marsyas, olea, ficus, et vitis in the Roman Forum

As for file formats for point clouds, restoration models and reconstruction models, there are no universally accepted standards, but for point clouds PLY and OBJ are well supported; and for polygonal models (including both the restoration and reconstruction varieties) PLY, OBJ, 3DS, and COLLADA are widely used. Generally, one will want to work with a file format that can be imported into Meshlab, a popular free, open-source software for editing point clouds and polygonal models.20 Proprietary software packages such as Geomagic, Polyworks, and Rapidform are also commonly used, though they are often too expensive for the budgets of university and museum laboratories.
16 See the useful introduction in Wikipedia, s.v. “Point cloud” at http://en.wikipedia.org/wiki/Point_cloud (seen February 1, 2012).
17 See, in general, Wikipedia s.v. “Polygon mesh,” at http://en.wikipedia.org/wiki/Polygon_mesh (seen February 1, 2012).
18 http://usa.autodesk.com/maya.
19 www.blender.org/.
20 See http://meshlab.sourceforge.net/; Wikipedia s.v. MeshLab at http://en.wikipedia.org/wiki/Meshlab (seen February 1, 2012).

7.1.5 RESTORATION

Sculpture found in archaeological contexts is generally damaged. Defects can range from minor surface scratches and dents to more major losses such as missing limbs, noses, or entire heads. Sometimes most or all of a statue is preserved but is found broken into fragments. Moreover, the paint on the surface of a work of art rarely survives well in the soil and may be faded, invisible to the naked eye, or lost. A member of the team, typically a restorer, should be skilled in the use of methods and techniques to detect traces of pigments. An art historian can provide useful input for the restoration of large missing pieces such as limbs and heads. In some sculptural traditions, copies were often made of a work of art such as a ruler portrait. Thus, even if the statue of interest is damaged, it may be possible to supplement its missing elements by data collection from other copies. In the latter case, one must be attentive to the issue of scaling: in pre-modern times, the pointing method was not in use (Rockwell 1993: 119-122) and so no two copies are exactly to the same scale. Thus, one cannot simply scan the torso of one copy and the head of another and combine the two digital models. When a statue exists but in separate fragments, it is possible to scan each fragment and to use digital technology to join the individual fragment models in a composite restoration model. An example of this can be seen in the DSP’s project to restore the disassembled fragments of the Pan-Nymph statue group in the Dresden State Museums.21 When direct evidence (e.g., from a copy) is lacking for how a missing element is to be restored, the advice of the team’s art historian is crucial for determining the valid range of alternative solutions available to the artist in the period in which the work of art was created. In the case of the DSP’s Caligula project, the goal of which was to scan, model, and restore the full-length portrait of the emperor Caligula (AD 12-41), the consulting art historians agreed that the missing arms and hands should be restored on the basis of an analogous portrait from Herculaneum of L. Mammius Maximus (cf. Davies 2010). On the other hand, the archaeologist-restorer on the team found only one small area where pigments of pink madder and Egyptian blue were still preserved. These traces suggested that the toga of Caligula was painted with a purple. But whether the purple was confined to a stripe (the toga praetexta) or covered the entire garment (toga purpurea) was uncertain. Moreover, the edge of the garment might have been decorated with a gilt pattern (toga picta), something that emperors were known to have worn. So although the missing arms and hands could be restored with high probability, the toga had to be painted in three alternative ways.22
The DSP makes its restoration models with ZBrush,23 software published by one of its sponsors, Pixologic. Autodesk publishes Mudbox, which has many features similar to ZBrush. The best freeware is Sculptris, also published by Pixologic, which those on a tight budget may find a serviceable alternative.24 For painting 3D meshes, the DSP commissioned a special version of Meshlab known as Meshlab Paint Edition, which it distributes at no cost.25 It is quite easy to use and might be most useful as a way for the art historian on the team to quickly mark up a mesh with color prior to more professional painting by a technician expert in the use of ZBrush or Mudbox. In painting a mesh, it is important to try to imitate the actual painting techniques found in the culture that produced the work of art.

To give a concrete example of restoration, here are the steps involved in using ZBrush to go from the state model of Caligula to the toga praetexta restoration. First, the scan data are brought into ZBrush as geometry. To restore missing elements such as missing limbs, new geometry is created and sculpted using photographs, drawings, or other precedents to achieve a natural appearance. In the case of a missing arm on a sculpture, this might entail using photos of similar sculptures with extant arms, or photographs of models posed in a similar fashion as the sculpture to be restored. Once major restorations have been completed, attention can be turned to the small nicks, dents, and gashes. Many such damages can be easily repaired using ZBrush’s basic tools, and made to look realistic and natural by using the surrounding undamaged geometry of the sculpture as a guide. Following the digital restoration of the geometry of the sculpture, polychromy can be added to the model. This is done in ZBrush using the painting tools. In the case of Caligula, as noted, three different versions of the toga were restored. When the restoration of the geometry and polychromy has been completed to a satisfactory level, turntable animations or still image renderings can be output from ZBrush in a variety of formats and resolutions. The geometry can also be exported and, through a process using MeshLab and MeshLab Paint, be converted into a format suitable for interactive display on the web.26

21 See www.digitalsculpture.org/pan-nymph/index.html.
22 For the state and three restoration models, see www.digitalsculpture.org/caligula/index.html.
23 www.pixologic.com/zbrush/.
24 www.pixologic.com/sculptris/.
25 Meshlab Paint Edition may be downloaded at: www.digitalsculpture.org/tools.html.
26 I thank Matthew Brennan, lead 3D technician in The Virtual World Heritage Laboratory, for his input to this paragraph.
27 http://usa.autodesk.com/3ds-max/.

7.1.6 DIGITAL PUBLICATION

The goal of a project of 3D data capture, modeling, and restoration is generally publication in some format. Typical forms of publication include 2D renderings, video animations, and interactive objects and environments. For 2D renderings and video animations the DSP exports the finished model to 3D Studio Max, a product of Autodesk.27 Sculpture is often an integral component of a reconstructed archaeological site. Finished models of statues can be imported into game engines and integrated into the scene in the right position. Like many laboratories in the field of virtual archaeology, the DSP uses Unity 3D as its preferred game engine. Normally, the statue imported into a game engine has to be simplified from several million to several hundred thousand polygons.

In contrast, the digital model of the statue can be published with little if any loss of resolution and detail through the use of Seymour, a new product developed by a company owned by the present author. Seymour exploits WebGL to make it possible to run interactive 3D models as elements of web pages analogous to text and 2D images. The user downloads a simplified version of the full model to her standard HTML browser. Seymour functions not only in browsers supporting WebGL, such as Chrome, Firefox, and Opera, but also in Internet Explorer, which does not. When the desired view of the model has been set, the user automatically and quickly receives the exact same view of the full model, rendered on the cloud and sent to the user’s browser over the Internet. It is expected that a Seymour web service will be established in the period 2012-13 so that creators of 3D models can embed them where they are needed in their own web publications.

In whatever format a 3D model is published, best practice requires that a report accompany it giving the goals and history of the project and discussing which elements of the model are based on evidence and which on analogy or hypothesis. Best practice also requires inclusion of the appropriate paradata as required by The London Charter.28 A new peer-reviewed, online journal called Digital Applications in Archaeology and Cultural Heritage will appear sometime in the period 2012-13 in which 3D models of sculpture can be published (for more information, see Appendix II).
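The polygon-reduction step mentioned above — from several million triangles down to a count a game engine can handle — is routinely automated. Purely as an illustrative sketch, assuming the open-source Open3D library (which the chapter does not mention) and an invented file name and target count, quadric-error decimation of a scanned statue might look like this:

```python
# Illustrative mesh simplification before game-engine import; not the DSP's actual tool chain.
# "statue_scan.ply" and the target face count are example values.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("statue_scan.ply")
print(f"input: {len(mesh.triangles)} triangles")

# Quadric edge-collapse decimation down to roughly 200,000 triangles,
# a size more comfortable for a real-time engine such as Unity 3D.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=200_000)
simplified.compute_vertex_normals()   # recompute shading normals after the collapse

o3d.io.write_triangle_mesh("statue_game.obj", simplified)
print(f"output: {len(simplified.triangles)} triangles")
```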
Appendix I: 28 A COMPARISON OF CASTS VS. ORIGINALS29 The of this comparison made using theDresden marble statuepurpose and cast (figure 2) “Alexander” in the State Museums is to ascertain two things: (1) which material is more responsive to 3D digital data capture, marble or the plaster used in the cast; (2) how closely does a first-generation plaster cast of a statue correspond to srcinal marble sculpture? In order to make this comparison, the statues were first scanned with a FARO ScanArm. The resulting point clouds were processed with Polyworks and polygonal models were made. As will be seen, these two questions are related since the material qualities of marble turn out to be less receptive to digitization than are those of plaster or silicon. This can be easily seen in figure 3. Whereas the digital model of the plaster cast renders the smooth surface of the marble, that of the marble has a surface marred by bumps that do not correspond to any true feature of the srcinal. Marble is composed primarily of calcite (a crystalline
Figure 2. "Alexander," plaster cast (left) and original marble31 (right) of the torso; front view. Photographs with kind permission of The Dresden State Museums
Marble is composed primarily of calcite (a crystalline form of calcium carbonate, CaCO3).30 As Beraldin, Godin et al. (2001: 1) showed:

Marble's translucency and heterogeneous structure produce significant bias and increased noise in the geometric measurements. … A bias in the depth measurement is also observed. These phenomena are believed to result from scattering on the surface of small crystals at or near the surface.

Beraldin, Godin et al. conclude by noting that noise on the surface of a material such as marble that is not "cooperative" to laser scanning "was estimated to be 2-3 times larger than on optically cooperating surfaces" (Beraldin, Godin et al., 2001: 8). Beraldin, Godin et al. suggest a possible remedy: the development of a predictive algorithm that can digitally correct for the distortion caused by scanning marble surfaces. They do not consider another solution: scanning not the marble original of a statue but a first-generation plaster or silicon cast.
28 www.londoncharter.org/.
29 I thank David Koller for his collaboration in writing the Appendix.
30 Wikipedia, s.v. "Marble sculpture," http://en.wikipedia.org/wiki/Marble_sculpture (seen May 1, 2010).
31 Inventory nr. H4 118/254.
Figure 3. "Alexander," digital model of the cast (left) and of the original (right) of the torso; front view. Note the "bumpiness" or noise in the scan model of the original torso

Figure 4. Tolerance-Based Pass/Fail test of the digital models of the cast and original torso of "Alexander" in Dresden. Green indicates that the two models differ by less than ± 1 mm. Red indicates areas where the difference between the models exceeds ± 1 mm
As can be seen in figure 3, the plaster cast of the torso of "Alexander" is free of noisy bumps. But is there not a loss of accuracy when a cast is made? Borbein has recently traced the development of a negative attitude among art historians and archaeologists toward the plaster cast (Borbein 2000: 29). Perhaps the climax of this trend came several years ago when the Metropolitan Museum in New York deaccessioned its entire cast collection.32 Ironically, this happened just at the time when, in Europe at least, the reputation of the cast was rising again (Borbein 2000: 29).

Figure 5. Error Map comparing the digital models of the cast and original torso of the Dresden "Alexander"

With respect to the second question about the accuracy of casts, we attempt to give a quantitative appraisal of the assertion of cast-maker Andrea Felice that "plaster is the material par excellence in the production of sculptural casts owing to its ease of use, malleability, widespread availability, its granular fineness and thus its notable ability to reproduce detail, its remarkable mechanical resistance (13 N/mm2), its moderate weight, and its excellent rendering of light and shade."33

We ran two tests utilizing the scan models of the torso of the Dresden "Alexander" to address this issue.34 The first test is called a "Tolerance-Based Pass/Fail Comparison." To run it, an acceptable tolerance of variance of one model from another is set; in our case, this was set at 1 mm (= 0.03937 inch, i.e. less than 1/25th of an inch). The red/green "stop light" images show deviations of more than ± 1 mm as being outside the set range and so designated in the images as the color red. Anything within this range passes and is shown as green. The results can be seen in figure 4. Clearly, most of the surface measurements of the two digital models fall within ± 1 mm of each other. To find out how great the difference is in the red areas, we ran a second test called an Error Map Comparison. In this test, a rainbow of colors is output, with each color in the spectrum equivalent to a certain deviation. The results are seen in figure 5. They show that the biggest difference between the two models on the front of the torso is 1.343 mm; on the back it is 1.379 mm. These errors can be expressed in inches as 0.053 inch and 0.054 inch respectively.

32 See www.plastercastcollection.org/de/database.php?d=lire&id=172.
33 www.digitalsculpture.org/casts/felice/
34 I thank Jason Page of Direct Dimensions, Inc. for running this test and producing the images seen in figures 4 and 5.
It is unclear whether the difference between the two models—which, as we have seen, at worst totals about 1/20th of an inch—arises from the calibration of the scanner; the native error rate of the scanner; the lack of receptivity of marble to laser scanning; or a random imprecision in the casting process. Further tests would need to be undertaken to resolve this uncertainty. What is already clear is that even if we attribute the maximum error found to a natural result of the casting process, the difference we detected between the model of the cast and that of the original is trivial for most cultural heritage applications.
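The two comparisons described above can be approximated with standard numerical tools. The following is only a minimal sketch, not the workflow actually used for figures 4 and 5: it assumes the two scan models have already been aligned in the same millimetre-scaled coordinate frame and exported as plain XYZ point lists (the file names are hypothetical), and it uses nearest-neighbour distances as a rough stand-in for a proper surface-to-surface deviation map.

```python
# Hypothetical sketch of a tolerance-based pass/fail test and an error-map
# style summary for two aligned scan models (units assumed to be millimetres).
import numpy as np
from scipy.spatial import cKDTree

TOLERANCE_MM = 1.0    # pass/fail threshold used in the text
MM_PER_INCH = 25.4

def load_vertices(path):
    """Load an ASCII XYZ point file (one 'x y z' triple per line)."""
    return np.loadtxt(path, usecols=(0, 1, 2))

cast = load_vertices("alexander_cast.xyz")      # hypothetical file name
marble = load_vertices("alexander_marble.xyz")  # hypothetical file name

# For every vertex of the cast model, find the closest vertex of the marble
# model; this distance approximates the local deviation between the surfaces.
distances, _ = cKDTree(marble).query(cast)

passed = distances <= TOLERANCE_MM
print(f"Within ±{TOLERANCE_MM} mm (green): {passed.mean():.1%} of points")
print(f"Outside tolerance (red): {(~passed).mean():.1%} of points")

# Error-map style summary: the worst deviation, reported in mm and in inches,
# mirroring the 1.343 mm ≈ 0.053 inch figure quoted above.
worst = distances.max()
print(f"Maximum deviation: {worst:.3f} mm = {worst / MM_PER_INCH:.3f} inch")
```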
Appendix II: DIGITAL APPLICATIONS IN ARCHAEOLOGY AND CULTURAL HERITAGE

The author of the present article has recently been named editor-in-chief of Digital Applications in Archaeology and Cultural Heritage (DAACH). The journal will be the world's first online, peer-reviewed publication in which scholars can disseminate 3D digital models of the world's cultural heritage sites and monuments accompanied by associated scientific articles. The journal aims both to preserve digital cultural heritage models and to provide access to them for the scholarly community in order to facilitate academic debate. DAACH offers scholars the opportunity of publishing their models online with full interactivity so that users can explore them at will. It is unique in that it will provide full peer review for all 3D models, not just the text, 2D renderings, or video animations. It requires all models to be accompanied by metadata, paradata, documentation, and a related article explaining the history and state of preservation of the monument modeled as well as an account of the modeling project itself. The journal focuses on scholarship that either promotes the application of 3D technologies to the fields of archaeology, art and architectural history, or makes a significant contribution to the study of cultural heritage through the use of 3D technology. Creators of 3D models of sculpture are encouraged to consider publishing their work in DAACH.
Bibliography
CONARD, N.J. 2009. “Die erste Venus. Zur ältesten Frauendarstellung der Welt,” in Eiszeit. Kunst und Kultur (Jan Thorbecke Verlag der Schwabenverlag AG, Ostfildern) 268-271.
BERALDIN, J.-A.; GODIN, G. et al. 2001. "An Assessment of Laser Range Measurement on Marble Surfaces," 5th Conference on Optical 3D Measurement Techniques, October 1-4, 2001, Vienna, Austria. 8 pp.; graphics.stanford.edu/papers/marbleassessment/marbre_gg_final2e_coul.pdf.
DAVIES, G. 2010. “Togate Statues and Petrified Orators,” in D.H. Berry and A. Erskine, editors, Form and Function in Roman Oratory (Cambridge) 51-72. DELLAERT, F.; SEITZ, S.M. et al. 2000. “Structure from Motion Without Correspondences,” Proceedings Computer Vision and Pattern Recognition Conference; available online at: (seen February 1, 2012).
BORBEIN, A. 2000. "On the History of the Appraisal and Use of Plaster Casts of Ancient Sculpture (especially in Germany and in Berlin)," in Les moulages de sculptures antiques et l'histoire de l'archéologie. Actes du colloque international, Paris, 24 octobre 1997, edited by Henri Lavagne and François Queyrel (Geneva 2000) 29–43; translated by Bernard Frischer. Available online at: www.digitalsculpture.org/casts/borbein/.
FRISCHER, B. forthcoming. “A New Feature of Rome Reborn: The Forum Statue Group Illustrated on the Trajanic Anaglyphs,” a paper presented at the 2012 Meeting of the Archaeological Institute of America.
CASSIRER, E. 1953. An Essay on Man: An Introduction to a Philosophy of Human Culture (Doubleday, Garden City, NY).
FRISCHER, B. and STINSON, P. 2007. "The Importance of Scientific Authentication and a Formal Visual
Language in Virtual Models of Archeological Sites: The Case of the House of Augustus and Villa of the Mysteries," in Interpreting the Past. Heritage, New Technologies and Local Development. Proceedings of the Conference on Authenticity, Intellectual Integrity and Sustainable Development of the Public Presentation of Archaeological and Historical Sites and Landscapes, Ghent, East-Flanders 11-13 September 2002 (Brussels) 49-83. Available online at: www.frischerconsulting.com/frischer/pdf/Frischer_Stinson.pdf (seen February 1, 2012).
GUIDI, G.; FRISCHER, B. et al. 2005. "Virtualizing Ancient Rome: 3D Acquisition and Modeling of a Large Plaster-of-Paris Model of Imperial Rome," Videometrics VIII, edited by J.-Angelo Beraldin, Sabry F. El-Hakim, Armin Gruen, James S. Walton, 18-20 January 2005, San Jose, California, USA, SPIE. Vol. 5665: 119-133; available online at: www.frischerconsulting.com/frischer/pdf/Plastico.pdf.
LEVOY, M. 2003. "The Digital Michelangelo Project," available online at http://graphics.stanford.edu/projects/mich/ (seen January 10, 2012).
ROCKWELL, P. 1993. The Art of Stoneworking (Cambridge); available online at http://www.digitalsculpture.org/rockwell1.html (seen February 1, 2012).
ZUK, T.D. 2008. "Visualizing Uncertainty," a thesis submitted to the faculty of graduate studies in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Department of Computer Science, University of Calgary (Alberta).
7.2 3D GIS FOR CULTURAL HERITAGE SITES: THE QUERYARCH3D PROTOTYPE Giorgio AGUGIARO & Fabio REMONDINO
7.2.1 INTRODUCTION

Constant advances in the fields of surveying, computing and digital-content delivery are reshaping the way Cultural Heritage can be virtually accessed: thanks to such new methodologies, not only researchers, but also new potential users like students and tourists, are given the chance to use a wide array of new tools and perform analyses to obtain information with regard to art history, architecture and archaeology.

The use of 3D models can be an advantage as they act as "containers" for different kinds of information. Given the possibility to link their geometry to external data, 3D models can be analysed, split into their sub-components and organised following proper rules.

Linking quantitative information (obtained from surveying) and qualitative information (obtained by data interpretation or from other documentary sources), and analysing and displaying it within a single integrated platform, therefore plays a crucial role. In the literature, some approaches exist to associate information with an entire building (Herbig and Waldhusl, 1997), with 2D entities (Salonia and Negri, 2000), with 3D objects (Knuyts et al., 2001), or according to a description model (Dudek et al., 2003).

One useful possibility is offered by 3D computer-simulated models, representing for example both the present and the hypothetical status of a structure. Such models can be linked to heterogeneous information and queried by means of (sometimes Web-enabled) GIS tools. In this way, relationships between structures, objects and artefacts can be explored and changes over space and time can be analysed.

The NUBES Tempus Project [link] (MAP Gamsau Laboratory, France) is an example (De Luca et al., 2011) where 3D models in the field of Cultural Heritage are used for diachronic reconstructions of the past. Segmented models can also help to interpret history, allowing the assembly of sub-elements located in different places but belonging to the same artefact or site (Kurdy et al., 2011).
For some research purposes a traditional 2D approach generally suffices; however, more complex analyses concerning spatial and temporal features of architecture require 3D tools, which, in some cases, have not yet been implemented or are not yet generally available, as more effort has been put in recent years into 3D data visualisation rather than 3D spatial analysis.

With regard to 3D data visualisation, development tools come traditionally from the videogame domain and can be adapted to support 3D geodata (e.g. Unity3D, OSG, OpenSG, OGRE3D, 3DVIA Virtools, etc.), but with limited capabilities when it comes to large and complex reality-based 3D models. Another general constraint resides in the limited query functionalities for data retrieval. These are actually typical functions of GIS packages, which, on the other hand, often fall short when dealing with detailed and complex 3D data.
Nowadays 3D models of large and complex sites are generated using methodologies based on image data (Remondino et al., 2009), range data (Vosselman and Maas, 2010), classical surveying or existing maps (Yin et al., 2009), or a combination of them, depending on the required accuracy, the object dimensions and location, the surface characteristics, the working team experience, the project’s budget. The goal is often to produce multiresolution data at different levels of detail (LoD), both in geometry and texture (Barazzetti et al., 2010; Remondino et al., 2011).
In 1994, VRML (Virtual Reality Modelling Language) was launched; it became an international ISO standard in 1997. The idea was to have a simple exchange format for 3D information.
The specifications for its successor, X3D (eXtensible 3D), were officially published in 2003; its goal was to integrate better with other web technologies and tools. Both X3D and VRML can be used to visualise 3D geoinformation. The VRML and X3D languages have been developed in order to describe 3D scenes in terms of geometry, material and illumination, while still requiring specific plugins or applets for rendering inside a web browser. X3D has the advantage of an XML encoding, which is particularly useful for on-the-fly retrieval in Internet applications.
Research on spatial query and VRML-based visualisation has produced some prototype systems, as in Coors and Jung (1998) or in Visintini et al. (2009), where VRML 3D models are linked to external databases and can be accessed via the Web.

Virtual globes (e.g. Google Earth and NASA World Wind) have gathered much visibility in recent years as visualisation tools for geospatial data: the user can get external information by clicking on the selected object, or by activating a selectable layer. However, more complex queries cannot generally be performed "out-of-the-box", unless some specific functions are written ad hoc. In Brovelli et al. (2011), for example, the buildings in the city of Como can be selected according to the construction date and the results visualised both in 2D and 3D in a web browser, where a multi-frame view is implemented using both WebGIS and NASA World Wind technologies.

Regarding X3D, as it strives to become the 3D standard for the Web integrated in HTML5 pages, X3DOM has been proposed as a syntax model and implemented as a script library to demonstrate how this integration can be achieved without a browser plugin, using only WebGL and JavaScript. The 3D-COFORM project web site presents a collection of 3D scanned artefacts visualised using the X3DOM technology [link]. X3D is being exploited even for mobile devices, e.g. in the case of the Archeoguide Project (Ioannidis and Carlucci, 2002), or in Behr et al. (2010) and Jung et al. (2011), where examples of mobile augmented reality in the field of Cultural Heritage are reported.

Currently on-going developments in terms of HTML5 and WebGL offer great potential for the future development of more interactive, responsive, efficient and mobile WebGIS applications. This includes the use of 2D, 3D and even temporal and animated content without the need for any third-party plugins (Auer 2012). The trend is therefore that more and more graphics libraries will rely on WebGL (a list of the principal ones can be found on the Khronos Group's webpage [link]).

The Venus 3D publishing system [link] by the CCRM Labs enables the exploration of high-resolution 3D contents on the web using a WebGL-enabled browser and no additional plugins. The system manages both high- and low-resolution polygonal models: only decimated versions of the models are downloaded and used for interactive visualisations, while the high-resolution models are rendered remotely and displayed for static representations only. In the Digital Sculpture Project [link] (Virtual World Heritage Laboratory, University of Virginia), 3D models of sculptures are published on-line (Frischer and Schertz, 2011). Although this application actually shows just shaded geometries, other textured 3D models of cultural heritage are available in [link].

A more detailed overview of the available 3D Web technologies is presented in Behr et al. (2009), with a division into plugin and plugin-less solutions, and in Manferdini and Remondino (2011). But, as of today, a standard and widely accepted solution does not yet exist.

At urban scale, CityGML [link] represents a common information model for the representation of 3D urban objects, where the most relevant topographic objects in cities and their relations are defined with respect to their geometrical, topological, semantic and appearance properties. Unfortunately, even CityGML's highest level of detail (LoD4) is not meant to handle high-resolution, reality-based models, which are characterised by complex geometry and detailed textures.

Nevertheless, once created, 3D city models can be visualised on the Web through Web3D Services (W3DS). W3DS delivers 3D scenes over the Web as VRML, X3D, GeoVRML or similar formats. It is used not only for producing static scenes, but also for requesting data in order to stream it to the client, which implements a more dynamic visualisation. Its specification is, however, currently still in draft status and not yet adopted by the OGC, although W3DS has already been implemented in some projects, e.g. for the city of Heidelberg in Germany (Basanow et al., 2008). Another web application to share 3D contents is OSG4Web (Pescarin et al., 2008; Baldissini et al., 2009). It consists of a browser plugin that uses the Open Scene Graph library. It supports multiple LoDs and can load different 3D model formats. These peculiarities are particularly useful for large-scale visualisations, such as terrain or urban landscape models.

7.2.2 THE QUERYARCH3D TOOL

This section presents a web-based visualisation and query tool called QueryArch3D [link] (Agugiaro et al., 2011), conceived to deal with multi-resolution 3D models in the context of the archaeological Maya site of Copan (Honduras). The site contains over 3700 structures, consisting of palaces, temples, altars and stelae, spread over an area of circa 24 km2, for which heterogeneous datasets (e.g. thematic maps, GIS data, DTM, hypothetical reconstructed 3D models) have been created over the course of time (Richards-Rissetto, 2010). More recently (Remondino et al., 2009), high-resolution 3D data was acquired using terrestrial photogrammetry, UAV and terrestrial laser scanning.

Figure 1. Different levels of detail (LoD) in the QueryArch3D tool. Clockwise from top-left: LoD1 of a temple with prismatic geometries, LoD2 with more detailed models (only exterior walls), LoD3 with interior walls/rooms and some simplified reality-based elements, LoD4 with high-resolution reality-based models
In the case of Copan, an "ideal" tool should enable centralised data management, access and visualisation. The following characteristics would therefore be desirable: a) the capability to handle 3D multi-resolution datasets, b) the possibility to perform queries based both on geometries and on attributes, c) the choice to visualise and navigate the models in 3D, d) the option to allow both local and on-line access to the contents.

As no tool able to guarantee the four identified properties currently exists, a prototype called QueryArch3D was implemented. Despite being tailored to the needs of researchers working at the Copan archaeological site, the underlying concepts can be generalised to other similar contexts.

Conceptually similar to CityGML, geometric data are organised in successive levels of detail (LoD), provided with geometric and semantic hierarchies and enriched with attributes coming from external data sources. The 3D visualisation and query front-end enables the navigation of the models in a virtual environment, as well as the interaction with the objects by means of queries based on attributes or on geometries. The tool can be used as a standalone application, or served through the web.

In order to cope with the complexity of multiple geometric models, a conceptual scheme for multiple levels of detail, which are required to reflect independent data collection processes, was defined. For the Copan site, four levels of detail were defined for the structures: the higher the LoD rank is, the more detailed and accurate the model is. LoD1 contains simplified 3D prismatic entities with flat roofs. LoD2 contains 3D structures at a higher level of detail, however only the exteriors of the structures; the sub-structures (e.g. walls, roofs or external stairs) can be identified. For LoD2, some hypothetical reconstruction models were used. LoD3 adds the interior elements (rooms, corridors, etc.) to the structures. Some simplified, reality-based models can optionally be added; these reality-based models were obtained from the more detailed ones used in LoD4 by applying mesh simplification algorithms. LoD4 contains structures (or parts of them) as high-resolution models. These models can be further segmented into subparts. Examples are shown in Figure 1.

The adoption of a LoD-dependent hierarchical schema required the contextual definition of geometric and semantic hierarchical schemas. This was achieved by an identification and description of the so-called "part-of relations", in order to guarantee spatio-semantic coherence (Stadler and Kolbe, 2007). At the semantic level, once every structure is defined, its entities are represented by features (stairs, rooms, etc.) and they are described by attributes, relations and aggregation rules between features. If a query on an attribute table is carried out for a certain roof, the user retrieves information not only about the roof itself, but also about which structure contains that roof. However, the semantic hierarchy needs to be linked to the corresponding geometries, too: if a query on an attribute table is carried out for a certain roof, not only should the linked attributes be retrieved, but also the corresponding geometric object. This operation requires, however, structuring the geometric models in compliance with the hierarchy. Some manual data editing was therefore necessary to segment the geometric models into subparts according to the hierarchical schemes. Upon completion of the segmentation, all geometric models were aligned and georeferenced.
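As a minimal illustration of the "part-of" linkage just described—not taken from the actual QueryArch3D implementation, and with all names invented—the sketch below ties each semantic feature to its parent structure and to a geometry identifier, so that a query on a roof also returns the containing structure and the geometric objects to highlight.

```python
# Hypothetical sketch of a spatio-semantic "part-of" hierarchy.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Feature:
    name: str                          # e.g. "roof", "stairs", "room 2"
    geometry_id: str                   # id of the segmented geometric object
    attributes: dict = field(default_factory=dict)
    parent: Optional["Structure"] = None

@dataclass
class Structure:
    name: str
    geometry_id: str
    features: list = field(default_factory=list)

    def add(self, feature: Feature) -> None:
        feature.parent = self
        self.features.append(feature)

temple = Structure("Temple 22", geometry_id="geom_t22_lod2")
roof = Feature("roof", geometry_id="geom_t22_roof", attributes={"material": "vaulted stone"})
temple.add(roof)

# A query on the roof returns its attributes, its containing structure and the
# geometry ids that the 3D front-end should retrieve and highlight.
print(roof.attributes, roof.parent.name, [roof.geometry_id, roof.parent.geometry_id])
```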
In order to reduce data format heterogeneity, the free and open-source DBMS PostgreSQL and its extension PostGIS were chosen as the data repository, thus providing a valuable (and unique) management system for spatial and non-spatial data. For data administration purposes a simple front-end was developed; it connects directly to the PostgreSQL server and allows update operations on the data currently stored. For the interactive 3D visualisation and query front-end, the game engine Unity 3D, an integrated authoring tool for the creation of 3D interactive content, was adopted. Unity allows the development of applications which can also be embedded in a webpage. Moreover, it can be linked to external databases and retrieve data when needed, e.g. by means of a PHP interface between Unity and PostgreSQL.

Inside the 3D environment front-end, the user can perform attribute queries over the whole dataset (e.g. "highlight all structures built by ruler X"; "highlight all altars"; "highlight only stelae belonging to group Y and built in year Z"). The user can also perform standard queries-on-click: once a geometric object is selected, the related attribute values are shown in a text box. However, the amount of retrieved information depends on the LoD: for LoD1 structures, only global attributes are shown, while for higher LoDs the detailed information is also shown, according to the selected segment (Figure 2).
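Such attribute queries can be expressed directly against the PostgreSQL/PostGIS repository. The snippet below is only a hedged sketch: the table and column names (structures, geometries, ruler, lod) are hypothetical and do not reflect the actual QueryArch3D schema, and the real tool issues such requests through its PHP layer rather than from a Python client.

```python
# Hypothetical sketch of an attribute query ("highlight all structures built
# by ruler X") against a PostGIS repository, returning the geometries that the
# 3D front-end would highlight. Connection details are placeholders.
import psycopg2

conn = psycopg2.connect(dbname="copan", user="gis", password="secret", host="localhost")

def structures_built_by(ruler_name):
    """Return ids and WKT geometries of all structures attributed to a ruler."""
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT s.structure_id, ST_AsText(g.geom)
            FROM structures AS s
            JOIN geometries AS g ON g.structure_id = s.structure_id
            WHERE s.ruler = %s AND g.lod = 1;
            """,
            (ruler_name,),
        )
        return cur.fetchall()

for structure_id, wkt in structures_built_by("Ruler X"):
    print(structure_id, wkt[:60], "...")
```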
Figure 2. Different visualisation modes in QueryArch3D: aerial view (a, b), walkthrough mode (b) and detail view (d). Data can be queried according to attributes (a) or by clicking on the chosen geometry (b, c, d). The amount of information shown depends on the LoD: in (b), attributes about the whole temple are shown; in (c), only a subpart of the temple, and the corresponding attributes, are shown

Regarding navigation in the 3D environment, three modes are available: a) an aerial view over the whole archaeological site, where only LoD1 models are shown; b) a ground-based walkthrough mode, in which the user can approach and enter a structure "on foot" up to LoD3 (provided such a model exists, otherwise a lower-ranked model at LoD2 or LoD1 is visualised); and c) a detail view, where LoD4 models are presented.

7.2.3 CONCLUSIONS

The continuous development and improvement of new sensors, data capture methodologies and multi-resolution 3D representations contribute significantly to the growth of research in the Cultural Heritage field. Nowadays 3D models of large and complex sites can be produced using different methodologies that can be combined to derive multi-resolution data and different levels of detail. The 3D digital world is thus providing opportunities to change the way knowledge and information can be accessed and exchanged, as faithful 3D models help to simulate reality more objectively and can be used for different purposes.
The QueryArch3D prototype was developed to address some of the open issues regarding multi-resolution data integration and access in the framework of Cultural Heritage. Some requirements were identified in terms of the capability to handle multi-resolution models, to query geometries and attributes in the same virtual environment, to allow 3D data exploration, and to offer on-line access to the data. QueryArch3D fulfils these requirements. It is still at a prototypic stage, but it is being further developed and improved to extend and refine its capabilities. Adding more high-resolution models into an on-line virtual environment requires good hardware and internet connections; proper strategies will have to be tested and adopted to keep the user experience acceptable as the number of models grows.

As of now QueryArch3D relies on the Unity plugin, but the constant improvements and innovations with regard to the web-based access and visualisation capabilities offered by HTML5 and WebGL will make it possible to switch, at a certain point, to a plugin-free architecture.

The Web has already improved accessibility to 2D spatial information hosted in different computer systems over the Internet (e.g. by means of WebGIS), so the same improvements are expected in the near future for 3D.

References

AGUGIARO, G.; REMONDINO, F.; GIRARDI, G.; VON SCHWERIN, J.; RICHARDS-RISSETTO, H.; DE AMICIS, R. 2011. QueryArch3D: Querying and visualizing 3D models of a Maya archaeological site in a web-based interface. Geoinformatics FCE CTU Journal, vol. 6, pp. 10-17, Prague, Czech Republic. ISSN: 1802-2669.
AUER, M. 2012. Realtime Web GIS Analysis using WebGL. International Journal of 3-D Information Modeling (IJ3DIM), Special Issue on Visualizing 3D Geographic Information on the Web (Special Issue Eds.: M. Goetz, J.G. Rocha, A. Zipf). IGI-Global. DOI: 10.4018/ij3dim.2012070105.
BALDISSINI, S.; MANFERDINI, A.M.; MASCI, M.E. 2009. An information system for the integration, management and visualization of 3D reality based archaeological models from different operators, Remondino F., El-Hakim S., Gonzo L. (Eds): 3D Virtual Reconstruction and Visualization of Complex Architectures, 3rd ISPRS International Workshop 3D-ARCH 2009, Trento, Italy, 38(5/W1), (on CD-ROM).
BARAZZETTI, L.; FANGI, G.; REMONDINO, F.; SCAIONI, M. 2010. Automation in multi-image spherical photogrammetry for 3D architectural reconstructions, Proc. of 11th Int. Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST 2010), Paris, France.
BASANOW, J.; NEIS, P.; NEUBAUER, S.; SCHILLING, A.; ZIPF, A. 2008. Towards 3D Spatial Data Infrastructures (3D-SDI) based on open standards - experiences, results and future issues. In: P. Oosterom, S. Zlatanova, F. Penninga and E.M. Fendel (eds), Advances in 3D Geoinformation Systems, Lecture Notes in Geoinformation and Cartography, Springer Berlin Heidelberg, pp. 65-86.
BEHR, J.; ESCHLER, P.; JUNG, Y.; ZOELLNER, M. 2009. X3DOM – A DOM-based HTML5/X3D Integration Model, Web3D Proceedings of the 14th International Conference on 3D Web Technology, ACM Press, New York, USA, pp. 127-135.
BEHR, J.; ESCHLER, P.; JUNG, Y.; ZÖLLNER, M. 2010. A scalable architecture for the HTML5/X3D integration model X3DOM, Spencer S. (Ed.): Web3D Proceedings of the 15th International Conference on 3D Web Technology, ACM Press, New York, USA, pp. 185-194.
BROVELLI, M.A.; VALENTINI, L.; ZAMBONI, G. 2011. Multi-dimensional and multi-frame web visualization of historical maps. Proc. of the 2nd ISPRS workshop on Pervasive Web Mapping, Geoprocessing and Services, Burnaby, British Columbia, Canada.
COORS, V.; JUNG, V. 1998. Using VRML as an Interface to the 3D Data Warehouse. Proceedings of VRML'98, New York, pp. 121-127.
DE LUCA, L.; BUSSAYARAT, C.; STEFANI, C.; VÉRON, P.; FLORENZANO, M. 2011. A semantic-based platform for the digital analysis of architectural heritage. Computers & Graphics, Volume 35, Issue 2, April 2011, pp. 227-241, Elsevier.
DUDEK, I.; BLAISE, J.-Y.; BENINSTANT, P. 2003. Exploiting the architectural heritage's documentation: a case study on data analysis and visualisation. Proc. of I-KNOW'03, Graz, Austria.
HERBIG, U.; WALDHUSL, P. 1997. APIS: architectural photogrammetry information system. International Archives of Photogrammetry and Remote Sensing, vol. 38(5C1B), Geo-Information Science and Earth Observation, University of Twente, Enschede, The Netherlands, pp. 23-27.
IOANNIDIS, N.; CARLUCCI, R. 2002. Archeoguide: Augmented Reality-based Cultural Heritage On-site Guide, GITC, ISBN 908062053X.
JUNG, Y.; BEHR, J.; GRAF, H. 2011. X3DOM as carrier of the virtual heritage, Remondino F., El-Hakim S. (Eds): Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 4th ISPRS International Workshop 3D-ARCH 2011, Trento, Italy, 38(5/W16), (on CD-ROM).
KNUYTS, K.; KRUTH, J.-P.; LAUWERS, B.; NEUCKERMANS, H.; POLLEFEYS, M.; LI, Q. 2001. Vision on conservation: VIRTERF. Proc. of the Int. Symp. on Virtual and Augmented Architecture. Springer, Dublin, pp. 125-132.
KURDY, M.; BISCOP, J.-L.; DE LUCA, L.; FLORENZANO, M. 2011. 3D Virtual Anastylosis and Reconstruction of several Buildings on the Site of Saint-Simeon, Syria, Remondino F., El-Hakim S. (Eds): Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 38(5/W16), (on CD-ROM).
MANFERDINI, A.M.; REMONDINO, F. 2012. A review of reality-based 3D model generation, segmentation and web-based visualization methods. Int. Journal of Heritage in the Digital Era, Vol. 1(1), pp. 103-124, DOI 10.1260/2047-4970.1.1.103.
PESCARIN, S.; CALORI, L.; CAMPORESI, C.; IOIA, M.D.; FORTE, M.; GALEAZZI, F.; IMBODEN, S.; MORO, A.; PALOMBINI, A.; VASSALLO, V.; VICO, L. 2008. Back to 2nd AD: A VR on-line experience with Virtual Rome Project, Ashley M., Hermon S., Proenca A., Rodriguez-Echavarria K. (Eds), Proc. 9th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST 2008), pp. 109-116.
REMONDINO, F.; GRÜN, A.; VON SCHWERIN, J.; EISENBEISS, H.; RIZZI, A.; SAUERBIER, M.; RICHARDS-RISSETTO, H. 2009. Multi-sensors 3D documentation of the Maya site of Copan. Proc. of 22nd CIPA Symposium, 11-15 Oct., Kyoto, Japan.
REMONDINO, F.; RIZZI, A.; BARAZZETTI, L.; SCAIONI, M.; FASSI, F.; BRUMANA, R.; PELAGOTTI, A. 2011. Geometric and Radiometric Analyses of Paintings. The Photogrammetric Record.
RICHARDS-RISSETTO, H. 2010. Exploring Social Interaction at the Ancient Maya Site of Copán, Honduras: A Multi-scalar Geographic Information Systems (GIS) Analysis of Access and Visibility. Ph.D. Dissertation, University of New Mexico.
SALONIA, P.; NEGRI, A. 2000. ARKIS: an information system as a tool for analysis and representation of heterogeneous data on an architectural scale. Proc. of the WSCG2000, Plzen, Czech Republic.
STADLER, A.; KOLBE, T.H. 2007. Spatio-Semantic Coherence in the Integration of 3D City Models. In: Proceedings of the 5th International Symposium on Spatial Data Quality ISSDQ 2007, Enschede, Netherlands, ISPRS Archives.
VISINTINI, D.; SIOTTO, E.; MENEAN, E. 2009. The 3D modeling of the St. Anthony abbot church in San Daniele del Friuli: from laser scanning and photogrammetry to vrml/x3d model, Remondino F., El-Hakim S. (Eds), Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 3rd ISPRS International Workshop 3D-ARCH 2009, Trento, Italy, 38(5/W10), (on CD-ROM).
VOSSELMAN, G.; MAAS, H. 2010. Airborne and terrestrial laser scanning. CRC, Boca Raton, 318 pp. ISBN: 978-1-904445-87-6.

Web links
[NUBES] http://www.map.archi.fr/nubes/NUBES_Information_System_at_Architectural_Scale/Tempus.html
[3DCOFORM] http://www.3d-coform.eu/
[VENUS] http://www.ccrmlabs.com/
[DIGSCO] http://www.digitalsculpture.org/
[CITYGML] http://www.citygml.org
[KHRONOS] http://www.khronos.org/
[QUERYARCH3D] http://mayaarch3d.unm.edu/index.php
7.3 THE USE OF 3D MODELS FOR INTRA-SITE INVESTIGATION IN ARCHAEOLOGY Nicolo’ DELL’UNTO
7.3.1 INTRODUCTION
The exponential evolution of spatial and visual technologies has deeply impacted archaeology as a discipline. Technology has always been an important part of archaeological practices, and its use has contributed to developing methods and theories for the investigation and analysis of archaeological sites. Instruments and tools that are typically used to conduct field activities have aided archaeologists from the very beginning of this discipline. The use of these instruments and tools has been customised over the years to improve the excavation process. In other scientific disciplines, results and hypotheses can be verified multiple times, whereas, in archaeology, this practice is not feasible given the irreversible nature of the investigation process (Barker, 1993). Therefore, choosing a recording system is a fundamental part of archaeological research, as it will determine the quality and typology of the data employed during the interpretation process.

The evolution of archaeological practices has been described in previous literature (Jensen, 2012), and experimentation with new investigation methodologies has always been evident in archaeology. It is important to highlight how the approaches that have been adopted in the past by different scholars have represented an important contribution to the definition of a balance between documentation and field activities.

To date, this discussion has provided a platform through which scientific methodologies of investigation have been argued and defined.

In the last decade, archaeological practices and documentation have been strongly affected by the diffusion of digital technologies. A result of this process has been the introduction of new typologies of instruments and data that benefit current methods of investigation, as they are able to provide more complete overviews of archaeological contexts.

In particular, the diffusion of digital formats and the availability of powerful visualisation platforms, such as the Geographic Information System (GIS), have exponentially increased the ability to highlight and identify new information by placing data of different natures into a spatial relationship. Although new technologies have provided a plethora of options for recording material data, the use of these technologies during on-going archaeological investigations has always been related to the technologies' ability to fit within the logistic framework and time constraints of the field campaign. In contrast with other areas of the cultural heritage sector, the long-term use of digital technologies during archaeological field activities requires sustainable and functional workflows for the acquisition, visualisation and permanent storage of the data.

Digital technologies influence how archaeologists experience sites, as the technologies are transforming a type of research that has traditionally developed in an isolated context into a more collective experience (Zubrow, 2009).

7.3.2 3D MODELS AND FIELD PRACTICES

Excavation is the primary method of data acquisition in archaeology. During excavation, fragmented contexts are recognised and then diachronically removed and recorded with the goal of reconstructing and interpreting the evolution of the site's stratigraphy and chronological sequence (Barker, 1993).

The documentation system that is adopted in the field is typically designed according to the characteristics of the site and is based on the systematic collection of archaeological evidence. The recording process may be one of the most delicate parts of a field campaign. In fact, at the end of an investigation, interpretative drawings, photographs and site records are the only data sources available for post-excavation research.

Figure 1. This image presents an example of a 3D model acquired during an investigation campaign in Uppåkra (Summer 2011). The model has been realised using Agisoft Photoscan and visualised through MeshLab
Field documentation plays an important role during the interpretation process (i.e., excavation and post-excavation) and significantly influences the successful planning of an on-going investigation. Since personal computers were first used in archaeology, instruments such as Computer Aided Design (CAD) have been utilised to digitalise hand drawings and field records. Although these instruments are not comparable with the tools that are available today, their introduction revolutionised documentation by supporting the creation of maps that displayed varying levels of detail regarding a site in a single document.

Another important technological achievement in archaeology was the introduction of the Geographic Information System (GIS), which is "a sophisticated database management system designed for the acquisition, manipulation, visualisation, management and display of spatially referenced data" (Aldenderfer 1996). Although GIS did not substantially affect how graphic documentation was realised during excavation, nor did it influence the typology of the information documented in field records, it did support data management and spatial analysis by providing the capacity to display a complete overview of the on-going investigation activity through a temporal and spatial connection of the whole dataset, once it was imported into the system.

The use of 3D models to document archaeological investigations is an important novelty in the area of field recording. In contrast with interpretative drawings, which provide a schematic and symbolic description of a context, a three-dimensional model has the capacity to display the full qualities of a context immediately upon exposure. A 3D model provides a high-resolution geometric description of the archaeological evidence that characterises a site and is able to do so in the specific time frame of the investigation activity (Fig. 1). In contrast to a picture, 3D models can be measured and explored at different levels of detail, and, if generated during field activities, they can be used together with graphic documentation to plan new excavation strategies and monitor the field activity that has developed.

Certainly the possibility of creating measurable 3D replicas of archaeological data would be an important achievement for archaeological research, but it is important to note that 3D models cannot be used as a substitute for interpretative drawings. In fact, both methods provide descriptions of different aspects of the same context and their combined use may represent the most exhaustive visual tool for describing a site. When associated with field documentation, three-dimensional models can be used to track and reconstruct the evolution of a site's field activities, including mapping the metamorphosis of an archaeological excavation through its entire life cycle.

7.3.3 INTRA-SITE 3D DOCUMENTATION IN UPPÅKRA

The use of three-dimensional models to document archaeological contexts is an important step in the archaeological recording process. Although the advantages of employing this typology of data during investigations are obvious, it is not as simple to define strategies for its systematic employment in the field. As stated, to be successfully employed during excavation, 3D models need to be available during the time frame of the field activity and have to be visualised in a spatially related manner with all of the other types of documents collected during an investigation.

Since the spring of 2010, experiments at the archaeological site of Uppåkra, Sweden, have been conducted via a collaboration between Lund University (http://www.lunduniversity.lu.se/) and the Visual Computing Lab in Pisa (http://vcg.isti.cnr.it/). The goal of these experiments is to examine the advantages and disadvantages of using 3D models to document and interpret on-going archaeological investigations. This typology of research aims to highlight how the use of spatial technologies changes the perception of an investigation site through the direct employment of new digital approaches during excavation.

The archaeological site of Uppåkra is considered one of the most important examples of an Iron Age central place in Sweden. The site is located in Scania, 5 kilometres south of Lund, and consists of approximately 100 acres of land. To date, the archaeological investigation has revealed the existence of a settlement that was established at the beginning of the 1st century BC and existed until the end of the 11th century AD. This settlement has many different typologies of structures and finds. The site, which was discovered in 1934, has been the subject of archaeological investigations since 1996. It has proven from the very beginning to be an extraordinarily rich site. During a field campaign's initial phase (1996-2000), a metal detector survey indicated the presence of approximately 20,000 findings, which supported the continuity of human activities at this site from the Pre-Roman Iron Age until the Viking Age (Larsson, 2007).

Thus far, Uppåkra has been an ideal environment for conducting our experiments. The rich stratigraphy that characterises this site and the large variety of structures found to date have allowed for the testing of tools and instruments across a number of different types of archaeological situations. Currently, this research environment serves as an ideal place for developing and testing new research methodologies.

One of the first in situ tests was conducted during the spring of 2010 and involved using a time-of-flight laser scanner to document an Iron Age long house that was situated in the Northeast area of the site. Despite the short amount of time needed to accomplish this task, the post-processing of the raw data was extremely time-consuming and did not provide results that could be utilised before the end of the field campaign. After this experience, it was clear that this typology of instrument, although extremely accurate and resolute, was not appropriate for use in this experiment.

During the same time period, we were also testing Computer Vision techniques, which are tools that generate resolute tridimensional models from sets of unordered images. This method's primary advantage is that a simple digital camera (without any preliminary calibration) can be used to generate a 3D model of a large archaeological context. Despite the limitations of the image processing software that was used at that time, this technique was extremely flexible in the field and provided resolute 3D models within the time frame of the excavation (Dellepiane et al., 2012).

The process of 3D data construction begins with the employment of structure from motion (SfM) algorithms. This procedure involves calculating the camera parameters for each image. After this operation, the software detects and matches similar geometrical features from each pair of consecutive images and calculates their corresponding positions in space. Once the camera positions have been estimated, algorithms for dense stereo reconstruction are used to generate a detailed 3D model of the scene. During this second stage, the pre-estimated camera parameters and image pixels are used to build a dense cloud of points that are then processed into a high-resolution 3D model (Verhoeven, 2011; Scharstein & Szeliski, 2002; Seitz et al., 2006; Callieri et al., 2011) (Fig. 2). Although it is flexible and versatile, the success of this method largely depends on the skill of the operator with regard to taking a good set of pictures, the quality of the digital camera used to take the photographs and the computational characteristics of the computer used for data processing (Callieri et al., 2012).
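The models discussed here were produced with dedicated image-based modelling software; the snippet below is only an illustrative sketch of the first stage of the pipeline just described—feature detection, matching between a pair of consecutive photographs and estimation of their relative camera geometry—written with OpenCV rather than the software used in the experiments. The file names and the assumed camera matrix are hypothetical.

```python
# Illustrative two-view sketch of the feature matching / pose estimation stage.
import cv2
import numpy as np

img1 = cv2.imread("trench_001.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical photos
img2 = cv2.imread("trench_002.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and describe local features in both photographs.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Match features between the consecutive images (Lowe's ratio test).
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Estimate the relative camera geometry from the matched points.
#    K is an assumed pinhole camera matrix (focal length, principal point).
h, w = img1.shape
K = np.array([[1.2 * w, 0, w / 2], [0, 1.2 * w, h / 2], [0, 0, 1]])
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

print(f"{len(good)} matches kept")
print("Relative rotation:\n", R)
print("Relative translation direction:", t.ravel())

# A dense stereo reconstruction step (the second stage in the text) would then
# use the recovered camera poses to compute a dense point cloud and a mesh.
```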
The first experiment using this technique was performed in the summer of 2010 during an excavation on the Southeast side of the previously mentioned long house. The goals of this experiment were to test the efficiency of this technique when producing complete 3D models within the time frame of the excavation and to gain insight into the level of accuracy of the 3D models created using this technique. This experiment was performed without joining the field campaign, and the photographs were taken every day at the end of the daily excavation activities. Despite limitations in the software available at the time of the experiment, both of these goals were successfully accomplished. Every day, a complete 3D replica of the archaeological context under investigation was available for archaeologists to utilise to monitor their previous investigation activities (Fig. 3). Moreover, these 3D models were sufficiently accurate for use as geometrical references in documentation (Dellepiane et al., 2012).

Although these results were positive, whether this new typology of data could be employed during the practice of an excavation remained unclear, as the first experiment was performed without joining the excavation campaign. Therefore, we designed an experiment to evaluate whether the models created during a field campaign could influence the development of an investigation campaign, given that these models may provide new information for interpretation. The goals of this experiment were as follows: (i) to assess the sustainability of a documentation method based on the combination of 3D models and traditional data, (ii) to evaluate the use of 3D models as geometrical references for documentation, and (iii) to shed light on whether the use of different visualisation tools increased comprehension of the stratigraphic sequence of the site within the time frame of the investigation. This experiment was performed in the summer of 2011 during an excavation campaign of a Neolithic grave, which was detected in 2010 in the Northwest area of the site when a geophysical inspection of the area highlighted the clear presence of anomalies (Trinks et al., 2013).

Figure 2. This image shows the three steps performed by the software (Agisoft Photoscan) to calculate the 3D model for the rectangular area excavated in 2011 during the investigation of a Neolithic grave in Uppåkra: (a) camera position calculations, (b) geometry creation, and (c) map projection

The excavation began with an investigation of a rectangular area that crossed the circular structure found during the geophysical inspection. The archaeological contexts were documented combining Total Station measurements (which were used by the excavation team to produce graphic interpretations), field records and several sets of images, which were used to generate the 3D models. During the field campaign, circular-shaped ditches, which may have served as the border of a grave mound, were found in the north and south parts of the excavation. To verify the continuity of this circular structure, a perpendicular trench was excavated. A large pit was discovered in the middle of the structure and a stone paving was found at the bottom of this pit (Fig. 4) (Callieri et al., 2011).

This experiment assessed whether the use of 3D models could provide a better understanding of an on-going excavation during field activities. In particular, we attempted to use this new typology of data during the discussion frame, which typically occurs when there is a direct examination of the archaeological evidence.
Figure 3. This image shows the investigation area that was selected in 2010 to test the efficiency of the Computer Vision techniques during an archaeological excavation in Uppåkra. The upper part of the image presents (A) a model created during the excavation overlapped with the graphic documentation created during the investigation campaign. The lower part of the image presents (B) an example of models organised in a temporal sequence

Figure 4. This image shows two models of the excavation that were created at different times during the investigation campaign. In the first model, (a) the circular ditch is visible only in the Northwest rectangular area. The second model shows (b) how the results of the archaeological investigation allowed for the discovery of a ditch in the Southeast rectangular area

Figure 5. This image shows part of the 3D models that were created during the excavation of a grave, organised in a temporal sequence

To achieve this goal, it was necessary to develop a 3D model before the field investigation progressed. Although Computer Vision techniques create high-resolution 3D models, we decided to maintain strict control over the number of polygons used to describe each file. Therefore, we established guidelines to develop models that equated resolution and usability, meaning that the files should be easy to manage and store. In fact, the limited space resources that often characterise archaeological archives could have prevented the storage of the 3D models with the rest of the documentation, which would have resulted in the loss of these models in the future.
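A polygon budget of the sort just described can be enforced with common mesh-processing libraries. The following is only a minimal sketch, not the procedure actually used at Uppåkra: it assumes the high-resolution mesh has already been exported to a standard format, and the file names and the 200,000-triangle budget are invented for illustration.

```python
# Hypothetical sketch: reduce a high-resolution excavation mesh to an agreed
# polygon budget so the file remains easy to manage, store and archive.
import open3d as o3d

TARGET_TRIANGLES = 200_000  # assumed per-file polygon budget

mesh = o3d.io.read_triangle_mesh("context_a102_full.ply")   # hypothetical file
print(f"Input mesh: {len(mesh.triangles)} triangles")

# Quadric edge-collapse decimation preserves the overall shape while cutting
# the triangle count down to the target value.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=TARGET_TRIANGLES)
simplified.compute_vertex_normals()

o3d.io.write_triangle_mesh("context_a102_simplified.ply", simplified)
print(f"Output mesh: {len(simplified.triangles)} triangles")
```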
After establishing the guidelines for processing the models, we began a daily acquisition of the site by georeferencing the 3D models with the site grid. During the excavation, staff members primarily used the 3D files to discuss issues regarding the on-going excavation, such as the horizontal relations among different layers or the vertical progression of the site stratigraphy. Moreover, different virtual perspectives and angles were utilised to review the complex metamorphosis of the site, which would have been impossible to do in real life (Fig. 5) (Callieri et al., 2011).

In contrast with graphic documentation, during which interpretations are recorded as a result of intense discussion processes, the 3D models were primarily utilised to achieve a deeper understanding of the stratigraphic sequence. This test assessed whether exploring multiple 3D models of a site in real-time enhanced archaeologists' ability to monitor and recognise archaeological evidence. This experiment revealed that elaborate tri-dimensional models increase researchers' sense of awareness, thereby improving the quality of the final interpretation. Although we achieved most of our goals, we were unable to visualise both the traditional documentation (i.e., graphic elaborations and field records) and the 3D files in the same virtual space.

Obviously, the ideal situation would be to use a visualisation platform that was capable of displaying both graphic interpretations and 3D models within the same referenced virtual space. However, this solution was not available when the last experiment was performed.

The inability to visualise the three-dimensional models in spatial relation with the site's documentation prevented the creation of a complete visual description of the site's documentation. This highlights how a 3D replica of an archaeological context, if not connected to the site's documentation, loses a large part of its communication power.

7.3.4 3D MODELS AND GIS

Our results shed light on the use of 3D models during archaeological investigations. Critically, these results highlight the importance of finding new visual solutions for merging tri-dimensional data into excavation routines. The increasing diffusion and use of 3D models in different disciplines has encouraged the private sector to propose new solutions. Companies such as ESRI (http://www.esri.com/) have recently invested in developing GIS platforms that have the capacity to manage and visualise 3D models. This technological improvement provides an important opportunity for gaining a new understanding of the current topic. Thus, the experiments included in this paper should be developed further to gain a preliminary understanding of the advantages and disadvantages of combining 3D models with GIS.

After briefly investigating the best data workflow for achieving a sustainable process of data migration, we started a systematic import of the available information into a GIS. To obtain this result, the whole dataset, which was previously acquired in the field, was re-processed using Agisoft Photoscan (http://www.agisoft.ru/), a software package that provides an advanced image-based solution for creating three-dimensional contents from still images (Verhoeven, 2011). Despite similarities with other Computer Vision applications, the advantage of using Agisoft is that it semi-automatically creates scaled and geo-referenced texturised models, which is crucial for using 3D files in a GIS platform.
Figure 6. This image shows the integration of the 3D models into the GIS

The visualisation of the dataset was performed using ArcScene, a 3D software application developed by ESRI that displays GIS data in three dimensions. After the models were optimised, they were imported and visualised in ArcScene together with the shape files that were created during the investigation campaign (Fig. 6a). Once defined, the workflow was easy to apply and, although some optimisation was necessary (Fig. 6b), the models maintained a sufficient level of visualisation. (ArcScene only imported models smaller than 34,000 polygons.)

An important advantage of having the 3D models available in and connected to the GIS is that an attribute table can be defined for each file. Attribute tables can directly link portions of field records as metadata with the three-dimensional models, which allows for very interesting scenarios regarding the future development of field investigations.
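As a purely hypothetical illustration of such an attribute link—and of the metadata-based selection described in the next paragraph—the sketch below attaches a few field-record attributes to each model file and selects every model that shares a period, regardless of the campaign in which it was recorded. All names and values are invented.

```python
# Minimal sketch of linking field records as metadata to 3D model files and
# selecting models by a shared attribute (here, the interpreted period).
from dataclasses import dataclass

@dataclass
class ModelRecord:
    model_file: str   # path to the decimated 3D model
    context_id: str   # stratigraphic context identifier
    period: str       # interpreted chronological period
    campaign: int     # excavation year

records = [
    ModelRecord("ditch_nw.obj", "A102", "Neolithic", 2011),
    ModelRecord("pit_centre.obj", "A117", "Neolithic", 2012),
    ModelRecord("posthole_se.obj", "B034", "Iron Age", 2011),
]

def models_for_period(records, period):
    """Select every model whose field record assigns it to the given period,
    independently of the campaign in which it was excavated."""
    return [r.model_file for r in records if r.period == period]

print(models_for_period(records, "Neolithic"))  # ['ditch_nw.obj', 'pit_centre.obj']
```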
When collecting large 3D datasets, the Geographic Information System can be used to select and display the three-dimensional files that are associated with similar field records (i.e., metadata). This operation will automatically display any artificial 3D environments that were created based on archaeological contexts belonging to the same period, but that had been excavated during a different time frame.

Unfortunately, the typology of data collected during our experiments does not support performing this simulation. However, we intend to initiate new experiments focusing on these aspects.

7.3.5 CONCLUSION

This paper presents the advantages of incorporating three-dimensional models into current archaeological recording systems. The results supported the combination of 3D files with the current documentation system, as this approach would represent a more informative tool for the description of the excavation process. Additionally, the results of our experiments indicate that the appropriate integration of 3D models within the time frame of field activities exponentially increases the perception of the archaeological relations that characterise the on-going investigation by providing a 3D temporal reference of the actions performed on the site.

Abstract

In recent decades, the development of technology that aids in documenting, analysing and communicating information regarding archaeological sites has affected
the way that historical information is transmitted and perceived by the community.
Digital technologies have affected archaeology at all levels; for example, novel investigation methods have highlighted new and unknown aspects of archaeological research. The constant development of friendly user interfaces has encouraged the diffusion of and experimentation with different approaches. This article discusses how the use of three-dimensional models has changed our perception of field practices in archaeology. Specifically, this paper presents several experiments in which three-dimensional replicas of archaeological contexts were processed and used to document and monitor the short lifetime of on-going archaeological excavations. These case studies demonstrate how the use of digital technologies during field activities allows archaeological researchers to time-travel through their work to revisit contexts and material that had been previously removed during the investigation process.

References

ALDENDERFER, M. 1996. Introduction. Aldenderfer M. and Maschner H.D.G. (eds): Anthropology, space, and geographic information systems. Oxford University Press, Oxford, pp. 3-18.
BARKER, P. 1993. Techniques of Archaeological Excavation, B.T. Batsford, London.
CALLIERI, M.; DELL'UNTO, N.; DELLEPIANE, M.; SCOPIGNO, R.; SÖDERBERG, B.; LARSSON, L. 2011. Documentation and Interpretation of an Archeological Excavation: an experience with Dense Stereo Reconstruction tools. Dellepiane, M.; Nicolucci, F.; Pena Serna, S.; Rushmeier, H.; Van Gool, L. (eds), Eurographics Association, pp. 33-40.
DELLEPIANE, M.; DELL'UNTO, N.; CALLIERI, M.; LINDGREN, S.; SCOPIGNO, R. 2012. Archeological excavation monitoring using dense stereo matching techniques. Journal of Cultural Heritage, 14, Elsevier, pp. 201-210.
FISHER, R.; DAWSON-HOWE, K.; FITZGIBBON, A.; ROBERTSON, C.; TRUCCO, E. 2005. Dictionary of Computer Vision and Image Processing, John Wiley & Sons, Hoboken.
JENSEN, O. 2012. Histories of archaeological practices, The National Historical Museum Stockholm, Studies 20, Stockholm.
LARSSON, L. 2007. The iron age ritual building at Uppåkra, southern Sweden, Antiquity 81, pp. 11-25.
SZELISKI, R. 2010. Computer Vision: Algorithms and Applications, Springer-Verlag, London.
TRINKS, I.; LARSSON, L.; GABLER, M.; NAU, E.; NEUBAUER, W.; KLIMCYK, A.; SÖDERBERG, B.; THORÉN, H. 2013. Large-scale archaeological prospection of the iron age settlement site Uppåkra, Sweden, Neubauer, W.; Trinks, I.; Salisbury, R.B.; Einwögerer, C. (eds), Austrian Academy of Sciences Press, pp. 31-34.
VERHOEVEN, G. 2011. Taking computer vision aloft - archaeological three-dimensional reconstruction from aerial photographs with PhotoScan, Archaeological Prospection 18, Wiley Online Library, pp. 67-73.
Websites http://meshlab.sourceforge.net/ http://www.agisoft.ru/ http://www.esri.com/