Citation
Comparison Analysis of Close-Range Photogrammetric Workflows for 3D Modeling Applications

Material Information

Title:
Comparison Analysis of Close-Range Photogrammetric Workflows for 3D Modeling Applications
Creator:
Soni, Himank M
Place of Publication:
[Gainesville, Fla.]
Florida
Publisher:
University of Florida
Publication Date:
Language:
english
Physical Description:
1 online resource (73 p.)

Thesis/Dissertation Information

Degree:
Master's (M.S.C.M.)
Degree Grantor:
University of Florida
Degree Disciplines:
Construction Management
Committee Chair:
Issa, Raja Raymond
Committee Co-Chair:
Gheisari, Masoud
Committee Members:
Liu, Rui
Graduation Date:
5/3/2019

Subjects

Subjects / Keywords:
photogrammetry
Construction Management -- Dissertations, Academic -- UF
Genre:
bibliography ( marcgt )
theses ( marcgt )
government publication (state, provincial, territorial, dependent) ( marcgt )
born-digital ( sobekcm )
Electronic Thesis or Dissertation
Construction Management thesis, M.S.C.M

Notes

Abstract:
The purpose of this research is to understand the limitations of various reality capture workflows using parameters such as time, quality, and accuracy. The paper first identifies available technologies and hardware for reality capture and then reviews their workflows. The research includes a variety of devices for close-range photogrammetry, ranging from mobile phones and 360 cameras to DSLR cameras. These devices are used to collect data from a real-world construction site that is classified by its size and location. The data collection workflow is identified, and an onsite experiment is performed. The collected data are used to generate a point cloud, which has multiple applications in the AEC industry and is used in this study to develop a 3D model. The 3D modelling workflow for each device is analyzed for its accuracy, time, and quality. Accuracy is analyzed using a normal distribution statistical analysis of mean deviation and standard deviation values. The recorded time is used to calculate the speed of the workflow and the total time taken by the identified workflow. The models are visually analyzed for texture quality and geometry. The study results will help users apply the technology more efficiently, potentially achieving better results in less time. The study derives its conclusions and results mainly from the 3D models and is thus limited in characterizing other workflows and applications of photogrammetry. It is based on the study of one common site and specific types of equipment and their workflows, which limits the study but provides a base for further research that can include new parameters and workflows. The study concludes that the DSLR camera is the most accurate workflow, with a mean error of 2 mm, and that the smartphone is the fastest photogrammetric workflow, with a 10 mm mean error.
The 360 camera is not suitable for 3D digital photogrammetric modelling due to distortion in the images and in the models produced. Photogrammetry is a complex process whose results depend on multiple factors, the majority of which are identified in this study; because its accuracy varies for each computation, a range of possible errors for each workflow is defined in the study. The study makes a comparative analysis of the workflows and concludes that the final choice of workflow depends upon the time available, the size of the site, and the accuracy requirements of the reality capture project. ( en )
General Note:
In the series University of Florida Digital Collections.
General Note:
Includes vita.
Bibliography:
Includes bibliographical references.
Source of Description:
Description based on online resource; title from PDF title page.
Source of Description:
This bibliographic record is available under the Creative Commons CC0 public domain dedication. The University of Florida Libraries, as creator of this bibliographic record, has waived all rights to it worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
Thesis:
Thesis (M.S.C.M)--University of Florida, 2019.
Local:
Adviser: Issa, Raja Raymond.
Local:
Co-adviser: Gheisari, Masoud.
Statement of Responsibility:
by Himank M Soni.

Record Information

Source Institution:
UFRGP
Rights Management:
Applicable rights reserved.
Classification:
LD1780 2019 ( lcc )

Full Text

PAGE 1

COMPARISON ANALYSIS OF CLOSE RANGE PHOTOGRAMMETRIC WORKFLOWS FOR 3D MODELING APPLICATIONS By HIMANK SONI A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE IN CONSTRUCTION MANAGEMENT UNIVERSITY OF FLORIDA 2019

PAGE 2

© 2019 Himank Soni

PAGE 3

To my father, Manish Soni and mother, Preeti Soni

PAGE 4

ACKNOWLEDGMENTS

I would like to express my sincere gratitude to my thesis advisor Dr. Raymond Issa, Director of the Center for Advanced Construction Information Modelling at the M.E. Rinker, Sr. School of Construction Management, University of Florida. He ensured a solution to my problems and provided me with all the resources I needed to complete my thesis successfully. I would like to express my gratitude to Dr. Gheisari for providing me with the resources and guidance on 360 cameras. I would especially like to thank the CACIM and the HCTC lab at the Rinker School for making it very convenient to perform my experiment for the thesis and for the easy availability of resources such as the laser scanner and 360 cameras. I am much obliged to my committee members Dr. Masoud Gheisari and Dr. Rui Liu of the M.E. Rinker, Sr. School of Construction Management for their valuable feedback on my thesis. I would also like to thank the staff at the M.E. Rinker, Sr. School of Construction Management for the encouraging research and educational environment, which made it easy to pursue my research on the topic of interest and to seek guidance from professors with similar research interests. I am very thankful to the University of Florida for giving me a pleasant environment and an opportunity to conduct my research. Finally, I would like to thank my parents for their enormous support throughout this journey and the constant encouragement to keep going. I could not have achieved this without you both. Thank you.

PAGE 5

TABLE OF CONTENTS

Page

ACKNOWLEDGMENTS ... 4
LIST OF TABLES ... 7
LIST OF FIGURES ... 8
LIST OF ABBREVIATIONS ... 11
ABSTRACT ... 12

CHAPTER

1 INTRODUCTION ... 14
1.1 Overview ... 14
1.2 Assessment of Reality Capture Technologies ... 15
1.3 Problem Statement ... 16
1.4 Objective ... 17
1.5 Scope of Research ... 17

2 LITERATURE REVIEW ... 19
2.1 Brief History of Photogrammetry ... 19
2.2 Classification of Photogrammetry ... 20
2.3 Close Range Photogrammetry ... 21
2.3.1 Structure from Motion (SFM) Review ... 22
2.3.2 Accuracy and Precision of Photogrammetry ... 24
2.4 Terrestrial Laser Scanning as a Comparison Equipment ... 27

3 RESEARCH METHODOLOGY ... 30
3.1 Study Overview and Site Selection ... 30
3.2 Device Specifications ... 31
3.2.1 Faro Model Focus3D X 120 Laser Scanner ... 31
3.2.2 Digital Camera ... 32
3.2.3 Computing Hardware ... 37
3.3 Image Acquisition ... 37
3.3.1 Aspects of Photography ... 40
3.3.2 Camera Settings ... 42
3.3.3 Negative Image Qualities ... 42
3.4 3D Modeling ... 43
3.4.1 Laser Scan Modeling ... 43
3.4.2 3D Modeling 360° Image Data ... 43
3.4.3 3D Modeling DSLR and iPhone Image Data ... 45

PAGE 6

4 RESULTS AND ANALYSIS ... 47
4.1 Results ... 47
4.2 Analysis ... 50

5 CONCLUSIONS AND RECOMMENDATIONS ... 59
Conclusions ... 59
Recommendations for Future Research ... 62

APPENDIX

A HISTORICAL ACCURACY DATA OF PHOTOGRAMMETRIC MODELS ... 63
B IMAGE PROPERTIES ... 64
C PHOTOGRAMMETRY MODELS ... 65

LIST OF REFERENCES ... 70
BIOGRAPHICAL SKETCH ... 73

PAGE 7

LIST OF TABLES

Table ... Page

1-1 Common applications of reality capture technologies ... 16
4-1 Workflow comparison analysis ... 58

PAGE 8

LIST OF FIGURES

Figure ... Page

2-1 Photogrammetric workflow ... 23
2-2 Factors affecting accuracy ... 26
2-3 Difference between pulse time and phase modulation laser scanners ... 29
3-1 Faro Focus3D X 330 ... 32
3-2 Laser scanning workflow ... 33
3-3 Ricoh Theta S workflow ... 35
3-4 Ricoh Theta S ... 35
3-5 Nikon D3300 ... 36
3-6 iPhone XR ... 37
3-7 Shooting isolated objects ... 38
3-8 Shooting interior surroundings ... 38
3-9 Shooting facade ... 38
3-10 Radial deflection in camera position ... 39
3-11 Aligned images representing actual camera position ... 40
3-12 Depth of field ... 41
3-13 Autodesk ReCap Pro workflow ... 44
3-14 Agisoft workflow ... 45
3-15 Autodesk ReCap Photo workflow ... 46
4-1 Actual site ... 47
4-2 3D model from Nikon D3300 DSLR images ... 48
4-3 3D textured mesh from iPhone XR camera ... 48
4-4 3D model from Ricoh Theta S 360 camera ... 49
4-5 3D point cloud from laser scanner ... 49

PAGE 9

4-6 Cloud comparison in CloudCompare ... 51
4-7 Laser scan model depicted in blue compared against 3D model from smartphone camera represented in green ... 52
4-8 Laser scan model depicted in blue compared against 3D model from DSLR camera represented in green ... 52
4-9 Laser scan model depicted in blue compared against 3D model from 360 camera represented in green ... 53
4-10 Section view of laser scan compared with 360° camera, DSLR, and smartphone 3D model sections ... 53
4-11 Normal distribution of laser scan compared with 360 camera photogrammetric models ... 55
4-12 Normal distribution of deviation between laser scan point cloud and DSLR camera derived 3D model ... 55
4-13 Normal distribution of deviation between laser scan point cloud and smartphone camera derived 3D model ... 56
4-14 Normal distribution of deviation between DSLR camera and smartphone camera derived 3D models ... 56
A-1 Accuracy data of photogrammetric models comparing scale, error, and precision of different types of subjects ... 63
B-1 Image parameters from the three cameras used for the study ... 64
C-1 Small-size 3D model of a brick made using iPhone camera ... 65
C-2 Large-size 3D model of a 6-story building façade made using iPhone XR camera ... 65
C-3 3D model of a column using iPhone XR camera ... 66
C-4 3D model of a CMU block using iPhone XR camera ... 66
C-5 3D model of a brick pile made using iPhone XR camera ... 67
C-6 3D model representing a leaking concrete slab defect made using iPhone XR camera ... 67
C-7 3D model of an under-construction façade of a 3-story bungalow made using iPhone XR camera ... 68

PAGE 10

C-8 3D model of a bedroom modeled using iPhone XR camera ... 68
C-9 3D model of an interior space at the Oaks Mall, Gainesville, modeled using iPhone XR camera ... 69
C-10 Interior space 3D model of a Rinker School classroom made using iPhone XR camera ... 69

PAGE 11

LIST OF ABBREVIATIONS

AR Augmented reality
BIM Building information modeling
CAD Computer-aided design
PC Point cloud
SFM Structure from motion
TLS Terrestrial laser scanning

PAGE 12

Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science in Construction Management

COMPARISON ANALYSIS OF CLOSE RANGE PHOTOGRAMMETRIC WORKFLOWS FOR 3D MODELING APPLICATIONS

By Himank Soni

May 2019

Chair: Raymond Issa
Cochair: Masoud Gheisari
Major: Construction Management

The purpose of this research is to understand the limitations of various reality capture workflows using parameters such as time, quality, and accuracy. The paper first identifies available technologies and hardware for reality capture and then reviews their workflows. The research includes a variety of devices for close-range photogrammetry, ranging from mobile phones and 360 cameras to DSLR cameras. These devices are used to collect data from a real-world construction site that is classified by its size and location. The data collection workflow is identified, and an onsite experiment is performed. The collected data are used to generate a point cloud, which has multiple applications in the AEC industry and is used in this study to develop a 3D model. The 3D modeling workflow for each device is analyzed for its accuracy, time, and quality. Accuracy is analyzed using a normal distribution statistical analysis of mean deviation and standard deviation values. The recorded time is used to calculate the speed of the workflow and the total time taken by the identified workflow. The models are visually analyzed for texture quality and geometry. The study results will help users apply the technology more efficiently, which can

PAGE 13

potentially lead them to achieve better results in less time. The study derives its conclusions and results mainly from the 3D models and is thus limited in characterizing other workflows and applications of photogrammetry. It is based on the study of one common site and the specific types of equipment and their workflows, which limits the study but provides a base for further research that can include new parameters and workflows. The study concludes that the DSLR camera is the most accurate workflow, with a mean error of 2 mm, and that the smartphone is the fastest photogrammetric workflow, with a 10 mm mean error. The 360 camera is not suitable for 3D digital photogrammetric modeling due to distortion in the images and in the models produced. Photogrammetry is a complex process whose results depend on multiple factors, the majority of which are identified in this study; because its accuracy varies for each computation, a range of possible errors for each workflow is defined in the study. The study makes a comparison analysis of the workflows and concludes that the final choice of workflow depends upon the time available, the size of the site, and the accuracy requirements of the reality capture project.
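The normal distribution accuracy analysis mentioned above rests on two statistics: the mean deviation and the standard deviation of cloud-to-cloud distances. The sketch below illustrates the idea on a synthetic example; the 6x6 grid, the 10 mm offset, and the brute-force nearest-neighbour search are hypothetical stand-ins for the laser scan reference, a photogrammetric model, and the comparison the study performed with dedicated point cloud software (CloudCompare).

```python
import numpy as np

# Synthetic stand-in for a reference (laser scan) cloud: a 6x6 planar grid.
xs = np.linspace(0.0, 1.0, 6)
gx, gy = np.meshgrid(xs, xs)
reference = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])

# Hypothetical photogrammetric cloud: same points with a 10 mm offset (units: m).
test = reference + np.array([0.0, 0.0, 0.010])

def deviation_stats(ref, cloud):
    """Nearest-neighbour deviation of each point in `cloud` from `ref`,
    summarised by mean and standard deviation (the two statistics a
    normal-distribution accuracy analysis is built on). Brute force
    O(n*m); real tools such as CloudCompare use spatial indexes."""
    d = np.linalg.norm(cloud[:, None, :] - ref[None, :, :], axis=2)
    dev = d.min(axis=1)
    return dev.mean(), dev.std()

mean_dev, std_dev = deviation_stats(reference, test)
print(f"mean deviation = {mean_dev * 1000:.1f} mm, std = {std_dev * 1000:.2f} mm")
# prints: mean deviation = 10.0 mm, std = 0.00 mm
```

A purely systematic offset like this one yields zero spread; real photogrammetric models show both a mean shift and a spread, which is what the fitted normal distribution captures.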

PAGE 14

CHAPTER 1
INTRODUCTION

1.1 Overview

With the advent of the 21st century there has been a major change with respect to the use of technology in every industry. The construction industry is also being modernized, and the way construction projects are executed and finished has seen some major changes. Digitalization has affected every step of this process and has made construction more organized and efficient. With the development of new hardware and software, the construction industry has seen many new applications of technologies and will be seeing more in the future. One such major technology of this century is cameras, which can digitize actual reality. Photography and the way images are processed have seen major development in the recent decade and now have multiple applications in the construction industry, ranging from construction planning and progress monitoring to understanding as-built construction better. Reality capture is one such process, which makes use of static, mobile, and aerial laser scanning or photogrammetry to produce a digital 3D model of a real-world object or site. In this process millions of points are generated, known together as point clouds, that can be used to map, measure, and develop a textured high-resolution 3D model. The simplicity of the process makes it easier to acquire millions of points instead of the tens or hundreds obtained using traditional field surveying methods, hence saving a lot of time and manual labor costs. Less time is wasted in visiting jobsites, and surveyors, engineers, designers, and contractors together can leverage this technology to make faster and more appropriate decisions (Autodesk 2018).

PAGE 15

As the technology matures, reality capture technologies have become more efficient and economically viable for companies of all sizes. Reality capture technologies such as 3D laser scanning, Lidar, and photogrammetry are slowly becoming a normal part of practice in the construction industry, and their new applications are being developed at a faster pace than before. These technologies are helping to bridge the gap between other computing technologies such as building information modeling (BIM), augmented reality (AR), and virtual reality (VR), and have certainly become an important part of the construction industry with a lot of potential for further development and research in the future.

1.2 Assessment of Reality Capture Technologies

There are three major reality capture technologies prevalent in the construction industry, namely Lidar, laser scanning, and photogrammetry. These three technologies can be used to create point clouds and 3D models of the desired object or site. The key difference between these technologies is in the choice of hardware and software and in their workflows for gathering data. The choice between them depends on the type of application required and on various factors including the size of the object or site to be modeled, the available time for capturing and processing data, the quality of output required, the accuracy of the model required, natural site conditions, the available hardware, and the budget for the capture. All three technologies have some unique applications and some common applications in the construction industry. Table 1-1 shows the common types of application for reference (Gosavi 2018).

PAGE 16

Table 1-1. Common applications of reality capture technologies (laser scanning, Lidar, and photogrammetry):
- Dimension verification
- Building inspection
- Progress monitoring
- Historical building documentation
- Monitoring structural deformation
- Survey mapping
- Bridge load testing
- Computation of quantities
- Trenchless engineering
- Development of shop drawings
- Quality assessment of geotechnical works

As shown in Table 1-1, photogrammetry has the largest number of common applications among the three technologies and thus has a broader scope of research. Photogrammetry is more easily available and economical compared to its alternatives and can also be employed in multiple disciplines such as optics, physics, biology, and information sciences (Thomas, 2010). Photogrammetry, besides its many advantages, also has some limitations: its overall workflow is more time consuming compared to the others, the accuracy of the point cloud is not as precise as it can be with other technologies, the point clouds obtained are not very dense, and there are higher chances of inaccuracies attributable to lens distortion and lighting conditions during image capture. Overall, for achieving the most accurate and high-quality output, a combination of laser scanning and photogrammetry is advisable (Gosavi 2018).

1.3 Problem Statement

Photogrammetry has a large number of applications in construction, and it is being used to generate 3D models of everything from small structural and nonstructural deformities to large land areas and buildings. There are numerous types of cameras available in the

PAGE 17

market to choose from, each having a different range and limitations. There is a lack of literature when it comes to choosing a workflow that involves not only the standard DSLR setup but also newer, more accessible mobile phone cameras and their workflows for construction-related 3D modeling. The problem stems from outdated literature that focuses only on a single workflow and thus misses the variabilities that can help in choosing a better workflow. The questions are: what are the advantages and limitations of choosing a given workflow over the others? What are the range, accuracy, quality, and time requirements for each of the possible workflows? What are the possible variabilities that might improve workflow efficiencies?

1.4 Objective

The study mainly focuses on understanding the possible variabilities in the photogrammetric 3D modeling workflows for construction site objects and spaces. The specific objectives of the study are:
- Reviewing the photogrammetric 3D modeling workflow.
- 3D modeling with the different types of cameras.
- Analyzing the 3D models for quality, accuracy, and time.
- Comparing the workflows based on 3D modeling results.

1.5 Scope of Research

The scope of the research is to analyze photogrammetric workflows using multiple cameras for the accuracy and time taken to 3D model construction site spaces and objects. The research is limited to building construction spaces and objects and considers objects as small as a brick and as large as a building. The research would

PAGE 18

further analyze the possible variabilities and their effects on the generated 3D model. The scope of this research is limited to selected applications of the devices used and does not consider all possible applications. The research depends on the available hardware, software, and chosen building spaces and objects. The choice of building spaces is limited to available and permitted under-construction sites or completed building construction projects. The study is also limited to vertical building construction projects and does not consider modeling other types of infrastructure such as industrial and heavy infrastructure projects. The data and models obtained are specific to the chosen workflow and building environment, and there can be many other workflows and site environments where these workflows might give different results. The research does not include all possible workflows, i.e., all the devices, software, and photogrammetric algorithms available and all the types of field conditions where this technology is implemented in the industry. The study is derived from the selected cameras and building spaces and does not take into consideration or solve the problems of unfavorable conditions such as low light, bad weather, etc. The limitations considered in the research are a small subset, and there is scope for further developing the research by adding new factors and workflows and including inputs from professionals with construction industry experience.

PAGE 19

CHAPTER 2
LITERATURE REVIEW

2.1 Brief History of Photogrammetry

The term photogrammetry was first used by a German architect, Albrecht Meydenbauer, in the late 19th century, around 1858. The origins of photogrammetry can be traced back to 1492 optical perceptivity (Ghosh, 2005). In 1849 the Frenchman Laussedat became the first person to use terrestrial photographs for topographic mapping and came to be referred to as the father of photogrammetry (Ghosh, 2005). Ghosh summarizes some important historical developments in the field of photogrammetry. He noted that Johann Heinrich Lambert in 1759 introduced the concept of inverse central perspective and space resection of conjugate images, which can be translated mathematically to trace the location of the point of image capture. Later, in 1833, Guido Hauck was able to relate projective geometry to photogrammetry, which went on to become the basis of most of the classic analytic photogrammetric developments until the invention of modern algorithms like SFM using computational power (Maldonadoa 2016). Since its inception in the late 19th century, photogrammetry has come a long way with respect to its applications, from a mapping tool to 3D modeling, and in the results generated from the technology. A major contribution to these results came from the algorithm developed by Shimon Ullman in 1979, which is called the SFM algorithm or technique. The algorithm can calculate the location of points on an object in 3D space from multiple photos taken from different locations and generate point cloud data, which forms the basis of 3D documentation of objects (Farzad 2016).

PAGE 20

2.2 Classification of Photogrammetry

Photogrammetry is a broad science that can be classified in the following multiple ways (Luhmann 2014).

Classification 1. By camera position and distance:
- Satellite photogrammetry (h > 300 km)
- Aerial photogrammetry (h > 200 ft, h > 300 m)
- Close range photogrammetry (h < 300 m)
- Macro photogrammetry (h < 1 m)
- Mobile photogrammetry (moving camera)

Classification 2. By number of images:
- Single image photogrammetry, e.g. orthophotos (n = 1)
- Stereo photogrammetry (n = 2)
- Multi-image photogrammetry (n > 2)

Classification 3. By method of recording and processing:
- Plane table photogrammetry
- Analogue photogrammetry
- Analytical photogrammetry
- Digital photogrammetry
- Videogrammetry
- Panorama photogrammetry
- Line photogrammetry

Classification 4. By specialist area of application:
- Architectural photogrammetry
- Engineering photogrammetry
- Industrial photogrammetry
- Forensic photogrammetry
- Biostereometrics
- Multi-media photogrammetry
- Structure from motion

Classification 5. By availability of measurement results:
- Offline photogrammetry
- Online photogrammetry
- Real-time photogrammetry

PAGE 21

2.3 Close Range Photogrammetry

The American Society for Photogrammetry and Remote Sensing (ASPRS, 2008) defines photogrammetry as the skill, science, and technology aimed at providing reliable information on physical objects and the environment through recording, measuring, and interpretation of photo images, as well as images of EM radiation and other phenomena. Close range photogrammetry is similar to aerial photogrammetry, as both work on the same photogrammetric principles. The main difference is that in close range photogrammetry the object-to-camera distance is less than 1000 ft or 300 m. The output from close range photogrammetry is considered non-topographic, as it is not in the format of terrain models or topographic maps; instead it produces 3D models, measurements, and point clouds capable of producing as-built information and object elevations (Luhmann 2014). Photogrammetry is based on the triangulation technique and uses multiple overlapping images to solve the trigonometry. Close range photogrammetry can be applied to 3D model objects from small to large. The most common photogrammetric 3D modeling technique requires images with an ideal overlap of 60% between photos, which are then processed in black-box software that usually runs the Structure from Motion (SFM) and Multi-View Stereo (MVS) algorithms accompanied by other algorithms. The output obtained from these software packages can be a dense point cloud in .RCP, .RCS, .ply, etc. formats, or a 3D mesh with or without texture, e.g. in .OBJ format (Duke 2018). The advancements in technology have affected both the hardware, i.e. the cameras, and the software systems for close range photogrammetry. With the availability of cameras, photogrammetry has become more easily available and efficient (Therry,


2018). New software and apps are being developed to make photogrammetry more accurate and accessible (Lievendag 2018). Close range photogrammetry can replace laser scanning and other reality capture technologies in cases where its accuracy is appropriate, and it is a more economical alternative. The photogrammetric workflow has become simpler and faster with the advancements of the recent decade (Duke 2018). Figure 2-1 shows the general photogrammetry workflow.

2.3.1 Structure from Motion (SfM) Review

In the past few years the SfM technique has become the most commonly applied technique in the photogrammetry process. It has been proven to rapidly acquire 3D point cloud data at minimal expense, and it underlies the majority of modern reconstruction algorithms. The basics of the SfM technique are derived from traditional photogrammetry and rely on data obtained from multiple-view images. It is simpler than traditional photogrammetry because it uses automatic point identification and matching, based on a highly redundant, iterative bundle adjustment algorithm, to solve the scene geometry and calculate the position of points in the images. The number of images needed to recreate a scene depends on the size and complexity of the scene itself (Simón P. Villasenín, 2017).

SfM is a sequential pipeline that consists of two phases. The first phase is a correspondence search between images; the second is an iterative incremental reconstruction. The first phase is composed of three sequential steps (feature extraction, feature matching and geometric verification) which together produce a scene graph, or view graph, representing the relationships between geometrically verified images.


The second phase starts with an initialization step followed by three repeated reconstruction steps: image registration, triangulation and bundle adjustment. Simone Bianco (2018) gives a detailed description of the steps involved and goes further to explain the subsequent step of point cloud generation.

Figure 2-1. Photogrammetric workflow (adapted from: Gosavi, 2018). The workflow proceeds: start; reach site area; fix a path for acquiring images; perform initial settings on the camera; capture photos in sequence; import images to the computer; register photos using photogrammetry software; process data and check the 3D model; if the model is adequate, finish, otherwise repeat.
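The triangulation step in this second phase can be illustrated with a minimal two-view example. The sketch below is a simplified direct linear transform (DLT) with made-up camera parameters, not the exact algorithm of any particular SfM package: it recovers a 3D point from its projections in two images.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3D point from two views via the direct linear transform.

    P1, P2: 3x4 camera projection matrices; x1, x2: observed pixel coordinates.
    Builds the homogeneous system A @ X = 0 and solves it by SVD.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # dehomogenize

# Hypothetical calibrated cameras: identical intrinsics, 0.5 m horizontal baseline.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])

X_true = np.array([0.2, 0.1, 4.0, 1.0])      # a point 4 m in front of the cameras
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]        # project into image 1
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]        # project into image 2

X_rec = triangulate(P1, P2, x1, x2)          # recovers (0.2, 0.1, 4.0)
```

In a real pipeline the same linear system is solved for thousands of matched features, and bundle adjustment then refines all points and camera poses jointly.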


The SfM method uses multiple algorithms for each of the steps above, depending on the software used; common examples are SIFT, RANSAC and multi-view stereo (MVS) algorithms. Commercial packages such as Autodesk ReCap Photo and Agisoft Metashape Professional are full implementations of these algorithms, and since all the algorithms are grouped in a single program it is hard to identify the exact algorithm used at each stage, making them black-box software (Simón P. Villasenín, 2017).

After a successful 3D reconstruction is performed using the SfM-MVS pipeline, the results can be analyzed for quality and accuracy by comparing them to ground truth with a similar data representation, such as a terrestrial laser scanning (TLS) output. The point densities and spatial resolution achievable through this technique are higher than those of a total station or differential GPS and comparable to TLS (traditional laser scanning). One drawback of SfM-MVS is that the accuracy of the output is less precise for long-range and large sites compared to its alternatives; nevertheless, SfM-MVS has revolutionized photogrammetric 3D modeling (M.W. Smith, 2015).

2.3.2 Accuracy and Precision of Photogrammetry

Accuracy in photogrammetry can be defined as the closeness of a measured value to the actual (standard) value, while precision is the closeness of repeated measurements to one another, or in other words the finest measurement possible. This study focuses on the accuracy of the models, taking the output from the TLS as the reference. Since TLS data error ranges from 1-2%, it is considered acceptable for the accuracy calculation. Typically, a 4-8% margin of error at 95% confidence is considered acceptable for scientific studies. Models having an error of less than 5% or 1 meter are considered


acceptable for SfM modeling. Accuracy, precision and percent error all depend on the application of the model and the requirements of the user modeling it (Duke 2018).

The accuracy of models generated using photogrammetry depends on many factors, as represented in Figure 2-2. Accuracy depends directly on the object structure, size and complexity, along with the type of camera, the lighting conditions and the experience of the operator. The software used to process the images also plays a role, since different software packages produce different results with the same set of photos, and the same software can even produce different results if the same set of photos is processed twice. The difference between the structure and the model is also attributed to the scaling factor, which relates distances between points in the model to distances between points on the real structure. Many more factors may affect the accuracy of photogrammetric modeling and need further research (Rebecca K. Napolitano, 2017).

Highly detailed photos produce highly detailed models, and the detail in a model is directly proportional to its precision and accuracy. Higher-quality cameras with larger sensors and more megapixels have been proven to produce higher-quality models than lower-megapixel cameras. One of the most important factors affecting the detail in a model is the ground sample distance (GSD): the smallest element that can be distinguished by the camera sensor in the image of the object being modeled. A bigger GSD equates to a lower spatial resolution of the image, and therefore less visible detail in the model, while lower-GSD images produce better detail. The value of GSD depends on the pixel size (pixel pitch), the focal length and the


distance of the object to the camera. Equation 2-1 gives the relationship between these factors (M. Faltýnová, 2016):

GSD = (pixel pitch × distance to object) / focal length    (2-1)

Figure 2-2. Factors affecting accuracy (adapted from: Rebecca K. Napolitano, 2017). The factors are grouped under building (size, material, accessibility, complexity), lighting (time of day/year, geographic location, adjacent buildings, weather), camera (resolution, distortion, depth of field, sensor properties), operator (number of images, quality, variation of perspectives) and software (repeatability).

Sapirstein (2016) reviews other studies on the accuracy of photogrammetry on objects of different scales and sizes. He used the RMS, or standard deviation, to quantify errors, and precision is expressed as 1:K, where K is the size of the scene divided by the reported standard error. The errors were calculated by comparing either field measurements from a total station or laser scanner output of the same object. The reviewed studies show an error range from 0.07 mm for a 0.2 m scale object to 40 mm for a 35 m scale object.
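Equation 2-1 can be checked numerically. The sketch below uses example values resembling a smartphone camera (1.4 µm pixel pitch, 4 mm focal length, object 6.5 m away); these are illustrative numbers, not measurements from this study.

```python
def ground_sample_distance(pixel_pitch_m, distance_m, focal_length_m):
    """Equation 2-1: GSD = pixel pitch * object distance / focal length."""
    return pixel_pitch_m * distance_m / focal_length_m

# Example: 1.4 um pixel pitch, 4 mm focal length, object 6.5 m away.
gsd = ground_sample_distance(1.4e-6, 6.5, 4e-3)
gsd_mm = gsd * 1000          # -> 2.275 mm per pixel on the object
```

Halving the distance or doubling the focal length halves the GSD, which is why shooting distance matters as much as the camera itself.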


It can be inferred from past studies that bigger models have larger absolute error values and that cameras with better sensors and optics achieve better precision and accuracy. Photogrammetry is a complex process and its errors vary with every implementation; an absolute error value for a project cannot be defined, only an expected range of error values for a photogrammetric model of a given size can be estimated (P. Sapirstein, 2016). The current study assessed the accuracy of models from different cameras and software settings by comparing them with the TLS output.

2.4 Terrestrial Laser Scanning as Comparison Equipment

A laser scanner is a device that operates by emitting laser light (light amplification by stimulated emission of radiation) in the required direction and catching its reflection to estimate the distance to the point. A laser scanner emits light in a regular pattern to create a dense sampling of the object surface in the form of a point cloud, and is thus a non-destructive, non-contact technology for capturing the digital 3D shape of objects. The level of detail from these scanners depends on the scan and resolution settings and the scanner's distance from the object (Gosavi, 2018). The scanner works by using a central rotating prism that reflects one vertical laser line at a time, which bounces back to the scanner; this information is nowadays combined with a panoramic image to produce distance, surface texture and color detail. The output produced by a laser scanner is a dense point cloud, which can be directly compared to a photogrammetric point cloud using third-party software.

Laser scanning technology dates back to the emergence of sufficient computing power (Lachlan Broome, 2016). Initially laser scanning found application in the machine-building industry and in the industrial design process to aid computer-assisted design (CAD); as the technology advanced it found application in


many other fields such as geodesy, mapping, civil engineering, architecture, urban planning and medicine. There are two commonly used variants of the technology: the pulse (time-of-flight) system and the phase-shift system.

The pulse time-of-flight system is the most rapid form of laser scanning. It works by measuring the time taken by the emitted laser pulse to return to the scanner. The accuracy of this system is independent of the distance between the scanner and the object, so it is generally used for long-range scanning (>100 m).

Phase-shift scanners emit a continuous laser at constant intensity and measure distance by determining the phase shift between the emitted and the reflected laser signal. A shorter transmitted wavelength improves the range resolution but reduces the maximum range; to counter this loss, a process called modulation is used in which laser signals of both high and low frequency are combined to obtain good range and resolution. The increased power usage of this equipment makes it more suitable for close-range scanning, a drawback compensated by the accuracy of phase-shift scanners, which is generally found to be better than that of the other types (Lachlan Broome, 2016).

Accuracy and Influences on Terrestrial Laser Scanners: The accuracy of laser scanner output is considered survey grade due to its ±2 mm expected error, making it an easy choice as comparison equipment. Laser scanner accuracy can vary from model to model and depends on the site conditions and the method of use. Some factors affecting the accuracy of TLS are listed below (Dai, 2013).


Figure 2-3. Difference between pulse time-of-flight and phase-shift laser scanners (adapted from: CDOT, 2011)

- Reflective object properties: Shiny surfaces and surfaces with highly reflective properties are hard for the scanners to record.
- Atmospherics: Atmospheric factors like temperature, humidity and pressure affect the measurement. The error, though very small, can be countered by inputting the correct atmospheric information into the scanner.
- Interfering radiation: Laser scanners operate on a specific bandwidth, and interference from outside radiation can affect the measurements; this is why scanners have a radiation filter installed.
- Angle of incidence: If the laser hits the object at an acute angle it disperses, affecting the return signal and hence the accuracy of the image of the object.
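The two ranging principles described above reduce to simple formulas: pulse scanners convert round-trip time to distance, while phase-shift scanners convert the measured phase difference at a modulation frequency. The sketch below uses a simplified single-frequency phase model with illustrative numbers; real scanners combine several modulation frequencies as described above.

```python
import math

C = 299_792_458.0                      # speed of light, m/s

def pulse_distance(round_trip_time_s):
    """Pulse (time-of-flight): the pulse travels to the object and back."""
    return C * round_trip_time_s / 2.0

def phase_shift_distance(phase_rad, mod_freq_hz):
    """Phase shift: distance from the phase delay of a modulated continuous
    wave; unambiguous only up to half the modulation wavelength."""
    wavelength = C / mod_freq_hz
    return (phase_rad / (2 * math.pi)) * wavelength / 2.0

d1 = pulse_distance(2 * 100.0 / C)        # echo delay of a 100 m target -> 100.0 m
d2 = phase_shift_distance(math.pi, 10e6)  # half-cycle shift at 10 MHz -> ~7.49 m
```

The half-wavelength ambiguity in the phase model is exactly why the dual-frequency modulation mentioned above is needed: the low frequency resolves the coarse range, the high frequency the fine resolution.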


CHAPTER 3
RESEARCH METHODOLOGY

3.1 Study Overview and Site Selection

The purpose of the study is to identify the differences in the workflows of three photogrammetric devices (a 360° camera, a DSLR camera and a smartphone camera) for photogrammetric 3D modeling, and then to compare the resulting models for quality, time and accuracy. The 3D modeling experiment was conducted inside a renovation area in a local mall. The site was first laser scanned and then photographs for photogrammetry were taken. The object studied was an elevator room of size 15x15x13.5 ft and the area captured was approximately 100 m²; the chosen scene was small compared to the total site area of around 9000 m², to make the experiment less time consuming. The laser scan output was used as the reference for comparison with the photogrammetric outputs due to its higher accuracy. The registration and processing of the scans was done using ReCap Pro software, available from Autodesk as a free trial to university students.

The study began with a literature review of the photogrammetry process, after which multiple trial 3D modeling experiments were conducted using all three camera types for both interior and exterior spaces. The trial experiments were performed on construction-related objects and buildings chosen to be of different sizes and scales; they are described in Appendix B. The trial experimentation gave a better practical understanding of the photogrammetric process and thus formed the basis of the research plan and site selection.


The study follows a methodology represented by the following steps:

Step 1. Project planning
- Defining project goals
- Choosing suitable equipment

Step 2. Acquiring images
- Adjusting camera settings
- Recording time of capture
- Recording distance of capture

Step 3. Photogrammetric software processing
- 3D modeling
- Adjusting model scale
- Cleaning point cloud of unwanted points
- Exporting output in a common format

Step 4. Analyzing 3D models
- Comparing point clouds for accuracy
- Comparing workflows

3.2 Device Specifications

3.2.1 Faro Focus3D X 120 Laser Scanner

The scanner used for this research is manufactured by Faro (see Figure 3-1). The scanner is based on phase-shift technology and has a range of 0.6 to 120 meters. The error range of the scanner is ±2 mm and its field of view is 305° vertically / 360° horizontally. It can measure points at speeds between 122,000 and 976,000 points per second. The scanner weighs 5 kg and its dimensions are 240x200x100 mm. It uses a 20 mW Class 3R laser. The battery life of the equipment is 4.5 hours per charge and it comes with two batteries. It is rated IP54 for environmental protection and cost $28,000 at the time of purchase (Faro, 2013).


Figure 3-1. Faro Focus3D X 330 (Source: Photo courtesy of author)

3.2.2 Digital Cameras

The choice of digital camera is one of the most important aspects of the photogrammetric process. There is a wide variety of digital camera types and models on the market. This research focuses on three camera types (360° camera, DSLR camera and smartphone camera) that are based on different image capturing systems and can be used for close range photogrammetry. The DSLR camera is a professional-grade camera with fully customizable lens options and photography settings, and is known to produce top-end quality photographs.


Figure 3-2. Laser scanning workflow (adapted from: Gosavi, 2018). The workflow proceeds: start; reach project area and analyze the site; fix the scanning path; set up the tripod and scanner with battery and SD card and perform initial settings; perform scanning (start scan, review scan for any occlusions, move scanner to next station); once the intended target area is completed, import data to the computer; register the scans in a 3D modeling software and process the data; finish.


The second camera type is a 360° camera, which can capture the complete 360° surrounding view in a single photo. These cameras commonly use dual fisheye lenses and are a technology developed for the consumer camera market only in the present decade. The last camera type is the smartphone camera. Smartphone cameras have seen major advancements in the current decade and are now capable of producing high-quality images; this camera type is also the most widely available and the easiest to use of the three (Daza, 2017). The study used images captured by the specific camera models described next.

3.2.2.1 Ricoh Theta S Camera

The 360° camera used for this research is manufactured by Ricoh. It has a field of view of 360° and captures images at a resolution of 12 megapixels. The camera has a 1/2.3-inch CMOS sensor and an F2.0 lens. The images were captured at the ISO 100 setting, and it took about 1 minute on average to capture a single image. The captured images were 5376x2368 pixels, taken with an exposure time of 1/4 s, an aperture of F/2 and a focal length of 1 mm. The camera works with a smartphone app, available for both Android and Apple operating systems, to capture images, and it has onboard micro SD card storage for the pictures. The pictures can be stitched in the smartphone app itself or on a PC with software available from the Ricoh website. Figure 3-3 represents the general workflow used with this camera to obtain the equirectangular image output needed for photogrammetric modeling.


Figure 3-3. Ricoh Theta S workflow (Source: Photo courtesy of author). The workflow proceeds: start; reach site; fix the image capture path; connect the camera to the smartphone if not connected; set the camera and tripod at the location; take a picture using the phone app; repeat until the site area is covered; stitch the photos using the phone or computer; export the equirectangular photo output.

Figure 3-4. Ricoh Theta S (Source: Photo courtesy of author)


3.2.2.2 Nikon D3300 Camera

The D3300 DSLR camera manufactured by Nikon was used for this study. The camera has a 23.5x15.6 mm CMOS sensor and produces 24.2-megapixel images at a resolution of 6000x4000 pixels. The camera's dimensions are 124x98x76 mm. It cost US$650 at the time of purchase, including an 18-55 mm lens, and can capture 700 images on a single charge. The images were taken at variable ISO speeds with the shutter speed fixed at 1/60 s. A fixed 18 mm focal length with an F3.5 aperture was used, and the photos were taken at 4496x3000 pixels for this study.

Figure 3-5. Nikon D3300 (Source: Photo courtesy of author)

3.2.2.3 iPhone XR Smartphone Camera

The smartphone camera used for this study is manufactured by Apple. It is a 12-megapixel camera with an F/1.8 aperture lens equivalent to a 26 mm focal length, and the pixel pitch on this model is 1.4 µm. The pictures were captured at aperture F/1.8, exposure time 1/30 s, a fixed focal length of 4 mm and variable ISO speed settings. The phone captured pictures of size 3024x4032 pixels in .jpeg format, which were used directly in the photogrammetric software.


Figure 3-6. iPhone XR (Source: Photo courtesy of author)

3.2.3 Computing Hardware

The computer used for this study was a Lenovo Yoga 720 running a 7th-generation Intel Core i7 with 16 GB RAM, 500 GB SSD storage and an Nvidia 1050 GPU with 1 GB of memory. Photogrammetric software is recommended to run on hardware with a dedicated GPU and 8 GB or more of RAM; the better the configuration of the computer, the faster the photogrammetric processing. Since photogrammetry and laser scanning deal with large file sizes, a PC with 100 GB of free space or an external hard disk to store data is recommended.

3.3 Image Acquisition

Image acquisition is the most important stage of photogrammetric 3D modeling, as the quality of the 3D model directly depends on the quality of the images used. Image acquisition should be done systematically, considering not only the quality of the images but also factors such as the shooting distance from the object, the time taken to acquire the total data set, and the use of ground control points (GCP). The


photos should have an overlap of about 60-80% and should be captured in the patterns shown in Figures 3-7 through 3-10 for best results.

Figure 3-7. Shooting isolated objects (adapted from: Autodesk ReCap Photo guide)

Figure 3-8. Shooting interior surroundings (adapted from: Autodesk ReCap Photo guide)

Figure 3-9. Shooting a facade (adapted from: Autodesk ReCap Photo guide)
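The overlap requirement translates into a maximum spacing between consecutive camera stations. The sketch below is a rough pinhole-footprint model, using example values (an 18 mm lens on a 23.5 mm wide sensor at 6.5 m) that resemble, but are not, the study's measured geometry:

```python
def image_footprint_m(sensor_width_m, distance_m, focal_length_m):
    """Width of the object area covered by one image (pinhole model)."""
    return sensor_width_m * distance_m / focal_length_m

def max_baseline_m(footprint_m, overlap_fraction):
    """Camera spacing that still leaves the requested forward overlap."""
    return (1.0 - overlap_fraction) * footprint_m

footprint = image_footprint_m(0.0235, 6.5, 0.018)   # ~8.49 m covered per image
baseline = max_baseline_m(footprint, 0.70)          # ~2.55 m between stations
```

The same arithmetic explains why longer focal lengths or closer shooting distances demand more photos: the footprint shrinks, so the allowable station spacing shrinks with it.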


Figure 3-10. Radial deflection in camera position (Source: Photo courtesy of author)

For this study a total of 29 pictures were taken with the Ricoh Theta S 360° camera. The DSLR was used to take 73 pictures and the iPhone XR 63 pictures, at approximately 5° radial deflection steps from a distance of around 6-7 meters. The distance was chosen to fit the structure in the frame, and the angle was chosen to ensure a minimum 70% overlap. The cameras were set to auto mode, and variable ISO speeds were used along with a fixed focal length to maximize the detail and quality of the images. Figure 3-11 shows the actual alignment of the images used for 3D modeling with the smartphone and DSLR cameras.
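The 5° radial step used here implies the number of stations needed for a full orbit of the object; the 73 DSLR photos are consistent with roughly this count plus an extra closing frame. A quick sketch:

```python
import math

def orbit_station_count(step_deg):
    """Number of camera stations to cover a full 360 degree orbit
    of an object at a given angular step."""
    return math.ceil(360.0 / step_deg)

n = orbit_station_count(5.0)   # -> 72 stations for a 5 degree step
```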


Figure 3-11. Aligned images representing actual camera positions (Source: Photo courtesy of author)

3.3.1 Aspects of Photography

To acquire good-quality images there are some aspects of photography that need to be considered and applied. The aspects most important when taking pictures for photogrammetric purposes are as follows.

3.3.1.1 Exposure

The exposure of an image is determined by three camera settings: ISO speed, shutter speed and aperture. The same exposure can be achieved with different combinations of these three settings. The aperture controls the amount of light entering the camera and the shutter speed controls the duration of exposure, while the ISO speed controls how the sensor reacts to the given amount of light. A high ISO induces grain and noise in photos, damaging photo quality, while slow shutter speeds and an improper aperture can cause blurring. An optimal


combination of the three should be used, with a low ISO value of around 100-200 and higher aperture and shutter speeds (Laakso, 2016).

3.3.1.2 Depth of Field

Depth of field is the range of distances that is properly focused by the camera and can produce sharp pictures. The image is sharp at the focusing distance but gradually loses focus on objects in front of and behind it. A larger aperture gives a shallower depth of field and a smaller aperture gives a deeper depth of field. The depth of field can be controlled by adjusting the focusing distance and aperture for optimum image quality. Figure 3-12 shows the change in depth of field with a change in aperture at a fixed distance (Laakso, 2016).

Figure 3-12. Depth of field (adapted from: Laakso, 2016)

3.3.1.3 Optical Distortion

Optical distortion is caused by the curvature and optical design of the lens, and causes straight lines to appear curved in the image. Fisheye lenses


have more optical distortion than a standard lens. The photogrammetry software must correct this distortion for accurate modeling.

3.3.2 Camera Settings

- Focal length: Fixed lenses are preferred for photogrammetric modeling as they produce better results; zoom lenses should be used with the same focal length for all pictures.
- Aperture: Use manual or aperture-priority mode depending on the case; set the aperture for optimum exposure and depth of field.
- ISO sensitivity: The ISO value should not be too high; around ISO 200 is preferred.
- Shutter speed: Adjust the shutter speed to avoid blurring; compensate for low shutter speeds by using a tripod.
- File format: Capture images at the maximum size or highest quality.
- Auto-rotate function: Turn the auto-rotate function off if available.
- White balance: Set the white balance from automatic to fixed for even results.

3.3.3 Negative Image Qualities

The following image qualities can negatively affect the photogrammetric 3D modeling workflow and were avoided as much as possible in this study (Blizard Brandon, 2014).

- Losing focus: Out-of-focus images decrease image quality.
- Movement: Any movement of the subject or camera causes blur and harms photogrammetric modeling.
- Transparent and shiny objects: These cause distortion in photogrammetric models.
- Very thin objects: Very thin objects make it difficult for the software to place points and perform the triangulation process, degrading the 3D model.
- Crisscrossing objects: The software is unable to track points on an image blocked by a crisscrossed surface, especially when the crisscrossed objects are similar.


- Very plain, featureless textures: Without texture and features the software is unable to rebuild depth, for example on a plain wall.
- Very repetitive features: These can cause the software to mix points and jump to an identical nearby pattern, distorting the 3D model.
- Blinking or moving lights: Too many blinking or moving lights can deform the 3D model.
- Image compression or postprocessing: Highly compressed images can cause the software difficulties and create false features.
- Dark shadows: Dark shadows are hard for the software to track and cause noise and even holes in the model.
- Lens flares or excessive light: A lens flare affects all the points the light crosses through in the image.
- Under- or over-exposure: Images with under- and over-exposed areas lose detail and can cause light and dark patches in the model.

3.4 3D Modeling

The software used for 3D modeling the point clouds from the laser scans was Autodesk ReCap Pro, provided free to university students on a trial license.

3.4.1 Laser Scan Modeling

The laser scanner was set to 1/5th resolution and a total of 7 scans were taken around the structure. The scans were imported to the computer in .FLS format and then brought into Autodesk ReCap Pro, where they were registered and indexed. The generated point cloud was cleaned and edited to contain only relevant points and then exported in .PTS format. Figure 3-13 shows the workflow used.

3.4.2 3D Modeling 360° Image Data

Agisoft Metashape Professional software was used to 3D model the 29 360° images taken with the Ricoh Theta S. The images were imported in a new chunk and


the camera calibration was set to spherical to adjust for the dual fisheye lens distortion. The pictures were first aligned at the highest setting, then a dense point cloud was created using the ultra-high accuracy setting. The dense point cloud was then meshed at the ultra-high setting to create the 3D model, which was textured, scaled and exported in .PLY format for analysis. The 3D modeling followed the workflow shown in Figure 3-14 (Agisoft, 2018).

Figure 3-13. Autodesk ReCap Pro workflow (adapted from: Autodesk ReCap guide). The workflow proceeds: create; import; register; index; launch; view/fly; edit; measure, annotate, markup; publish and share; download/export.


Figure 3-14. Agisoft workflow (Source: Photo courtesy of author)

3.4.3 3D Modeling DSLR and iPhone Image Data

Autodesk ReCap Photo software was used to 3D model the images from the DSLR and iPhone XR cameras. The photos were uploaded to the Autodesk cloud and processed online. The completed 3D model was then downloaded for analysis, the mesh was cleaned to retain only relevant points, and the model was scaled and exported for further analysis. Figure 3-15 represents the Autodesk ReCap Photo workflow.


Figure 3-15. Autodesk ReCap Photo workflow (Source: Photo courtesy of author). The workflow proceeds: open ReCap Photo; select create 3D object; add photos; create model; download model; edit model; scale; export model.


CHAPTER 4
RESULTS AND ANALYSIS

4.1 Results

A 3D mesh model was produced for each of the three cameras, and a colored point cloud was produced from the laser scanner. Each model was edited and cleaned of excess elements and points to obtain an even geometry for further analysis. The edited models were scaled and exported, in .PTS format for the laser scanner and .PLY or .OBJ format for the photogrammetric models. The time taken by each camera to acquire the images and the time taken by the software to produce the 3D models were also recorded. Figures 4-1 through 4-5 show the obtained results.

Figure 4-1. Actual site (Source: Photo courtesy of author)


Figure 4-2. 3D model from Nikon D3300 DSLR images (Source: Photo courtesy of author)

Figure 4-3. 3D textured mesh from iPhone XR camera (Source: Photo courtesy of author)


Figure 4-4. 3D model from Ricoh Theta S 360° camera (Source: Photo courtesy of author)

Figure 4-5. 3D point cloud from laser scanner (Source: Photo courtesy of author)


4.2 Analysis

The models obtained from the software were first analyzed visually. The models obtained from the DSLR and the smartphone had clearer geometry and richer texture than the model obtained from the Ricoh Theta S 360° camera. The model from the iPhone XR camera had a brighter texture than the DSLR-derived model, due to lower shutter speeds causing higher exposure. The model developed using the Ricoh Theta contained more elements than the other camera models, but they lacked precision, creating holes and incomplete meshes for some elements; the model also had uneven texture and geometry compared to the others.

To compare the time of each workflow, the total workflow time was divided into two parameters: the time taken by the equipment to gather data and images in the field, and the time taken by the workflow to process the captured data or images using computing software. The rate of capture, denoted R, is a measure of the speed at which photogrammetry can be performed on site. The value of R is defined by Equation 4-1:

R = area captured / field capture time    (4-1)

The rate of capture was calculated as a parameter to compare the reality capture technologies based on the speed at which they can capture data in the field; the calculated values are shown in Table 4-1.

To compare the accuracy and precision of the models, CloudCompare software was used (S. Altmana, 2017). The point cloud model from the laser scanner was used as the reference against which to compare the accuracy of the photogrammetric models


developed from the digital cameras. The two models were first aligned to each other, and then a statistical test was run that compared the point cloud of the photogrammetric model with the laser scanner model to calculate the mean difference and the standard deviation. The software uses a scalar-field color code which can be used to display the intensities of points on the model and to recognize the deviations on the compared model, as shown in Figure 4-6. The areas in green represent negative deviation and the yellow areas represent deviations closer to zero, while red and blue are the extreme ends. This scalar field can be used as a tool for visual analysis of accuracy.

Figure 4-6. Cloud comparison in CloudCompare (Source: Photo courtesy of author)
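At its core, the cloud-to-cloud comparison performed here is a nearest-neighbor distance computation. The sketch below shows the idea brute-force on synthetic data; CloudCompare itself uses accelerated octree searches rather than this O(n·m) loop.

```python
import numpy as np

def cloud_to_cloud_distances(compared, reference):
    """For each point in `compared`, distance to its nearest neighbor in
    `reference`. Brute force; real tools use spatial search structures."""
    diff = compared[:, None, :] - reference[None, :, :]   # shape (n, m, 3)
    dists = np.linalg.norm(diff, axis=2)
    return dists.min(axis=1)

# Synthetic test: a flat reference grid and a copy shifted 2 mm along z.
xs, ys = np.meshgrid(np.arange(0, 0.1, 0.01), np.arange(0, 0.1, 0.01))
reference = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
compared = reference + np.array([0.0, 0.0, 0.002])

d = cloud_to_cloud_distances(compared, reference)
mean_err = d.mean()            # recovers the imposed 2 mm offset
```

Coloring each point by its distance value is exactly what produces the scalar-field visualizations in Figures 4-6 through 4-10.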


Figure 4-7. Laser scan model depicted in blue compared against the 3D model from the smartphone camera, represented in green (Source: Photo courtesy of author)

Figure 4-8. Laser scan model depicted in blue compared against the 3D model from the DSLR camera, represented in green (Source: Photo courtesy of author)


Figure 4-9. Laser scan model depicted in blue compared against the 3D model from the 360° camera, represented in green (Source: Photo courtesy of author)

Figure 4-10. Section view of the laser scan compared with the 360° camera, DSLR and smartphone 3D model sections (Source: Photo courtesy of author)


Statistical Analysis: CloudCompare provides mesh-to-cloud (for TLS versus photogrammetric models) and mesh-to-mesh (for two 3D meshes) distance calculation tools. The distance tool can be used to calculate the RMS and other error values such as the mean error and the standard deviation. This study focuses on the mean error and the standard deviation and studies them statistically using the statistical tools available in CloudCompare, which plot the error values on a Gauss curve (a folded normal distribution curve). The mean error and the standard deviation are given by:

Xm = (1/n) Σ Xi    (4-2)

s = sqrt( (1/(n-1)) Σ (Xi - Xm)² )    (4-3)

where:
n: number of all distances
Xi: distance from point i to its nearest neighbor
Xm: mean distance

Figures 4-11 through 4-13 represent the normal distribution of deviations between the photogrammetric models and the laser scanner, and Figure 4-14 compares the two meshes from the iPhone XR and Nikon D3300 DSLR statistically.
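Equations 4-2 and 4-3 are the ordinary sample mean and sample standard deviation, and can be implemented directly (the distance values below are made up for illustration):

```python
import math

def mean_error(distances):
    """Equation 4-2: mean of the nearest-neighbor distances."""
    return sum(distances) / len(distances)

def std_deviation(distances):
    """Equation 4-3: sample standard deviation about the mean distance."""
    xm = mean_error(distances)
    return math.sqrt(sum((x - xm) ** 2 for x in distances) / (len(distances) - 1))

# Example with made-up distance values (mm):
d = [2, 4, 4, 4, 5, 5, 7, 9]
xm = mean_error(d)        # -> 5.0
s = std_deviation(d)      # -> ~2.138
```

Because the deviations are approximately normal, about 65% of them fall within one standard deviation of the mean and about 95% within two, which is the basis of the acceptance check applied to the models below.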


Figure 4-11. Normal distribution of the laser scan compared with the 360º camera photogrammetric model (Source: Photo courtesy of author) Figure 4-12. Normal distribution of deviation between the laser scan point cloud and the DSLR camera derived 3D model (Source: Photo courtesy of author)


Figure 4-13. Normal distribution of deviation between the laser scan point cloud and the smartphone camera derived 3D model (Source: Photo courtesy of author) Figure 4-14. Normal distribution of deviation between the DSLR camera and smartphone camera derived 3D models (Source: Photo courtesy of author)


The visual scalar-field analysis of the compared models is represented in Figures 4-6 through 4-10. The figures show the photogrammetric models against the laser-scanned point cloud model and can be used to analyze the areas of discrepancy. It can be observed that, for the DSLR and smartphone models, the areas inside the elevator shaft and the corner edges were prone to higher error values than the exterior walls of the elevator, while the 360º model had the most errors across its entire surface and even failed to capture the interior of the elevator shaft. The statistical analysis graph plots the count of points studied against the deviation values and displays the percentage of points covered, or the confidence level, when the mouse hovers over the graph, as shown for the 360º camera in Figure 4-11. It should be noted that 95% of errors lie within twice the standard deviation. Since twice the standard deviation for the 360º camera model, 252.9 mm, exceeds 5% of the largest dimension of the model (5% of 5 m, or 250 mm), the accuracy of the 360º camera model cannot be validated. The model from the iPhone XR camera had 63,000 faces on its mesh compared to 43,000 faces on the DSLR mesh, making it the more detailed model. The statistical analysis of the smartphone-derived model is shown in Figure 4-13: the mean error for the smartphone model in this specific case is 10.8 mm and the standard deviation is 14.54 mm, which implies that the majority (roughly 68%) of error values lie within one standard deviation (14.5 mm) and that 95% of values lie within the 50 mm (5% of object size) limit, validating the model's accuracy and classifying it as fit for research studies. The smartphone workflow was also found to be the fastest, with the capability of capturing 15


pictures per minute on site, and it took the least time to process the data compared to the other photogrammetric workflows. The model generated using the Nikon D3300 DSLR camera was the second fastest workflow for capturing data on site but showed slower performance while processing the data, as seen in Table 4-1. The camera achieved a mean deviation of 2.1 mm with a standard deviation of 14.85 mm, making it the most evenly distributed error model, or the most accurate model. Ninety-five percent of its error values lie under twice the standard deviation, which is less than 5% of the object size, thus confirming its validity against the accuracy standard. Figure 4-14 shows the statistical comparison between the two photogrammetric models that passed the accuracy check; the mean error difference is 6.8 mm and the standard deviation 19.3 mm. Since the standard deviations of the two models are very close and the mean error for the DSLR model is lower, it is the most accurate photogrammetric model here, with a lower probability of high-value errors.

Table 4-1. Workflow comparison analysis

Sr. no.  Parameter                       TLS      360 Cam     DSLR        Smartphone
1        No. of images or scans          7        29          73          63
2        Image resolution                -        5376x2688   4496x3000   3024x4032
3        Focal length (mm)               -        1           18          4
4        Aperture                        -        f/2         f/3.5       f/1.8
5        Exposure time (s)               -        1/4         1/60        1/30
6        Rate of capture (R, img/min)    0.170    1.03        9.125       15.75
7        Time to acquire data (min)      41       28          8           4
8        Time to process data (min)      55       125         126         87
9        Total time (h)                  1.6      2.55        2.23        1.53
10       Accuracy: mean distance (mm)    -        15.75       2.13        10.8
11       Accuracy: std. deviation (mm)   -        126.45      14.85       14.54
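The acceptance rule applied above — 95% of errors are taken to fall within twice the standard deviation, and a model passes when that bound stays under 5% of its largest dimension — can be written as a small check. The values below are the study's own; the function name is illustrative:

```python
def passes_accuracy_check(std_dev_mm, largest_dim_mm):
    """True when twice the standard deviation (covering ~95% of the
    errors) is below 5% of the model's largest dimension."""
    return 2.0 * std_dev_mm < 0.05 * largest_dim_mm

LARGEST_DIM_MM = 5000  # the study's model is about 5 m across

print(passes_accuracy_check(126.45, LARGEST_DIM_MM))  # False: 360 camera, 2*sigma = 252.9 mm > 250 mm
print(passes_accuracy_check(14.85, LARGEST_DIM_MM))   # True: DSLR, 2*sigma = 29.7 mm
print(passes_accuracy_check(14.54, LARGEST_DIM_MM))   # True: smartphone, 2*sigma = 29.08 mm
```

This reproduces the study's verdicts: the 360º camera model fails the check, while the DSLR and smartphone models pass it.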


CHAPTER 5
CONCLUSIONS AND RECOMMENDATIONS

Conclusions

The reality capture technologies were compared for the quality, time, and accuracy of the models created using a photogrammetric workflow for three different types of digital cameras. A model was created using the images from a Nikon D3300 DSLR camera, an iPhone XR smartphone, and a Ricoh Theta S 360º camera. Agisoft Metashape Professional was used to build the 3D model from the 29 360º images taken with the Ricoh Theta S. Autodesk ReCap Photo was used to build the 3D models from the images captured with the iPhone XR and the Nikon D3300. Autodesk ReCap Pro was used to model the 3D point cloud from the FARO Focus terrestrial laser scanner. The time for each defined workflow was recorded. A visual analysis was performed, followed by a cloud comparison and statistical analysis in the CloudCompare software.

The results from the DSLR and iPhone XR cameras were visually rich in texture and geometry compared to the 360º camera photogrammetric model. The model from the iPhone XR camera appeared brighter, richer in texture, and more detailed than the DSLR model, owing to better sensor exposure settings and image processing on the iPhone XR. The result from the 360º camera was uneven and visually of poor quality.

The rate-of-capture comparison for the three workflows found the iPhone XR to be the fastest workflow for acquiring data on the job site. The rate of capture of the DSLR camera was lower than that of the smartphone camera but greater than that of the 360º camera by approximately a factor of nine (see Table 4-1). The terrestrial laser scanner is the slowest technology for capturing scans (or images) in the field.
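The rate-of-capture and total-time figures quoted from Table 4-1 follow from two simple ratios; the sketch below reproduces them from the recorded image counts and times (function names are illustrative; the smartphone total comes out as 1.52 h versus the table's 1.53 h, a rounding difference):

```python
def rate_of_capture(n_images, acquire_minutes):
    """Images (or scans) captured per minute on site."""
    return n_images / acquire_minutes

def total_workflow_hours(acquire_minutes, process_minutes):
    """Acquisition plus processing time, in hours."""
    return (acquire_minutes + process_minutes) / 60.0

# (images or scans, acquisition min, processing min) for each workflow
workflows = {
    "TLS":        (7, 41, 55),
    "360 camera": (29, 28, 125),
    "DSLR":       (73, 8, 126),
    "smartphone": (63, 4, 87),
}
for name, (n, acq, proc) in workflows.items():
    print(f"{name}: R = {rate_of_capture(n, acq):.3f} img/min, "
          f"total = {total_workflow_hours(acq, proc):.2f} h")
```

Running this confirms the ordering reported in the conclusions: the smartphone is the fastest on site (15.75 images/min) and has the lowest total time, and the DSLR's rate (9.125 images/min) is roughly nine times the 360º camera's (1.036 images/min).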


The total workflow time for the 360º camera photogrammetric 3D modeling workflow is higher than that of the other workflows because of its high data acquisition and processing times. The total workflow time in hours is lowest for smartphone camera photogrammetric modeling, followed by the terrestrial laser scanning workflow as the second fastest. The 360º camera photogrammetry workflow takes the longest of all the workflows compared here.

The accuracy comparison results from the CloudCompare software found the models developed using the 360º camera photogrammetric workflow to have the maximum standard deviation, 126 mm, and a mean (average) deviation of 15 mm, making it the least accurate model of the three. The models developed using the DSLR photogrammetric workflow had the highest accuracy, with a mean deviation of 2.0 mm compared to a mean deviation of 10.8 mm for the smartphone photogrammetric model. Nevertheless, the DSLR-derived 3D model has a slightly higher standard deviation, 14.85 mm, compared to the 14.54 mm standard deviation of the smartphone camera photogrammetric model. Ninety-five percent of the error values are less than twice the standard deviation: 252 mm for the 360º camera workflow, 29.7 mm for the DSLR camera workflow, and 29.08 mm for the smartphone camera workflow.

The study concludes that smartphone photogrammetry is the fastest photogrammetry workflow, with accuracy comparable, or very close, to that of the DSLR-derived photogrammetric model. The texture developed with the smartphone workflow is rich and geometrically well defined. The photogrammetric workflow using the 360º camera is the slowest and most geometrically inaccurate workflow. Its texture quality is also the lowest; however, it can capture the most data, or


points, compared to the other digital cameras. The photogrammetric workflow using the DSLR camera is the most accurate and the most versatile workflow for creating 3D models of different textures and geometries. The workflow takes more time to capture data on site and to create the 3D model but delivers accurate results. The DSLR camera workflow in this study used half its resolution to match the resolution of the smartphone images and was still found to be more accurate, owing to its bigger sensor and processing system.

Photogrammetric 3D modeling is a complex process involving multiple factors that can affect the quality and accuracy of the models. The scope and size of a project play a major role in selecting the correct workflow for photogrammetric 3D modeling. The study found the smartphone camera workflow to be the least time-consuming overall and the DSLR camera workflow to be the most accurate. The smartphone camera can be used to 3D model objects of sizes around, or smaller than, the one used in this study with accuracy comparable to the TLS and DSLR camera. The DSLR camera is capable of modeling objects from small sizes up to sizes greater than the one used in the study, with accuracy greater than that of any other camera type. For large-scale projects, or for projects with unfavorable photogrammetric or lighting conditions, the terrestrial laser scanner can be used to develop a 3D model. Photogrammetry modeling can be improved by increasing the number of pictures and the image quality used for modeling. This study successfully compared the different photogrammetric workflows and identified the parameters that can affect photogrammetry models and their workflows.


Recommendations for Future Research

The recommendation for future research from this study is to analyze the workflows under other site conditions and on multiple objects. Future studies should also consider objects of different sizes and from other types of construction projects, such as heavy civil and infrastructure projects. Furthermore, future studies should include comparison analyses of different hardware or of different camera models of the same type. A comparison analysis of other photogrammetry software packages could also be performed. Relationships between parameters such as number of images and accuracy, f-number and accuracy, or number of images and speed of capture could be examined to further develop this study. The study used the mean and standard deviation calculated against laser scanner data for accuracy; however, future studies could use the actual dimensions of the object or site and could also use other accuracy parameters to further study the accuracy of photogrammetry workflows.


APPENDIX A
HISTORICAL ACCURACY DATA OF PHOTOGRAMMETRIC MODELS

Figure A-1. Accuracy data of photogrammetric models comparing scale, error, and precision for different types of subjects (adapted from P. Sapirstein, 2016)


APPENDIX B
IMAGE PROPERTIES

Figure B-1. Image parameters from the three cameras used for the study (Source: Photo courtesy of author)


APPENDIX C
PHOTOGRAMMETRY MODELS

Different sizes and types of photogrammetry models produced with fewer than 100 images, created with the iPhone XR camera as reference for the study. Figure C-1. Small-size 3D model of a brick made using the iPhone XR camera (Source: Photo courtesy of author) Figure C-2. Large-size 3D model of a six-story building façade made using the iPhone XR camera (Source: Photo courtesy of author)


Figure C-3. 3D model of a column made using the iPhone XR camera (Source: Photo courtesy of author) Figure C-4. 3D model of a CMU block made using the iPhone XR camera (Source: Photo courtesy of author)


Figure C-5. 3D model of a brick pile made using the iPhone XR camera (Source: Photo courtesy of author) Figure C-6. 3D model representing a leaking concrete slab defect, made using the iPhone XR camera (Source: Photo courtesy of author)


Figure C-7. 3D model of the under-construction façade of a three-story bungalow, made using the iPhone XR camera (Source: Photo courtesy of author) Figure C-8. 3D model of a bedroom modeled using the iPhone XR camera (Source: Photo courtesy of author)


Figure C-9. 3D model of an interior space at the Oaks Mall, Gainesville, modeled using the iPhone XR camera (Source: Photo courtesy of author) Figure C-10. Interior-space 3D model of a Rinker School classroom, made using the iPhone XR camera (Source: Photo courtesy of author)


LIST OF REFERENCES

Agisoft. (2018). Agisoft Metashape User Manual: Professional Edition, Version 1.5. Agisoft LLC.

ASPRS. (2008). Guidelines for procurement of professional aerial imagery, photogrammetry, LiDAR and related remote-sensor-based geospatial mapping services. Photogrammetric Engineering and Remote Sensing, Vol. 74, pp. 1286-1295.

Autodesk. (2017). Autodesk ReCap: Best practices and how to get started guide. Autodesk.

Autodesk. (2018). How reality capture is changing the design and construction industry.

Barnes, A., Simon, K., & Wiewel, A. (2014). From photos to models: Strategies for using digital photogrammetry in your project. University of Arkansas.

Blizard, B. (2014, February 19). The art of photogrammetry: How to take your photos. Retrieved from https://www.tested.com/art/makers/460142-art-photogrammetry-how-take-your-photos/

Broome, L. (2016). Comparison between terrestrial close-range photogrammetry and terrestrial laser scanning. University of Southern Queensland.

Daza, L. (2017, June 9). What are the different types of cameras used for photography? Retrieved from Adorama Learning Center: https://www.adorama.com/alc/what-are-the-different-types-of-cameras-used-for-photography

Duke, C. (2018). Evaluation of photogrammetry at different scales. Auburn University.

Faltýnová, M. E. (2016). Building façade documentation using laser scanning and photogrammetry and data implementation into BIM.

Farzad, M. R. (2016). Interior close-range digital photogrammetry for an operational building. Texas A&M University.

Fei Dai, A. R. (2013). Comparison of image-based and time-of-flight-based technologies for three-dimensional reconstruction of infrastructure. American Society of Civil Engineers.

Ghosh, S. (2005). Fundamentals of Computational Photogrammetry. New Delhi: Concept Publishing Company.

Gosavi, R. N. (2018). Assessment of three building technologies workflows: laser scanning, LIDAR scanning and photogrammetry and their usage. Gainesville: University of Florida.


Gustavo O. Maldonado, S. R. (2016). Discrepancy analysis between close-range photogrammetry and terrestrial LIDAR. Department of Civil Engineering and Construction Management, Georgia Southern University.

Laakso, A. (2016). From reality to 3D model: post-production of a photogrammetry-based model. Lahti University of Applied Sciences.

Lievendag, N. (2018). 3 free mobile 3D scanning apps. Retrieved from 3DScanExpert.com: https://3dscanexpert.com/3-free-3d-scanning-apps/

Maghiar, Marcel, M. G. (2016). Accuracy comparison of 3D structural models produced via close-range photogrammetry and laser scanning. Construction Research Congress 2016, pp. 784-789.

Nikolov, I., & Madsen, C. (2016). Benchmarking close-range structure-from-motion 3D reconstruction software under varying capturing conditions. Aalborg: Architecture, Design and Media Technology, Aalborg University.

Close-range photogrammetry: a 3D printing case study. Novi Sad: University of Novi Sad, Faculty of Technical Sciences, Department of Graphic Engineering and Design.

Rebecca, N. B. (2017). Minimizing the adverse effects of bias and low repeatability precision in photogrammetry software through statistical analysis. Journal of Cultural Heritage.

S. Altman, W. X. (2017). Evaluation of low-cost terrestrial photogrammetry for 3D reconstruction of complex buildings. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume IV-2/W4, 200-208.

Sapirstein, P. (2016). Accurate measurement with photogrammetry at large sites. Journal of Archaeological Science.

Simón Peña-Villasenín, M. G. D. S. (2017). 3D modeling of historic façades using SfM photogrammetry: Metric documentation of different building types of a historic center. International Journal of Architectural Heritage: Conservation, Analysis, and Restoration, 872-874.

Simone Bianco, G. C. (2018). Evaluating the performance of structure-from-motion pipelines. Journal of Imaging.

Smith, C. Q. (2015). Structure-from-motion photogrammetry in physical geography. Sage Journals, 248-269.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLI-B3, 215-217.


Therry, M. (2018, June). What camera should you use for photogrammetry. Retrieved from Medium.com: https://medium.com/@EightyLevel/what-camera-should-you-use-for-photogrammetry-3a67864bd4eb

Thomas Luhmann, S. R. (2014). Close-Range Photogrammetry and 3D Imaging. Göttingen, Germany: Deutsche Nationalbibliothek.

Thomas, L. (2010). Close-range photogrammetry for industrial applications. ISPRS Journal of Photogrammetry and Remote Sensing, Volume 65, Issue 6, pp. 558-569.


BIOGRAPHICAL SKETCH

Himank Soni is a researcher interested in the AEC sector and in new technologies for construction-related applications. He received his M.S. in Construction Management from the University of Florida. He holds a bachelor's degree in the field of civil engineering from the Manipal Institute of Technology, Manipal, India. Himank aims to work in the construction industry as a construction management professional in the long run.