Slideshows by User: KlenopiPucihar — SlideShare feed (Thu, 08 Sep 2016 17:18:23 GMT)

Using a Mobile Phone as a 2D Virtual Tracing Tool: Static Peephole vs. Magic Lens
/slideshow/using-a-mobile-phone-as-a-2d-virtual-tracing-tool-static-peephole-vs-magic-lens/65832057
Traditional sketching aids rely on the physical production of templates or stencils, which is particularly problematic for larger formats. One possible solution is 2D virtual tracing: using a virtual template to create a physical sketch. This paper evaluates a mobile phone as a 2D virtual tracing tool by comparing three tracing methods: (i) a traditional tracing method using a printed template; (ii) Static Peephole (SP), a virtual tracing method in which the virtual template is manually aligned to the physical contour with drag and pinch gestures; and (iii) Magic Lens (ML), an augmented reality virtual tracing method in which the template is projected onto the physical object (e.g. paper), so navigation is possible through physical movement of the mobile device. The results show that it is possible to use mobile phones for virtual tracing; however, ML only achieved performance comparable to SP, and the traditional method remained quicker and was preferred by users. Paper: http://link.springer.com/chapter/10.1007%2F978-3-319-44805-3_22
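The SP mode described above navigates the virtual template purely through touch gestures. A minimal sketch of that interaction, assuming the template is a list of 2D contour points (the function names and data layout are illustrative, not the paper's implementation):

```python
# Hypothetical sketch of the Static Peephole (SP) interaction: the virtual
# template is a set of 2D contour points that the user aligns to the physical
# drawing surface with drag (translate) and pinch (uniform scale) gestures.

def drag(template, dx, dy):
    """Translate every template point by the drag vector (dx, dy)."""
    return [(x + dx, y + dy) for x, y in template]

def pinch(template, factor, cx, cy):
    """Uniformly scale the template about the pinch centre (cx, cy)."""
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor)
            for x, y in template]

# Example: align a unit-square template to a target twice its size,
# anchored at (10, 10).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
aligned = pinch(drag(square, 10, 10), 2.0, 10, 10)
```

In ML mode the same alignment would instead fall out of camera tracking as the phone moves, which is why no gesture handlers are needed there.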

ICMI '13: Evaluating Dual-view Perceptual Issues in Handheld Augmented Reality: Device vs. User Perspective Rendering
/slideshow/evaluating-dualview-perceptual-issues-in-handheld-augmented-reality-device-vs-user-perspective-rendering/29429028
In handheld Augmented Reality (AR), the magic-lens paradigm is typically implemented by rendering the video stream captured by the back-facing camera onto the device's screen. Unfortunately, such implementations show the real world from the device's perspective rather than the user's perspective. This dual perspective results in misaligned and incorrectly scaled imagery, a predominant cause of the dual-view problem, with the potential to distort users' spatial perception. This paper presents a user study that analyzes users' expectations, spatial perception, and ability to deal with the dual-view problem by comparing device-perspective and fixed Point-of-View (POV) user-perspective rendering. The results confirm the existence of the dual-view perceptual issue and show that the majority of participants expect user-perspective rendering irrespective of their previous AR experience. Participants also demonstrated significantly better spatial perception with, and preference for, the user-perspective view. Full Paper: http://dl.acm.org/citation.cfm?id=2522848.2522885
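The scale component of the misalignment described above can be illustrated with a simple pinhole-camera argument. This is a hedged back-of-the-envelope sketch (the distances and focal length are made-up values, not measurements from the study):

```python
# Sketch of why device-perspective rendering mis-scales imagery.
# Under a pinhole model, an object of size S at distance d projects to an
# image of size S * f / d. Device-perspective rendering shows the scene at
# the camera-to-object distance, while the user perceives the surrounding
# world at the eye-to-object distance; the ratio between the two distances
# approximates the perceived scale mismatch.

def projected_size(obj_size, focal_len, distance):
    """Pinhole projection: image size of an object at a given distance."""
    return obj_size * focal_len / distance

def dual_view_scale_error(cam_dist, eye_dist):
    """Approximate scale factor between device- and user-perspective views."""
    return eye_dist / cam_dist

# A phone held 30 cm from a surface, with the user's eye 60 cm away:
# device-perspective imagery appears about 2x larger than the surrounding
# real world, which a user-perspective rendering would correct.
error = dual_view_scale_error(30, 60)
```

Fixed-POV user-perspective rendering, as studied in the paper, removes this mismatch by re-rendering the camera imagery as if seen from the (assumed fixed) eye position.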

EICS '11: Estimating Scale Using Depth From Focus for Mobile Augmented Reality
/slideshow/eics-initialising-scale-using-dff-v2final/29428718
Whilst there has been considerable progress in augmented reality (AR) over recent years, it has principally been related to either marker-based or a priori mapped systems, which limits the opportunity for wide-scale deployment. Recent advances in marker-less systems that have no a priori information, using techniques borrowed from robotic vision, are now finding their way into mobile augmented reality and are producing exciting results. However, unlike marker-based and a priori tracking systems, these techniques are independent of scale, which is a vital component in ensuring that augmented objects are contextually sensitive to the environment they are projected upon. In this paper we address the problem of scale by adapting a Depth From Focus (DFF) technique, previously limited to high-end cameras, to a commercial mobile phone. The results clearly show that the technique is viable and adds considerably to the enhancement of mobile augmented reality. As the solution only requires an auto-focusing camera, it is also applicable to other AR platforms. Full Paper: http://dl.acm.org/citation.cfm?id=1996461.1996531
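The core DFF idea can be sketched in a few lines: sweep the auto-focus lens, score the sharpness of each frame, and take the focus distance of the sharpest frame as the depth of the surface, which then fixes the metric scale of the otherwise scale-free tracker. The sharpness metric below (variance of a discrete Laplacian) is a common choice but an assumption on my part, not necessarily the one used in the paper:

```python
# Illustrative Depth From Focus sketch, not the paper's implementation.

def sharpness(image):
    """Sharpness score: variance of a discrete Laplacian over a 2D image
    (list of equal-length rows of grey values). Sharp edges produce large
    Laplacian responses, so in-focus frames score highest."""
    h, w = len(image), len(image[0])
    lap = [image[y][x + 1] + image[y][x - 1] + image[y + 1][x] +
           image[y - 1][x] - 4 * image[y][x]
           for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)

def depth_from_focus(focus_sweep):
    """focus_sweep: list of (focus_distance_m, image) pairs captured while
    sweeping the auto-focus lens; returns the focus distance of the
    sharpest frame as the estimated depth of the surface."""
    return max(focus_sweep, key=lambda fi: sharpness(fi[1]))[0]
```

With the surface depth known, a virtual object of known physical size can be rendered at the correct on-screen scale, which is what makes the technique useful for initialising marker-less mobile AR.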
