SlideShare feed for slideshows by user VladimirKulyukin (last updated Fri, 02 Dec 2016 20:51:06 GMT)

Toward Sustainable Electronic Beehive Monitoring: Algorithms for Omnidirectional Bee Counting from Images and Harmonic Analysis of Buzzing Signals
Posted: Fri, 02 Dec 2016 20:51:06 GMT by Vladimir Kulyukin
Link: /slideshow/toward-sustainable-electronic-beehive-monitoring-algorithms-for-omnidirectional-bee-counting-from-images-and-harmonic-analysis-of-buzzing-signals/69774290

Digitizing Buzzing Signals into A440 Piano Note Sequences and Estimating Forager Traffic Levels from Images in Solar-Powered, Electronic Beehive Monitoring
Posted: Fri, 02 Dec 2016 20:47:43 GMT by Vladimir Kulyukin
Link: /slideshow/digitizing-buzzing-signals-into-a440-piano-note-sequences-and-estimating-forager-traffic-levels-from-images-in-solarpowered-electronic-beehive-monitoring-69774206/69774206

Generalized Hamming Distance
Posted: Mon, 09 Nov 2015 21:17:24 GMT by Vladimir Kulyukin
Link: /slideshow/generalized-hamming-distance/54927440
Many problems in information retrieval and related fields depend on a reliable measure of the distance or similarity between objects that, most frequently, are represented as vectors. This paper considers vectors of bits. Such data structures implement entities as diverse as bitmaps that indicate the occurrences of terms and bitstrings indicating the presence of edges in images. For such applications, a popular distance measure is the Hamming distance. The value of the Hamming distance for information retrieval applications is limited by the fact that it counts only exact matches, whereas in information retrieval, corresponding bits that are close by can still be considered almost identical. We define a "Generalized Hamming distance" that extends the Hamming concept to give partial credit for near misses, and suggest a dynamic programming algorithm that permits it to be computed efficiently. We envision many uses for such a measure. In this paper we define and prove some basic properties of the "Generalized Hamming distance" and illustrate its use in the area of object recognition. We evaluate our implementation in a series of experiments, using autonomous robots to test the measure's effectiveness in relating similar bitstrings. A hedged dynamic-programming sketch of such a distance follows below.

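The abstract mentions a dynamic programming algorithm for computing the generalized Hamming distance. The sketch below illustrates one common edit-distance-style formulation of that idea, not necessarily the paper's exact cost model: unmatched 1-bits pay insertion/deletion costs, and a 1-bit aligned with a nearby 1-bit pays a cost proportional to the shift, which is how near misses earn partial credit. The parameter names and default costs are illustrative assumptions.

```python
# Minimal DP sketch of a generalized Hamming distance between bit strings.
# Unmatched 1-bits cost ins_cost/del_cost; a 1-bit aligned with a shifted
# 1-bit costs shift_cost per position of displacement (partial credit).

def generalized_hamming(x: str, y: str,
                        ins_cost: float = 1.0,
                        del_cost: float = 1.0,
                        shift_cost: float = 0.5) -> float:
    """x, y are strings of '0'/'1'. Returns the distance under the given costs."""
    a = [i for i, bit in enumerate(x) if bit == '1']   # positions of 1s in x
    b = [j for j, bit in enumerate(y) if bit == '1']   # positions of 1s in y
    n, m = len(a), len(b)
    # D[i][j] = distance between the first i ones of x and the first j ones of y.
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * del_cost
    for j in range(1, m + 1):
        D[0][j] = j * ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(
                D[i - 1][j] + del_cost,                                   # drop a 1 of x
                D[i][j - 1] + ins_cost,                                   # add a 1 of y
                D[i - 1][j - 1] + shift_cost * abs(a[i - 1] - b[j - 1]),  # align with shift
            )
    return D[n][m]

if __name__ == "__main__":
    print(generalized_hamming("0110100", "0110100"))  # identical -> 0.0
    # One 1-bit shifted by a single position -> 0.5 here, versus plain Hamming cost 2.
    print(generalized_hamming("0110100", "0110010"))  # 0.5
```
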
Adapting Measures of Clumping Strength to Assess Term-Term Similarity
Posted: Mon, 09 Nov 2015 21:02:15 GMT by Vladimir Kulyukin
Link: /slideshow/adapting-measures-of-clumping-strength-to-assess-termterm-similarity/54926917
Automated information retrieval relies heavily on statistical regularities that emerge as terms are deposited to produce text. This paper examines statistical patterns expected of a pair of terms that are semantically related to each other. Guided by a conceptualization of the text generation process, we derive measures of how tightly two terms are semantically associated. Our main objective is to probe whether such measures yield reasonable results. Specifically, we examine how the tendency of a content-bearing term to clump, as quantified by previously developed measures of term clumping, is influenced by the presence of other terms. This approach allows us to present a toolkit from which a range of measures can be constructed. As an illustration, one of several suggested measures is evaluated on a large text corpus built from an on-line encyclopedia. An illustrative sketch of a clumping-style association measure follows below.

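The abstract does not spell out the specific measures it derives, so the sketch below is purely illustrative rather than the paper's toolkit: it captures the general intuition that a term's tendency to clump can be conditioned on another term by comparing the first term's density inside windows around the second term's occurrences with its density in the text overall. The window size and function names are assumptions.

```python
# Illustrative clumping-style association (an assumption, not the paper's
# measure): ratio of term A's density near occurrences of term B to A's
# global density. A ratio above 1 suggests that A clumps around B.

from typing import List

def positions(tokens: List[str], term: str) -> List[int]:
    """Indices at which `term` occurs in the token list."""
    return [i for i, t in enumerate(tokens) if t == term]

def clumping_association(tokens: List[str], a: str, b: str, window: int = 10) -> float:
    """Density of A inside +/-window of B's occurrences, divided by A's global density."""
    pos_a, pos_b = positions(tokens, a), positions(tokens, b)
    if not pos_a or not pos_b:
        return 0.0
    covered = set()                       # token positions within `window` of some B
    for j in pos_b:
        covered.update(range(max(0, j - window), min(len(tokens), j + window + 1)))
    near = sum(1 for i in pos_a if i in covered)
    local_density = near / len(covered)
    global_density = len(pos_a) / len(tokens)
    return local_density / global_density

if __name__ == "__main__":
    text = ("the hive monitor records buzzing near the hive while the monitor "
            "logs buzzing signals and the weather station logs temperature").split()
    print(clumping_association(text, "buzzing", "hive", window=3))
```
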
A Cloud-Based Infrastructure for Caloric Intake Estimation from Pre-Meal Videos and Post-Meal Plate Waste Pictures
Posted: Wed, 20 May 2015 22:57:05 GMT by Vladimir Kulyukin
Link: /slideshow/him3412-2015-cameraready20may2015/48408952

Exploring Finite State Automata with Junun Robots: A Case Study in Computability Theory
Posted: Fri, 15 May 2015 22:29:28 GMT by Vladimir Kulyukin
Link: /VladimirKulyukin/fecs-2015-fec2350cameraready

Image Blur Detection with 2D Haar Wavelet Transform and Its Effect on Skewed Barcode Scanning
Posted: Wed, 13 May 2015 23:56:51 GMT by Vladimir Kulyukin
Link: /slideshow/image-blur-detection-with-2d-haar-wavelet-transform-and-its-effect-on-skewed-barcode-scanning/48123734

Text Skew Angle Detection in Vision-Based Scanning of Nutrition Labels
Posted: Wed, 13 May 2015 23:54:51 GMT by Vladimir Kulyukin
Link: /slideshow/ipcv-2015-ipc2241textskewangledetectionfinaldraft13may2015/48123691

Vision-Based Localization and Scanning of 1D UPC and EAN Barcodes with Relaxed Pitch, Roll, and Yaw Camera Alignment Constraints
Posted: Mon, 22 Sep 2014 11:53:08 GMT by Vladimir Kulyukin
Link: /slideshow/ijip-892-published/39384546
V. Kulyukin and T. Zaman. "Vision-Based Localization and Scanning of 1D UPC and EAN Barcodes with Relaxed Pitch, Roll, and Yaw Camera Alignment Constraints." International Journal of Image Processing (IJIP), Volume 8, Issue 5, 2014, pp. 355-383.

Effective Nutrition Label Use on Smartphones
Posted: Tue, 20 May 2014 17:47:24 GMT by Vladimir Kulyukin
Link: /slideshow/icomp-2014-icm2211nl05mar14/34924778

An Algorithm for Mobile Vision-Based Localization of Skewed Nutrition Labels that Maximizes Specificity
Posted: Tue, 20 May 2014 17:45:56 GMT by Vladimir Kulyukin
Link: /slideshow/the-2014-international-conference-on-image-processing-computer-vision-pattern-recognition-ipcv-2014-an-algorithm-for-mobile-visionbased-localization-of-skewed-nutrition-labels-that-maximizes-specificity/34924750

An Algorithm for In-Place Vision-Based Skewed 1D Barcode Scanning in the Cloud
Posted: Tue, 20 May 2014 17:44:14 GMT by Vladimir Kulyukin
Link: /slideshow/ipcv-2014-ipc2691finaldraft15mar14/34924702

Narrative Map Augmentation with Automated Landmark Extraction and Path Inference
Posted: Tue, 08 Apr 2014 13:13:24 GMT by Vladimir Kulyukin
Link: /slideshow/nar-mapaugmentation-icchp2014/33284431

Skip Trie Matching: A Greedy Algorithm for Real-Time OCR Error Correction on Smartphones
Posted: Mon, 26 Aug 2013 11:18:13 GMT by Vladimir Kulyukin
Link: /VladimirKulyukin/skip-trie-matching-a-greedy

Vision-Based Localization & Text Chunking of Nutrition Fact Tables on Android Smartphones
Posted: Sat, 20 Jul 2013 20:39:43 GMT by Vladimir Kulyukin
Link: /slideshow/visionbased-localization/24459820

Skip Trie Matching for Real Time OCR Output Error Correction on Android Smartphones
Posted: Wed, 15 May 2013 13:23:42 GMT by Vladimir Kulyukin
Link: /slideshow/skip-trie-matching/21216303

Vision-Based Localization & Text Chunking of Nutrition Fact Tables on Android Smartphones
Posted: Thu, 09 May 2013 18:56:40 GMT by Vladimir Kulyukin
Link: /slideshow/visionbased-localization-text-chunking-of-nutrition-fact-tables-on-android-smartphones/20889518

Toward Blind Travel Support through Verbal Route Directions: A Path Inference Algorithm for Inferring New Route Descriptions from Existing Route Descriptions
Posted: Fri, 10 Aug 2012 14:46:00 GMT by Vladimir Kulyukin
Link: /slideshow/toward-blind-travel-support-through-verbal-route-directions-a-path-inference-algorithm-for-inferring-new-route-descriptions-from-existing-route-descriptions/13938715
The work presented in this article continues our investigation of assisted navigation solutions in which the main emphasis is placed not on sensor sets or sensor fusion algorithms but on the ability of travelers to interpret and contextualize verbal route directions en route. This work contributes to our investigation of a research hypothesis that we have formulated and partially validated in our previous studies: if a route is verbally described in a sufficient and appropriate amount of detail, independent visually impaired (VI) travelers can use their O&M and problem-solving skills to successfully follow the route without any wearable sensors or sensors embedded in the environment. In this investigation, we temporarily put aside the issue of how VI and blind travelers successfully interpret route directions en route and tackle the question of how those route directions can be created, generated, and maintained by online communities. In particular, we focus on the automation of path inference and present an algorithm that may be used as part of the background computation of VGI sites to find new paths in route directions previously written by online community members, generate new route descriptions from them, and post them for subsequent community editing. A hedged sketch of this path-inference idea follows below.

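The abstract describes inferring new routes from existing community-written route directions. The sketch below is a hedged illustration of that general idea, not the paper's algorithm: each existing description is treated as a sequence of named landmarks, consecutive landmarks become directed edges that keep their original instruction text, and a new description is inferred by searching the resulting graph and stitching the instructions together. The data format, function names, and example landmarks are assumptions.

```python
# Hedged illustration of path inference over community-written route
# directions: landmark sequences -> directed graph -> BFS for a new route
# whose instructions are stitched from the contributing descriptions.

from collections import defaultdict, deque
from typing import Dict, List, Optional, Tuple

def build_graph(routes: List[List[Tuple[str, str]]]) -> Dict[str, List[Tuple[str, str]]]:
    """routes: each route is a list of (landmark, instruction-to-next-landmark)."""
    graph: Dict[str, List[Tuple[str, str]]] = defaultdict(list)
    for route in routes:
        for (src, instr), (dst, _) in zip(route, route[1:]):
            graph[src].append((dst, instr))
    return graph

def infer_route(graph: Dict[str, List[Tuple[str, str]]],
                start: str, goal: str) -> Optional[List[str]]:
    """BFS for a shortest landmark path; returns the stitched instructions, or None."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, instrs = queue.popleft()
        if node == goal:
            return instrs
        for nxt, instr in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, instrs + [instr]))
    return None

if __name__ == "__main__":
    # Two community-written routes that share the landmark "elevator".
    r1 = [("entrance", "walk straight to the elevator"),
          ("elevator", "take the elevator to floor 3"),
          ("floor 3 lobby", "")]
    r2 = [("elevator", "turn left and follow the hallway to the cafeteria"),
          ("cafeteria", "")]
    g = build_graph([r1, r2])
    # Inferred route that appears in neither description as a whole:
    print(infer_route(g, "entrance", "cafeteria"))
```
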
Eye-Free Barcode Detection on Smartphones with Niblack's Binarization and Support Vector Machines
Posted: Tue, 24 Apr 2012 18:07:45 GMT by Vladimir Kulyukin
Link: /slideshow/eyefree-barcode-detection-on-smartphones-with-niblacks-binarization-and-support-vector-machines/12676638

Eyesight Sharing in Blind Grocery Shopping: Remote P2P Caregiving through Cloud Computing
Posted: Tue, 24 Apr 2012 18:03:11 GMT by Vladimir Kulyukin
Link: /slideshow/eyesight-sharing-in-blind-grocery-shopping-remote-p2p-caregiving-through-cloud-computing/12676603

Author profile: www.linkedin.com/pub/vladimir-kulyukin/23/2a2/150