Hao Tang

Professor
Computer Information Systems

EMAIL: htang@bmcc.cuny.edu

Office: F-930N

Office Hours:

Phone: +1 (212) 220-1479

Dr. Tang’s research focuses on augmented and accessible learning for people with special needs, especially people who are blind or visually impaired and people on the autism spectrum. His lab works on cutting-edge research in virtual reality, augmented reality, artificial intelligence, and geospatial information science, supported by NSF, DoD, DIA, and DHS.

Dr. Tang encourages students to participate in his research projects and has guided them in presenting research findings on artificial intelligence and assistive technology; many students have continued working with him after transferring to senior colleges. Some have become software developers at top tech companies, including Apple, Amazon, and Microsoft; at fintech companies such as JPMorgan Chase; and at federal agencies such as the Environmental Protection Agency and the Department of Homeland Security.

Research assistant positions with scholarships are available: prospective students (undergraduate, master’s, and doctoral) will work on cutting-edge research. Please send your resume and a brief description of your research experience and interests to htang@bmcc.cuny.edu.

Expertise

3D computer modeling, artificial intelligence, virtual and augmented reality, and mobile computer vision, with applications in security, surveillance, assistive technology, and education.

Degrees

Ph.D. CUNY Graduate Center, Computer Science

Courses Taught

CSC 331 (Data Structures)

Research and Projects

Dr. Tang’s most recent research projects include:

  1. Exploring Virtual Environments by Visually Impaired Using a Mixed Reality Cane
  2. Building an Accessible Storefront Open Source Map Using Crowdsourcing and Deep Learning
  3. Sidewalk Material Classification on Multimodal Data using Deep Learning
  4. Integrating AR and VR for Mobile Remote Collaboration
  5. Assistive Navigation Using a Mobile App
  6. Virtual Reality Mobile Physics Lab App

Dr. Tang’s funded research projects include:

  1. National Science Foundation Research Grant (#2131186), “CISE-MSI, Training a Virtual Guide Dog for Visually Impaired People to Learn Safe Routes Using Crowdsourcing Multimodal Data”, PI, 2021-2024.
  2. CUNY C.C. Research Grant – Track 2, Mentored Undergraduate Research, “Exploring Virtual Environments by Visually Impaired Using a Mixed Reality Cane Without Visual Feedback”, Single-PI, 1/2021-12/2021.
  3. National Science Foundation Research Grant, “PFI-RP: Smart and Accessible Transportation Hub for Assistive Navigation and Facility Management”, BMCC PI, in collaboration with faculty at CCNY, Rutgers University and Lighthouse Guild, 2018-2021.
  4. National Science Foundation Research Grant, “SCC-Planning: Integrative Research and Community Engagement for Smart and Accessible Transportation Hub (SAT-Hub)”, Senior Personnel, with faculty at CCNY and Rutgers University, 2017-2018.
  5. Department of Homeland Security Research Grant, “Verification of Crowd Behavior Simulation by Video Analysis”, Single-PI, 3/2016-12/2017.
  6. Faculty Development Grant, “Accurate Indoor 3D Model Generation by Integrating Architectural Floor Plan and RGBD Images”, PI, 4/2016-4/2017.
  7. PSC-CUNY Research Awards, Track B, Single-PI, 2013, 2014, 2015, 2017, 2018, 2020.
  8. CUNY C.C. Research Grant – Track 2, Mentored Undergraduate Research, “Mobile Indoor Navigation for the Blind”, Single-PI, 9/2016-9/2017.
  9. CUNY Innovations in Language Education (ILE) Grants, “Microlearning Based Mobile Game for Mandarin Learning and Assessment”, Co-PI, 2016-2017.

Publications

Research Book Chapters (2012-present):

  1. F. Hu, H. Tang, T. Alexander, Z. Zhu, “Computer Vision Techniques to Assist Visually Impaired People to Navigate in an Indoor Environment”, Computer Vision for Assistive Healthcare, Elsevier.
  2. Edgardo Molina, Wai Khoo, Hao Tang and Zhigang Zhu, “Registration of Video Images”, Theory and Applications of Image Registration, Wiley Press. http://www.wiley.com/WileyCDA/WileyTitle/productCd-1119171717.html

Peer-Reviewed Journal Papers (2012-present):

  1. J. Liu, H. Tang, W. Seiple and Z. Zhu, “Annotating Storefront Accessibility Data Using Crowdsourcing”, accepted by Journal on Technology and Persons with Disabilities, 2022.
  2. G. Olmschenk, X. Wang, H. Tang and Z. Zhu, “Impact of Labeling Schemes on Dense Crowd Counting Using Convolutional Neural Networks with Multiscale Upsampling”, International Journal of Pattern Recognition and Artificial Intelligence, Special Issue for VISAPP, Vol. 35, No. 13, October 2021.
  3. Zhigang Zhu, Jin Chen, Lei Zhang, Yaohua Chang, Tyler Franklin, Hao Tang and Arber Ruci, “iASSIST: An iPhone-Based Multimedia Information System for Indoor Assistive Navigation”, accepted by International Journal of Multimedia Data Engineering and Management, 2020.
  4. Greg Olmschenk, Hao Tang, and Zhigang Zhu, “Generalizing semi-supervised generative adversarial networks to regression using feature contrasting”, Computer Vision and Image Understanding, V. 186, September, 2019
  5. Feng Hu, Zhigang Zhu, Jeury Mejia, Hao Tang and Jianting Zhang, “Real-time indoor assistive localization with mobile omnidirectional vision and cloud GPU acceleration”, ASM EE Journal, V1 (1), Dec. 2017.
  6. Hao Tang, Tayo Amuneke, Juan Lantigua, Huang Zou, William Seiple and Zhigang Zhu, “Indoor Map Learning for the Visually Impaired”, Journal on Technology and Persons with Disabilities, V5, June 2017.
  7. Hao Tang, Norbu Tsering, Feng Hu and Zhigang Zhu, “Automatic Pre-Journey Indoor Map Generation Using AutoCAD Floor Plan”, Journal on Technology and Persons with Disabilities, V4, Oct. 2016.
  8. Feng Hu, Norbu Tsering, Hao Tang and Zhigang Zhu, “Indoor Localization for the Visually Impaired Using a 3D Sensor”, Journal on Technology and Persons with Disabilities, V4, Oct. 2016.
  9. Maggie Vincent, Hao Tang, Wai Khoo, Zhigang Zhu and Tony Ro, “Shape Discrimination Using the Tongue: Feasibility of a Visual to Tongue Stimulation Substitution Device”, Journal of Multisensory Research, 2016, 29, 773-798.
  10. Hao Tang and Zhigang Zhu, “Content-Based 3D Mosaics for Representing Videos of Dynamic Urban Scenes”, IEEE Transactions on Circuits and Systems for Video Technology, 22(2), 2012, 295-308.

Peer-Reviewed Conference Papers (2012-present):

  1. Lei Zhang, Kelvin Wu, Bin Yang, Hao Tang, and Zhigang Zhu. “Exploring Virtual Environments by Visually Impaired Using a Mixed Reality Cane Without Visual Feedback”, ISMAR 2020 – International Symposium on Mixed and Augmented Reality, November 9-13, 2020.
  2. Yaohua Chang, Jin Chen, Tyler Franklin, Lei Zhang, Arber Ruci, Hao Tang and Zhigang Zhu. “Multimodal Information Integration for Indoor Navigation Using a Smartphone”. IRI2020 -The 21st IEEE International Conference on Information Reuse and Integration for Data Science, August 11-13, 2020 (Full Regular Paper for Oral Presentation, 28% acceptance rate)
  3. Zhigang Zhu, Jie Gong, Cecilia Feeley, Huy Vo, Hao Tang, Arber Ruci, William Seiple and Zhengyi Wu. “SAT-Hub: Smart and Accessible Transportation Hub for Assistive Navigation and Facility Management”. Harvard CRCS Workshop on AI for Social Good, July 20-21, 2020
  4. Greg Olmschenk, Hao Tang, and Zhigang Zhu. “Improving Dense Crowd Counting Convolutional Neural Networks using Inverse k-Nearest Neighbor Maps and Multiscale Upsampling”. VISAPP 2020, the 15th International Conference on Computer Vision Theory and Applications.
  5. Hao Tang, Xuan Wang, Greg Olmschenk, Cecilia Feeley, Zhigang Zhu. “Assistive Navigation and Interaction with Mobile & VR Apps for People with ASD”. The 35th CSUN Assistive Technology Conference, March 9-13, 2020.
  6. Huang Zou, Hao Tang, “Remote Collaboration in a Complex Environment”, Proceedings of the International Conference on Artificial Intelligence and Computer Vision, March 2020
  7. Jeremy Venerella, Lakpa Sherpa, Tyler Franklin, Hao Tang, Zhigang Zhu. “Integrating AR and VR for Mobile Remote Collaboration”, In: Proceedings of the International Symposium on Mixed and Augmented Reality, Oct 2019.
  8. Greg Olmschenk, Hao Tang, Jin Chen and Zhigang Zhu, “Dense Crowd Counting Convolutional Neural Networks with Minimal Data using Semi-Supervised Dual-Goal Generative Adversarial Networks”, CVPR Workshop on Weakly Supervised Learning for Real-World Computer Vision Applications, Long Beach, CA, 2019.
  9. Jeremy Venerella, Lakpa Sherpa, Hao Tang, Zhigang Zhu, “A Lightweight Mobile Remote Collaboration Using Mixed Reality”, CVPR Workshop on Computer Vision for Augmented and Virtual Reality, Long Beach, CA, 2019.
  10. Greg Olmschenk, Hao Tang, and Zhigang Zhu, “Crowd Counting with Minimal Data Using Generative Adversarial Networks for Multiple Target Regression”, 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 1151-1159, Lake Tahoe, NV, 2018
  11. Jie Gong, Cecilia Feeley, Hao Tang, Greg Olmschenk, Vishnu Nair, Zhixiang Zhou, Yi Yu, Ken Yamamoto and Zhigang Zhu. “Building Smart Transportation Hubs with Internet of Things to Improve Services to People with Special Needs”, Transportation Research Board (TRB) 96th Annual Meeting, January 8-12, 2017
  12. Greg Olmschenk, Hao Tang, and Zhigang Zhu, “Pitch and Roll Camera Orientation from a Single 2D Image Using Convolutional Neural Networks”. Proceedings of the 14th Conference on Computer and Robot Vision, Edmonton, Alberta, May 17-19, 2017
  13. Feng Hu, Norbu Tsering, Hao Tang and Zhigang Zhu, “RGB-D Sensor Based Indoor Localization for the Visually Impaired”, 31st Annual International Technology and Persons with Disabilities Conference, March 21-26, 2016.
  14. Hao Tang, Norbu Tsering and Feng Hu, “Automatic Pre-Journey Indoor Map Generation Using AutoCAD Floor Plan”, 31st Annual International Technology and Persons with Disabilities Conference, March 21-26, 2016.
  15. Zhigang Zhu, Wai L. Khoo, Camille Santistevan, Yuying Gosser, Edgardo Molina, Hao Tang, Tony Ro and Yingli Tian, “EFRI-REM at CCNY: Research Experience and Mentoring for Underrepresented Groups in Cross-disciplinary Research on Assistive Technology”. The 6th IEEE Integrated STEM Education Conference (ISEC), March 6, 2016, Princeton, New Jersey (one of the 5 H. Robert Schroeder Best Paper Award Nominees among 50 oral papers).
  16. Hao Tang, Tony Ro, Zhigang Zhu. “Smart Sampling and Transducing 3D Scenes for the Visually Impaired”. IEEE International Conference on Multimedia and Expo (ICME), 2013 (oral). The paper was selected as a Best Paper Award nominee (rate: 2.4%).
  17. Hao Tang, Maggie Vincent, Tony Ro, Zhigang Zhu. “From RGB-D to Low-Resolution Tactile: Smart Sampling and Early Testing”. IEEE Workshop on Multimodal and Alternative Perception for Visually Impaired People, ICME 2013.

Honors, Awards and Affiliations

  1. Best Paper Award Nominee, the 15th International Conference on Computer Vision Theory and Applications (VISAPP), Malta, February 2020.
  2. Featured in the “CUNY-American Dream Machine” campaign in the New York Post and on the MTA, 2016-2017.
  3. DHS S&T Research Grant, U.S. Department of Homeland Security, 2016.
  4. Best Paper Award Nominee, The 6th IEEE Integrated STEM Education Conference (ISEC), Princeton, New Jersey, March 6, 2016.
  5. Summer Research Team Award, U.S. Department of Homeland Security, 2015
  6. Best Paper Award Finalist, IEEE International Conference on Multimedia and Expo (ICME), 2013.

Additional Information

Former Research Assistants:

  1. Benjamin Rosado, Cybersecurity using Virtual Reality, 2021-2022, ODNI, Now Data Analyst at DHS
  2. Erii Sugimoto, Indoor Navigation for Visually Impaired, 2016-2018, CUNY Collaborative Research Grant and BFF, Now Software Engineer at Apple Inc.
  3. Ben Adame, Crowd Counting from Video Footage, 2018, DHS S&T Research Grant, Now System Specialist at FBI
  4. Sihan Lin, Indoor Navigation for Visually Impaired, 2016-2018, MEISP, Now Software Engineer at JPMorgan Chase.
  5. Tayo Amuneke, Pre-journey Mobile App for the Visually Impaired, 2015-2017, LSAMP, Now Software Engineer at Microsoft.
  6. Sanou Wourohire Laurent, Language-based Learning Mobile App, 2015-2016, LSAMP, Now Software Engineer at JPMorgan Chase.
  7. Juan Lantigua, Pre-journey Mobile App for the Visually Impaired, 2015-2017, MEISP, Now Software Engineer at JPMorgan Chase.
  8. Norbu Tsering, Automatic Pre-Journey Indoor Map Generation Using AutoCAD Floor Plan, 2015-2017, NSF-REM, Now Software Development Engineer at Amazon Web Services
  9. Huang Zou, Remote Collaboration in a Complex Environment, 2014-2017, CRSP, Now Software Development Engineer at Velan Studios, Inc (Video Game Development)
  10. Jeury Mejia, Real-time indoor assistive localization, 2014-2016, NSF-REM, Now Software Engineer at Jopwell
  11. Jiayi An, Accessible Game for Blind People, 2015-2016, NSF-REM, Now Software Engineer at US Environmental Protection Agency
  12. Olesya Medvedeva, Machine Learning Algorithm for Speaker Recognition and Emotion Detection, 2014-2016, Transferred to Columbia University, Now Software Engineer at MLB Advanced Media, L.P.
  13. Rodny Perez, MTA Door Detection on a Mobile Phone, 2013-2014, Now Software Engineer at JPMorgan Chase.

Acknowledgments:

  • DataCamp: a learning platform for data science