Sayed Nadim

U-Space 2, A-dong,

670 Daewang Pangyo-ro, Bundang-gu,

Seongnam-si, Gyeonggi-do, South Korea.

Hi there!


I am Sayed Nadim.

I am originally from Bangladesh and currently work as a researcher in South Korea. Thank you for visiting my profile! Feel free to reach out if you want to collaborate or just have a chat.

What Do I Do

I am a computer vision researcher with experience in developing cutting-edge AI-based solutions, and a passionate developer who loves to explore new technologies and build innovative projects.

I have 6+ years of experience designing traditional and deep learning methods for 2D and 3D image processing, covering supervised, semi-supervised, and unsupervised learning, deep generative vision models, and a wide range of sensors (RGB pinhole/wide-angle/fisheye cameras, monochrome cameras, NIR/LWIR cameras, neuromorphic sensors, LiDAR, IMU, etc.).

What I’m Working On

  • I’m currently working on in-cabin algorithms and monocular depth estimation.

  • I’m also exploring ways to design a better convolution mechanism (this is beyond my current knowledge, but I am digging deeper).

Languages/Frameworks: I usually work with Python and PyTorch, with some C++.

Looking to Collaborate On

  • Open source projects that make a difference.
  • Innovative ideas that can solve real-world problems.

Education

  • Master’s degree from the Department of IT Convergence Engineering (2019-2021), Gachon University, under the supervision of Prof. Yong Ju Jung.
    • Applied research in depth estimation, generative vision solutions, low-level vision, and sensor calibration.
  • Undergraduate degree from the Department of Electronics and Telecommunication Engineering (2013-2017), University of Liberal Arts Bangladesh, under the supervision of Dr. S. M. Mahbubur Rahman.

How to Reach Me

  • Email: smnadimuddin(at) gmail (dot) com
  • LinkedIn: SayedNadim

Thank you for visiting my website.

News

Nov 20, 2024 [Job] I have joined Deep In Sight as a senior AI/ML researcher.
Nov 20, 2023 [Job] I have been appointed Group Lead at DeltaX.ai. I will be overseeing the Automotive Perception Group, working with several teams to build ADAS, DMS, OMS, and SCMS solutions.
Apr 14, 2023 [Conference] Our team has successfully participated in the CVPR’23 2nd Monocular Depth Estimation Challenge.
Oct 25, 2022 [Job] I have joined DeltaX.ai as a Team Lead and AI Researcher.
Sep 4, 2022 [Journal] Our paper “Multi-Scale Attention-Guided Non-Local Network for HDR Image Reconstruction” has been published in Sensors (Q1, Impact Factor 3.847).
Aug 1, 2022 [Conference] Our team has successfully participated in the ECCV’22 AIM Challenge on Reversed ISP (Track 1).
Jun 26, 2022 [Journal] Our paper “Unsupervised Deep Event Stereo for Depth Estimation” has been accepted for publication in IEEE Transactions on Circuits and Systems for Video Technology (Q1, Impact Factor 5.859).
May 10, 2022 [Journal] Our paper “SIFNet: Free-form image inpainting using color split-inpaint-fuse approach” has been published in Computer Vision and Image Understanding (Volume 221, August 2022, 103446) (Q1, Impact Factor 4.886).
Feb 3, 2021 [Conference] Our paper “Deep Event Stereo Leveraged by Event to Image Translation” has been published at the AAAI Conference on Artificial Intelligence (AAAI-21).
Jul 21, 2020 [Conference] Our team has successfully participated in the ECCV AIM Challenge on Image Extreme Inpainting.