About project presentation
The project presentation allows your audience (students and the instructor) to understand a research problem that you solve with deep learning. Presenting research clearly is an important skill.
Here are answers to some common questions about the project presentation:
A1: Your presentation should be around 20 minutes (15 minutes for the presentation itself plus 5 minutes for questions). I will time your presentation, and you must finish within the required time, so practice over and over.
A2: Present as you would present a research paper at a conference. A typical presentation includes: title, presenter(s), description of the problem you solve, motivation (and differences from previous work), detailed methodology, experiments, ablation study, limitations, and conclusion. You can look at some presentation examples from CVPR'19 here.
Well-designed slides will be a plus.
A3: The presentation can be given by an individual member or by several team members together.
A4: Please use the classroom computer for your presentation, because connecting your own computer may cause problems. You need to upload your slides to the classroom computer before your presentation.
A5: No.
A6: That depends. Generally, about 20 slides are appropriate for a 20-minute presentation, but it's up to you and your presentation content.
A7: A good presentation is clear, with compact slides. It should discuss the background, the motivation, and differences from previous work. Figures and text should be appropriately arranged, and the presenter(s) should be able to present the content fluently and answer questions from the audience.
Presentation schedule
The presentation schedule is as follows:
| Date | Project | Presenter(s) |
| --- | --- | --- |
| 4/14 | Deep Learning Based Facial Emotion Detection Using Convolutional Neural Networks | Syam Tej Prakash and Likitha Kamalapuram |
| 4/14 | Deep Learning-Based Character-Level Text Generation Using GRU Networks | Spoorthy Reddy Alimineti and Archana Chenigepally |
| 4/14 | Improving Small Object Detection Using YOLO-Based Deep Learning Models | Mahimanvitha Chinnamsetti and Rohitha Aradhyula |
| 4/14 | Transformers Models for Response Clarity Detection | Arun Vurukonda and Nagakiran |
| 4/16 | Robust Image Classification Under Noisy Conditions Using CNNs | Saba Siddiqi |
| 4/16 | I'm Something of a Painter Myself: Monet-Style Image Generation via GANs | Son Phan |
| 4/16 | Multimodal Stock Movement Prediction via Financial News Sentiment, Candlestick Chart Visual Features, and Price Time Series | Snehal Teja Adidam |
| 4/16 | Robust Image Recognition Using Enhanced Convolutional Neural Networks with Data Augmentation | Pavani and Sivani |
| 4/21 | Nowcasting Local Economic Activity in Texas Cities Using Nighttime Lights and Deep Temporal Models | Orhan Erdem |
| 4/21 | Learning-Based Image Super-Resolution Using Convolutional Neural Networks | Sushma K |
| 4/21 | Deep Learning for Content Moderation: Near-Duplicate Video Detection with CNNs | Etsub Feleke |
| 4/21 | Comparing CNN Architectures for Single-Image Super-Resolution | Lokesh and Harishwar |
| 4/23 | PhysicsDenoiseNet: Learning to Remove Noise and Sharpen Low-Light Photos Using a Physics-Guided Neural Network | Dibakar Barua |
| 4/23 | Unmasking Political Question Evasions: A Comparative Study for Response Clarity Detection | Aj Varadharj |
| 4/23 | HaarSSM: Haar Wavelet-Guided Selective State Space Model with Degradation-Adaptive Routing for Unified Image Restoration | Ashish Rathnakar Shetty |
| 4/23 | Traffic Scene Visual Question Answering Using Multimodal Deep Learning | Sai Grishyanth Magunta and Mallikarjun Kotha |
| 4/28 | Image Classification | Gayathri Saxena and Yash Raj Mathur |
| 4/28 | Sentiment Classification Using Fine-Tuned Transformer Models | Michael Marin |
| 4/28 | Deep Learning Based Intelligent Feedback Generation for Block-Based Programming in Gamified Environments | Jayed Mohammad Barek |
| 4/28 | Facial Emotion Recognition Using Deep Learning | Shreshna Anugu |
| 4/28 | Medical Specialty Classification from Clinical Transcriptions Using Fine-Tuned ClinicalBERT | Revanth Putta |