Hey there! I'm

Arnav Sharma

Building ML models, analyzing data, and creating AI-powered applications.

01.

About Me

Hey! I'm Arnav Sharma, and I'm currently pursuing a Bachelor of Science in Computer Science at Penn State University, with a minor in Artificial Intelligence. I'm passionate about creating AI-powered solutions and love the entire process of taking an idea from concept to design to production.

My favorite part of programming is building full-stack applications and AI models that solve real-world problems. I love working with machine learning, data science, and creating web applications that make a difference. I'm particularly interested in natural language processing and building conversational AI systems.

When I'm not coding, you can usually find me trying new recipes, unwinding with a good TV show, or getting competitive over board games. I'm also a big fan of staying active and have been hitting the gym regularly for almost two years now.

Some technologies I like to work with

Python
JavaScript
Java
C++
React.js
Node.js
Express.js
Flask
TensorFlow
Pandas
NumPy
Docker
HTML
CSS
Git

02.

Experience

Software Engineering Intern

WeFIRE

January 2025 - Present
Hayward, CA

Developed comprehensive Reddit data analysis and monitoring solutions for financial sentiment tracking and market intelligence.

AI-Powered Reddit Post Analyzer
View on GitHub →

Developed a robust scraping tool in Python to collect and analyze up to 5,000 posts from targeted subreddits, automating the process of gathering user sentiment. Integrated the Google Gemini API to perform complex NLP tasks, automatically classifying each post by financial domain and question type, and generating concise summaries. Engineered a data processing pipeline using Pandas to systematically structure the raw scraped data and AI-generated insights.
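The pipeline described above (scrape → classify with an LLM → structure the rows) can be sketched in a few lines. This is a minimal, hypothetical sketch: `analyze_posts`, `fake_classify`, and the field names are illustrative stand-ins, not the project's actual API, and the real classifier calls the Google Gemini API rather than a local stub.

```python
# Hypothetical sketch of the analysis pipeline; names are illustrative.
def analyze_posts(posts, classify):
    """Structure raw Reddit posts plus AI-generated labels into flat rows.

    posts    -- iterable of dicts with 'title' and 'selftext' keys
    classify -- callable standing in for the Gemini API call; returns a
                dict with 'domain', 'question_type', and 'summary' keys
    """
    rows = []
    for post in posts:
        text = f"{post['title']}\n{post['selftext']}".strip()
        labels = classify(text)  # in production this would hit the Gemini API
        rows.append({
            "title": post["title"],
            "domain": labels["domain"],
            "question_type": labels["question_type"],
            "summary": labels["summary"],
        })
    return rows

# Stub classifier for demonstration only; the real project prompts Gemini.
def fake_classify(text):
    domain = "retirement" if "401k" in text.lower() else "general"
    return {"domain": domain, "question_type": "advice", "summary": text[:40]}

posts = [{"title": "Max out 401k?", "selftext": "Should I?"}]
print(analyze_posts(posts, fake_classify)[0]["domain"])  # retirement
```

In the real tool, the returned rows would feed straight into a Pandas DataFrame for the structuring step.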

SubReddit Monitor & Notification Tool
View on GitHub →

Engineered an automated bot to monitor Reddit posts in real-time by streaming data from specified subreddits using the PRAW library. Implemented a keyword-matching system to instantly identify relevant posts and developed a notification pipeline that sends detailed email alerts via SMTP. Features real-time data streaming, intelligent filtering algorithms, and multi-channel notification delivery.
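The keyword-matching step at the heart of the monitor can be sketched like this. The function name and keywords are assumptions for illustration; the actual bot streams submissions via PRAW and sends an SMTP email when a match fires.

```python
import re

# Illustrative sketch of the keyword-matching filter, not the bot's code.
def matches_keywords(post_title, post_body, keywords):
    """Return the subset of keywords found in a post (case-insensitive,
    whole-word). In the real bot a non-empty result triggers an email alert."""
    text = f"{post_title} {post_body}".lower()
    return [kw for kw in keywords
            if re.search(rf"\b{re.escape(kw.lower())}\b", text)]

print(matches_keywords("FIRE at 40?", "Thinking about early retirement",
                       ["FIRE", "index funds"]))  # ['FIRE']
```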

Python · Google Gemini API · PRAW · Pandas · SMTP · NLP · Data Processing · Web Scraping · Real-time Processing · Automation · Monitoring

03.

Projects

Stock Return Forecaster with Deep Learning

Architected and trained a Long Short-Term Memory (LSTM) neural network to forecast future stock returns based on historical time-series price data. Implemented a data processing workflow to structure sequential stock data, normalizing features and creating sliding windows suitable for training a recurrent neural network. Developed the deep learning model using Keras, stacking multiple LSTM and Dropout layers to effectively capture temporal dependencies and prevent overfitting, saving the final trained model as an H5 file.
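The sliding-window step that turns a flat price series into LSTM training pairs can be sketched as follows. `make_windows` is an illustrative helper, not the project's actual code; the real workflow also normalizes features before windowing.

```python
# Sketch of the sliding-window preparation for a recurrent network.
def make_windows(prices, window_size):
    """Split a series into (window, next_value) supervised training pairs."""
    X, y = [], []
    for i in range(len(prices) - window_size):
        X.append(prices[i:i + window_size])  # the past window_size values
        y.append(prices[i + window_size])    # the value the model predicts
    return X, y

X, y = make_windows([1, 2, 3, 4, 5], window_size=3)
print(X)  # [[1, 2, 3], [2, 3, 4]]
print(y)  # [4, 5]
```

Each `X[i]` becomes one input sequence for the stacked LSTM layers, and `y[i]` is its regression target.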

Python · TensorFlow · Keras · Scikit-learn · Pandas
Customer Churn Predictor

Built a classification model to predict customer churn for a telecommunications company, enabling proactive retention strategies by identifying at-risk customers. Leveraged the PyCaret AutoML library to accelerate the end-to-end machine learning workflow, from data preparation and feature engineering to model training and performance evaluation. Analyzed and visualized key features influencing churn using Plotly, delivering actionable insights into customer behavior and demographics.
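The feature-vs-churn analysis behind the Plotly visualizations amounts to computing a churn rate per category. This is a toy sketch with made-up data and an illustrative `group_churn_rate` helper, not the project's actual code.

```python
# Toy sketch of per-feature churn analysis; data and names are illustrative.
def group_churn_rate(records, feature):
    """Churn rate per distinct value of one categorical feature."""
    totals, churned = {}, {}
    for r in records:
        key = r[feature]
        totals[key] = totals.get(key, 0) + 1
        churned[key] = churned.get(key, 0) + (1 if r["churn"] else 0)
    return {k: churned[k] / totals[k] for k in totals}

data = [
    {"contract": "month-to-month", "churn": True},
    {"contract": "month-to-month", "churn": False},
    {"contract": "two-year", "churn": False},
]
print(group_churn_rate(data, "contract"))
# {'month-to-month': 0.5, 'two-year': 0.0}
```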

Python · Scikit-learn · Pandas · Plotly · PyCaret
House Price Prediction Model

Developed a predictive model to forecast house sale prices by analyzing a dataset of over 1,400 properties and 80 distinct features, delivering results with 80% accuracy. Engineered a data preprocessing pipeline to handle missing values, encode categorical data, and scale numerical features, preparing the dataset for optimal model performance. Trained and evaluated multiple regression algorithms, including Linear Regression, Ridge, Lasso, and XGBoost, identifying the most accurate model and serializing it for future deployment using Pickle.
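The three preprocessing steps named above (impute missing values, encode categoricals, scale numerics) can be sketched by hand. This is a simplified illustration; the actual pipeline used scikit-learn transformers, and `preprocess` is a hypothetical helper.

```python
# Hand-rolled sketch of imputation, one-hot encoding, and min-max scaling;
# the real pipeline used scikit-learn transformers instead.
def preprocess(values, categories=None):
    """One-hot encode a categorical column if categories is given; otherwise
    impute missing numerics with the mean, then min-max scale to [0, 1]."""
    if categories is not None:
        return [[1 if v == c else 0 for c in categories] for v in values]
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)
    filled = [mean if v is None else v for v in values]
    lo, hi = min(filled), max(filled)
    return [(v - lo) / (hi - lo) for v in filled]

print(preprocess([1.0, None, 3.0]))                   # [0.0, 0.5, 1.0]
print(preprocess(["a", "b"], categories=["a", "b"]))  # [[1, 0], [0, 1]]
```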

Python · Scikit-learn · Pandas · XGBoost · Matplotlib
PSU Menu Analyzer Website

A full-stack web application that scrapes and analyzes Penn State University dining menus. Features AI-powered nutritional analysis using Google Gemini API, real-time menu scraping, dietary preference filtering, and CSV export functionality for comprehensive nutritional data.
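The CSV-export feature mentioned above can be sketched with Python's standard `csv` module. The field names (`name`, `calories`, `vegetarian`) are assumptions for illustration, not the site's actual schema.

```python
import csv
import io

# Illustrative sketch of the CSV-export step; field names are assumptions.
def export_menu_csv(items):
    """Serialize analyzed menu items to a CSV string for download."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["name", "calories", "vegetarian"])
    writer.writeheader()
    writer.writerows(items)
    return buf.getvalue()

rows = [{"name": "Veggie Wrap", "calories": 420, "vegetarian": True}]
print(export_menu_csv(rows).splitlines()[1])  # Veggie Wrap,420,True
```

In a Flask route, the returned string would be sent with a `text/csv` content type to trigger the download.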

Python · Flask · HTML · CSS · JavaScript · Google Gemini API · Web Scraping
Chat With My Resume

An intelligent resume chatbot built with Retrieval-Augmented Generation (RAG) technology. Allows users to have natural conversations about professional background, skills, and experience through an AI-powered interface that understands context and provides detailed responses.
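The retrieval half of RAG boils down to ranking resume chunks by similarity to the question and handing the best matches to the LLM as context. This toy sketch uses bag-of-words cosine similarity purely for illustration; the real chatbot uses LangChain with ChromaDB/FAISS embedding vectors, and `retrieve` is a hypothetical helper.

```python
import math
from collections import Counter

# Toy sketch of RAG retrieval; the real project uses embedding vectors
# stored in ChromaDB/FAISS, not bag-of-words counts.
def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k chunks most similar to the query; the retrieved text
    is then injected into the LLM prompt as grounding context."""
    qv = Counter(query.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: cosine(qv, Counter(c.lower().split())),
                    reverse=True)
    return ranked[:k]

chunks = ["Python and TensorFlow projects", "Enjoys cooking and board games"]
print(retrieve("tell me about python experience", chunks))
# ['Python and TensorFlow projects']
```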

Python · LangChain · OpenAI API · ChromaDB · FAISS · Flask · RAG

04.

Chat with my Resume

Ask me anything about my experience, skills, projects, or career goals. I'm powered by AI and have access to my complete resume information.

Arnav's Resume Assistant

Ask me anything about my background


AI-Powered

Powered by Google Gemini for intelligent responses

Real-time Chat

Interactive conversation about my background

Comprehensive

Access to my complete resume and project details

05.

Contact

If you have an opportunity or are interested in collaborating, please email me. You can also connect with me on social media with questions or just to say hi! My inbox is always open, and I'll try to get back to you as soon as possible.

© Arnav Sharma 2025. All rights reserved.

This site is built with Next.js, React, TypeScript, Tailwind CSS, and Framer Motion. View the source code on GitHub.