Hoa Nguyen

Resume | LinkedIn | GitHub

I am pursuing an MS degree in Computer Science with a focus on NLP and Deep Learning at the Technical University of Darmstadt, Germany.

Previously, I received my Bachelor's degree in Business Information Systems from the Nuremberg Institute of Technology, Germany.

Portfolio

Computer Vision


Emoticon Generation with VAE

Open In Colab
In this project, a VAE-based generative model is implemented that can produce new emojis resembling the ones we are familiar with. Moreover, a latent-space interpolation and analysis is also carried out (see the sketch below).

Reconstructed face with VAE

Read more »
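
A minimal sketch of the interpolation step, assuming a trained VAE that exposes `encode` and `decode` methods (the method names and shapes here are placeholders, not the notebook's exact interface):

```python
import torch

def interpolate(vae, img_a, img_b, steps=8):
    """Walk a straight line between two images in VAE latent space."""
    vae.eval()
    with torch.no_grad():
        mu_a, _ = vae.encode(img_a.unsqueeze(0))  # posterior mean of image A
        mu_b, _ = vae.encode(img_b.unsqueeze(0))  # posterior mean of image B
        frames = []
        for t in torch.linspace(0.0, 1.0, steps):
            z = (1 - t) * mu_a + t * mu_b         # linear blend in latent space
            frames.append(vae.decode(z).squeeze(0))
    return frames  # decoded images morphing from img_a to img_b
```

Blending the posterior means rather than freshly sampled latents tends to give smoother in-between frames.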

Emotion Recognition with Facial Landmarks

Open In Colab
Facial expressions are concrete evidence of human feelings. With recent advances in computer vision and deep learning, it is possible to detect emotions from facial images. Facial keypoint detection serves as a basis for Emotional AI applications such as facial expression analysis and detecting dysmorphic facial signs for medical diagnosis. Detecting facial keypoints is a challenging problem, since facial features can vary greatly from one individual to another. The primary objective of this project is to predict keypoint positions in images by evaluating different deep learning models based on Convolutional Neural Networks.

Read more »
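
A minimal PyTorch sketch of the kind of CNN regressor such a project compares; the 96x96 grayscale input and layer widths are illustrative assumptions (matching the common Kaggle facial-keypoints setup), not the notebook's exact architectures:

```python
import torch.nn as nn

class KeypointNet(nn.Module):
    """Minimal CNN that regresses 15 facial keypoints, i.e. (x, y) pairs,
    from 96x96 grayscale images; sizes here are illustrative only."""

    def __init__(self, n_keypoints=15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 96 -> 48
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 48 -> 24
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 24 -> 12
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 12 * 12, 256), nn.ReLU(),
            nn.Linear(256, n_keypoints * 2),  # one (x, y) pair per keypoint
        )

    def forward(self, x):
        return self.regressor(self.features(x))

# Training reduces to plain coordinate regression, e.g.:
# loss = nn.MSELoss()(model(images), target_coordinates)
```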

Machine Learning

Telco Customer Churn

Open In Colab
In this project, we focus on a real application of machine learning in marketing. The dataset is the Telco Customer Churn dataset available on Kaggle. Our objective is to uncover interesting statistics about customer behavior and to predict churn, a measure that directly affects sales. Read more »
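
A minimal baseline sketch for this kind of churn prediction with pandas and scikit-learn; the file and column names follow the public Kaggle dataset, and the preprocessing here is an assumption that simplifies whatever the notebook actually does:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Load the Kaggle Telco churn data; TotalCharges contains blanks, so coerce.
df = pd.read_csv("WA_Fn-UseC_-Telco-Customer-Churn.csv")
df["TotalCharges"] = pd.to_numeric(df["TotalCharges"], errors="coerce")
df = df.dropna()

X = pd.get_dummies(df.drop(columns=["customerID", "Churn"]))  # one-hot categoricals
y = (df["Churn"] == "Yes").astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```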


Software Engineering

Blockchain

A prototype of a full-stack solution that generates humidity data using sensors, persists it in the Hyperledger Fabric blockchain framework, and visualizes the blockchain data using Flask. Read our project's report here

Read more »
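
A sketch of the visualization layer only, assuming the ledger query is wrapped in a helper; `query_humidity_records` is hypothetical and stands in for the actual Hyperledger Fabric client call:

```python
from flask import Flask, jsonify

app = Flask(__name__)

def query_humidity_records():
    """Placeholder for the Hyperledger Fabric query (e.g. via a Fabric
    SDK client); returns humidity readings persisted on the ledger."""
    return [{"sensor": "room-1", "humidity": 48.2,
             "timestamp": "2021-01-01T12:00:00Z"}]

@app.route("/humidity")
def humidity():
    # Serve the on-chain humidity readings to the visualization front end.
    return jsonify(query_humidity_records())

if __name__ == "__main__":
    app.run(debug=True)
```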



Web Crawling

View on GitHub

Web crawlers are indispensable when it comes to collecting massive data sets. The use case for the web crawler implemented in this thesis is to extract information from official announcements containing new building permissions. Running a web crawler on a local machine is fine for one-off tasks and small amounts of data, where the crawling process can be triggered manually. However, this is not a sustainable and reliable solution for retrieving huge amounts of data. A web crawler can be optimized by deploying it to the cloud, which reduces operational management and increases parallelism. Cloud computing also provides greater flexibility in terms of computing capacity and IP addresses. Read my thesis here

Read more »
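
A minimal local version of such a crawling loop with requests and BeautifulSoup; the start URL, link filter, and page limit are placeholder assumptions. In the cloud setup described above, each fetch could instead be fanned out to parallel workers:

```python
import time
import requests
from bs4 import BeautifulSoup

def crawl_announcements(start_url, max_pages=50, delay=1.0):
    """Breadth-first crawl that collects announcement links."""
    seen, queue, results = set(), [start_url], []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        page = requests.get(url, timeout=10)
        soup = BeautifulSoup(page.text, "html.parser")
        for link in soup.select("a[href]"):
            href = requests.compat.urljoin(url, link["href"])
            if "building-permission" in href:   # placeholder content filter
                results.append(href)
            elif href.startswith(start_url):    # stay on the target site
                queue.append(href)
        time.sleep(delay)                       # be polite to the server
    return results
```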

Natural Language Processing

Twitter Sentiment Analysis with BERT on EU-Solidarity

Open In Colab

Read more »
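
As a rough sketch of what the title describes, sentiment classification of tweets with a BERT-family model via the Hugging Face `pipeline`; the checkpoint and example tweet are illustrative assumptions, not the notebook's exact fine-tuning setup:

```python
from transformers import pipeline

# A generic English sentiment model; the project fine-tunes on its own data.
classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

tweets = ["EU member states must stand together on vaccine distribution."]
print(classifier(tweets))  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```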

Sentence similarity based on semantic nets and corpus statistics

View on GitHub

Read more »
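
A simplified sketch of the semantic-net side of this approach, scoring word pairs with WordNet path similarity via NLTK; the greedy matching and averaging below are assumptions standing in for the paper's full combination of semantic similarity, corpus statistics, and word order:

```python
import re
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def word_similarity(w1, w2):
    """Best WordNet path similarity over all sense pairs of two words."""
    scores = [s1.path_similarity(s2)
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    scores = [s for s in scores if s is not None]
    return max(scores, default=0.0)

def sentence_similarity(sent1, sent2):
    """Greedy best-match average; a simplification of the paper's method."""
    words1 = re.findall(r"[a-z]+", sent1.lower())
    words2 = re.findall(r"[a-z]+", sent2.lower())
    score = sum(max(word_similarity(w1, w2) for w2 in words2) for w1 in words1)
    return score / len(words1)

print(sentence_similarity("A dog is barking", "A hound makes noise"))
```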