Enhancing Learning with AI

Sololearn 2022 || Learning Experience Design || UX/UI Design

About Sololearn: Sololearn is an eLearning platform (web and mobile) that teaches programming languages. Sololearn offers a portfolio of 25 courses and has 21 million registered users.
Background: Recognizing the need to enhance the learning experience, we partnered with Prosus, one of our investors, who had been developing an AI model that could significantly benefit our product.

Role: As the product designer for Sololearn, I was responsible for implementing an AI feature into our coding exercises. I focused on ensuring a smooth and seamless integration that would benefit our users in their learning journey. By leveraging AI-generated feedback, the team provided valuable assistance and support, empowering users to overcome coding challenges easily.

The Problem


The Code Playground feature within Sololearn provides users with an excellent opportunity to practice their coding skills. However, it became evident that users needed more guidance and support when encountering challenges during their coding exercises. Recognising this gap, our team believed that leveraging AI could effectively bridge the support and guidance divide, providing users with the assistance they needed to overcome coding obstacles.

Research & Analysis


In addition to revisiting existing feedback, we extensively analysed user behaviour within the Code Playground feature. This involved collecting qualitative and quantitative data to gain deeper insights into user engagement and completion rates.

By examining how users interacted with the feature, we discovered that many users struggled when their code failed to run or produced incorrect output. Syntax errors, in particular, proved to be a common stumbling block. Armed with this data, we recognised the opportunity to leverage AI-generated feedback to provide immediate assistance and enhance the learning experience.
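To illustrate the kind of obstacle learners hit, the sketch below (a hypothetical example, not Sololearn code) shows how a classic beginner mistake, a missing colon, surfaces as a raw SyntaxError in Python; the compiler diagnostic is exactly the kind of terse message the AI feedback was meant to explain:

```python
# Hypothetical illustration: a learner's snippet with a missing colon
# after the if-condition, a typical beginner syntax error.
learner_code = "x = 5\nif x > 3\n    print('big')\n"

try:
    # compile() parses the code without executing it, so syntax
    # errors can be caught before the code ever runs.
    compile(learner_code, "<playground>", "exec")
except SyntaxError as err:
    # err.lineno and err.msg carry the raw compiler diagnostic.
    print(f"Line {err.lineno}: {err.msg}")
```

The raw `err.msg` (e.g. "invalid syntax") tells a beginner very little on its own, which is precisely the gap the AI-generated explanation was designed to fill.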



Goals and Objectives


Our primary goal was to help users understand why their code was not running correctly. Through AI-generated feedback, we aimed to help users quickly identify and resolve coding issues within the app itself. The desired outcome was an increase in the completion rate of Code Playground exercises, resulting in fewer users abandoning the feature.

Exploration and Scope


To explore the potential of AI in coding exercises while defining our project's scope, our team collaborated closely. Our agile approach allowed us to progress from concept to the Minimum Viable Product (MVP) release within three weeks.

To ensure seamless collaboration among all team members, I facilitated workshops to align everyone with the AI's capabilities and limitations.

We identified four robust features from our brainstorming sessions that we firmly believed would provide immense value to our users. However, for the MVP version, we made the strategic decision to prioritise the implementation of two of these features.





By focusing on these two components, we aimed to deliver a powerful and effective AI-driven coding experience right from the initial release:

◆ Explaining Syntax Errors

◆ Utilising user feedback to train the AI model

Concept Development


We iterated on the concept based on feedback generated by the AI model. Our focus was on delivering feedback effectively to users.
We introduced a new screen layout that allowed users to view the code and the feedback message simultaneously. The feedback messages were divided into two parts: a clear explanation of what went wrong and guidance on how to improve the code. To measure the helpfulness of the feedback and to train the AI model, we added a small response area where users could indicate their satisfaction with a thumbs-up or thumbs-down.
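The two-part message plus the thumbs rating can be modelled as a small data structure. The sketch below is a hypothetical illustration of that shape, not Sololearn's actual implementation; the class and field names are my own:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIFeedback:
    """Two-part feedback message shown alongside the learner's code."""
    explanation: str               # what went wrong
    guidance: str                  # how to improve the code
    rating: Optional[bool] = None  # True = thumbs-up, False = thumbs-down

    def rate(self, thumbs_up: bool) -> None:
        # The rating would be sent back to help train the AI model.
        self.rating = thumbs_up


feedback = AIFeedback(
    explanation="Line 2 is missing a colon at the end of the if statement.",
    guidance="Add ':' after the condition, e.g. `if x > 3:`.",
)
feedback.rate(True)  # learner found the feedback helpful
```

Keeping the explanation and the guidance as separate fields mirrors the screen layout: the two parts can be rendered in distinct areas next to the code, and the rating is captured independently of either.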

Implementation and Rollout


To successfully integrate the new feature, we opted for a phased implementation across all platforms (iOS, Android, and web) within our popular "Python for Beginners" course. We collected user feedback and data throughout this process to continuously improve our AI model. This data-driven approach allowed us to optimise the feature's performance, resulting in an enhanced and tailored learning experience for our users.

Evaluation and Results 


Data analysis revealed promising outcomes, with increased completion rates for Code Playground exercises and higher overall user engagement. User feedback indicated that the AI-generated suggestions were highly useful. The AI model was learning fast and improving quickly. Encouraged by the MVP's success, we expanded the AI-generated coding feedback feature.

Iterations 


In the subsequent version, we enhanced the user experience by highlighting problematic areas of the code and offering written suggestions for resolution. We continued refining the AI model to provide more accurate and understandable feedback.

Conclusion


The implementation of AI-powered coding feedback within Sololearn's Code Playground feature marked a significant milestone in enhancing the learning experience for our users. The AI assistant acted as a personal tutor, providing real-time support and empowering users to overcome coding challenges. This technology will revolutionise how users learn to code, providing valuable guidance and support throughout their coding journey.