Knowledge gained from Projects

Git-Mastery:

MarkBind:

Open:

RepoSense:

TEAMMATES:

Git-Mastery

DESMOND WONG HUI SHENG

GitHub Copilot

GitHub Copilot is a sophisticated AI-powered developer tool that functions as an intelligent pair programmer, helping developers write code with greater efficiency and less manual effort. Unlike traditional autocomplete, it uses advanced large language models to understand the deep context of the project to suggest everything from single lines of code to entire functional blocks. By automating routine tasks and providing real-time technical guidance, it allows developers to focus more on high-level problem solving and architectural design.

Learning Points:

  • Ensure Full Repo Understanding: Before implementation, provide Copilot with comprehensive context by indexing the repository and opening relevant files to ensure it understands the project structure and architectural decisions.
  • Verify Understanding via Q&A: Use a Q&A section in Copilot Chat to clarify its internal logic and ensure it isn't making false assumptions about the codebase. Explicitly asking AI not to assume but to verify against actual files (like package.json or custom logic) prevents confident but incorrect "hallucinations".
  • Provide Existing Code Examples: Feed Copilot snippets of the existing project patterns to ensure consistency across the codebase.
  • Demand a Full Implementation Plan: Instruct Copilot to generate a step-by-step reasoning plan before it writes a single line of code. Breaking down complex tasks into atomic, verifiable steps allows developers to identify logical flaws in the AI's approach early on.
  • Evaluate Multiple Solutions: Prompt Copilot to offer several distinct solutions so developers can evaluate the trade-offs in robustness and platform compatibility themselves. Developers can also provide their own candidate solutions to GitHub Copilot for evaluation.

Resources:

CodeRabbit

CodeRabbit is an AI-powered code review tool that integrates directly into developers' GitHub Pull Requests (PRs). For team projects, it acts as an automated "gatekeeper" that catches common mistakes before a human mentor reviews the code.

Learning Points:

  • Learn automated context-aware reviews: CodeRabbit analyzes the entire repository to understand how a change in one file might break another, providing instant, comprehensive feedback on every pull request.
  • Identify edge cases in PRs: It identifies potential issues, such as bugs that static analyzers miss, security oversights, and logic errors that only surface under specific conditions.
  • Summarize complex changes: It automatically generates a summary and walkthrough of code changes, which is particularly handy for large PRs where context helps teammates understand the scope of the work.

Resources:

Claude Agent Skills

Claude Agent Skills is a specialized feature within the Claude ecosystem that allows developers to define and package complex, reusable behaviors as "skills" that the AI can invoke when needed.

Learning Points:

  • Encapsulate Domain Logic: Developers can create specific skills that encapsulate the complex logic of their project, such as a "Scope-Parser-Skill" that handles the different scenarios, ensuring the AI uses the most robust method every time.
  • Modularize Intelligence: Instead of overwhelming the AI with a massive list of instructions, developers can break them into modular skills that the agent only activates when the context of the task requires them.
  • Continuous Skill Refinement: Developers can iteratively improve these skills by feeding the agent feedback from failed runs.

Resources:

GOYAL VIKRAM

Tool/Technology 1

List the aspects you learned, and the resources you used to learn them, and a brief summary of each resource.

Tool/Technology 2

...

JOVAN NG CHENGEN

Knowledge

List the aspects you learned, and the resources you used to learn them, and a brief summary of each resource.

ChatGPT Codex App

  • $30/month subscription for ChatGPT Plus
  • There are several interfaces for using Codex: the app, the CLI, and an IDE extension
  • Supports worktrees to run multiple AI agents at the same time on different features
  • Reusable skills that can be shared across agents, and a marketplace to share skills
  • Supports background automations: it can run unprompted on scheduled workflows to manage bugs, review pull requests, or deploy to cloud platforms
  • The best feature is remote Codex, which lets us ask Codex to perform tasks on cloud machines and open PRs (even from our mobile devices) via the Codex website or the mobile app

LOH JIA XIN

GitHub Copilot

GitHub Copilot is an AI-powered coding assistant built by GitHub and OpenAI that helps programmers write code faster and with less effort.

Learning points:

  • Give examples of existing code with similar functionality, to avoid hallucination and ensure the code quality is consistent across the codebase
  • Properly describe the expected usage of the product which is not always obvious when implementing libraries and dependencies. This can affect the implementation method chosen.
  • Ask for many possible solutions (i.e. planning) and evaluate them yourself before deciding on one and guiding the AI tool to implement it.

Resources:

Claude Code

Claude Code is an AI coding assistant by Anthropic that helps with code generation, refactoring, debugging, and repository-level understanding through natural language instructions.

Learning points:

  • Give explicit constraints (tech stack, coding style, and file targets) so the tool can make precise edits with fewer iterations.
  • Use task-specific command modes (for example /review) to get focused outputs for different goals such as code review, planning, or implementation.

Resources:

Tool/Technology 2

...

SAN MUYUN

ContextManager

A context manager in Python is a structured way to handle resources so setup and cleanup are done safely and automatically, usually through the with statement. Instead of manually opening and closing files or connections, a context manager guarantees that cleanup logic still runs even if an exception occurs inside the block, which reduces bugs and resource leaks. Under the hood, this works through the context manager protocol (__enter__ and __exit__), where __enter__ prepares the resource and __exit__ handles teardown. This pattern makes code cleaner, easier to read, and more reliable because resource handling is localized to one clear block. Common examples include file I/O, database transactions, thread locks, and temporary state changes, and a good rule is to use with whenever something must always be released or restored.
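As a minimal sketch of the protocol described above (the class and attribute names here are illustrative, not from any real library):

```python
class ManagedResource:
    """Minimal class-based context manager: setup in __enter__, teardown in __exit__."""

    def __init__(self, name):
        self.name = name
        self.closed = False

    def __enter__(self):
        # Setup: acquire/prepare the resource; the return value is bound by `as`.
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Teardown: runs even if the with-block raised an exception.
        self.closed = True
        return False  # do not suppress the exception

res = ManagedResource("demo")
try:
    with res as r:
        raise ValueError("boom")  # simulate a failure inside the block
except ValueError:
    pass

print(res.closed)  # True: cleanup still ran despite the exception
```

The same guarantee is why `with open(path) as f:` never leaks a file handle: `__exit__` closes it on both the normal and the error path.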

Resources:

Utilising Codex for coding

OpenAI Codex is an AI coding agent: a large language model specialized for software tasks, combined with tools that let it work inside a real project. It can read and understand files across a repository, run terminal commands, propose and apply code edits, generate test cases, and more.

AI Agent:

Agent.md is a project-specific instruction file that tells an AI coding assistant how to behave inside a codebase: what the project does, how the folders are organized, coding conventions, testing commands, and things to avoid. It improves AI performance by telling the LLM exactly what the project’s structure and style look like, rather than leaving it to infer them from scattered files, so it makes more accurate edits, follows the right conventions, avoids breaking assumptions, and produces code that fits the repository better.

Features of a good Agent.md include:

  • States a clear role
  • Indicates the executable commands
  • Provides project knowledge
  • Includes real examples

Ideally, Agent.md should be very specific about the task that this particular AI agent should carry out, such as writing test cases, debugging, or refactoring, rather than being very generic.

Resources:

Prompt Engineering:

Prompt engineering is the practice of designing clear, specific instructions so an AI can produce more accurate, useful, and consistent outputs.

A good prompt should:

  • give enough information about the task
  • specify the desired response/output format
  • state any constraints that are not clearly implied by the task information provided to the LLM

If the task is complicated, it is always recommended to break it into smaller tasks, and prompt the AI to complete a sequence of smaller tasks.

Resources:

MarkBind

CHUA JIA CHEN THADDAEUS

GitHub Copilot

GitHub Copilot is a code-completion and programming AI assistant that assists users in their IDE. Its LLMs can assist with code generation, debugging, or refactoring through either real-time suggestions or natural-language prompts.

Key Learning Points:

  • Context is King! Providing all relevant files immediately leads to faster, more accurate fixes. The AI performs best when it "sees" the full scope of the bug.

  • Don't stop at the first working fix. Specifically, asking the AI to improve code quality and clarity after the initial generation helps eliminate "AI-style" clutter and technical debt.

  • Initial AI suggestions often prioritize the simplest fix. It still requires manual prompting and investigation of edge cases to ensure the solution is robust.

  • We should treat AI as a collaborator, and not an automated system. Reviewing proposed changes and selectively implementing only what fits the project context prevents the introduction of unnecessary or incorrect logic.

Agent Skills

Agent Skills are folders of instructions, scripts, and resources that agents can discover and use to do things more accurately and efficiently. These skills are designed to be automatically discovered and used by AI agents, providing user-specific context they can load on demand, thereby extending their capabilities based on the task they’re working on.

Key Learning Points:

  • Moving instructions from a giant system prompt into a SKILL.md file allows for "progressive disclosure," where the agent only loads the knowledge it needs for the specific task at hand.

  • Provide a strict directory layout (Instructions in SKILL.md, automation in scripts/, and context in references/) to ensure the agent can reliably find and execute tools.

  • Include specific trigger phrases & tags, as well as clear and well defined steps. This prevents the agent from guessing, thus ensuring it follows the exact logic required for the most effective application.

HARVINDER ARJUN SINGH S/O SUKHWANT SINGH

AI-Assisted Coding Tools

So far, I've experimented with many AI coding tools, incorporating them into my workflow and trying them out.

I've tried opencode, Mistral Vibe, Proxy AI, GitHub Copilot, and Cline.

They can be divided into roughly two categories: AI tools that live in the CLI, and those that integrate directly into IDEs. While their usage may be similar (many of them can inject context using slash/@ commands, in addition to other common features), their value can be quite different.

I've found that tools that integrate directly into IDEs feel superior for engineering, provided that one does not enable settings such as "YOLO" mode (editing without permission). This way, you can review the AI's work file by file and guide it if its approach needs changing.

While I've found human-in-the-loop workflows to feel better as a developer (more supervision over the work), less hands-on approaches can also be useful for iterating quickly. However, I've found that the success of these methods is highly contingent on model quality.

On top of that, leveraging plan/act modes, skills, and keeping context focused can improve model performance.

Resources: Cline documentation

TypeScript (and the JavaScript/Node ecosystem)

So far, I've been working on much of the CLI components of MarkBind. I've done much research on it through working on PRs such as TypeScript migration and Migrating TypeScript output from CJS to ESM. I would say I'm currently better versed in it than the average developer, due to my deep involvement in these migration efforts, where I've had to update tooling, workflows, and the developer experience around TypeScript.

Key Learning Points

  • CJS vs ESM: CommonJS (CJS) and ECMAScript Modules (ESM) are different module systems for the JavaScript environment. Their key difference lies in how they resolve imports of other modules: CJS was designed primarily for server-side use, with modules loaded synchronously, whereas ESM allows both synchronous and asynchronous module loading. As the ecosystem moves towards ESM, MarkBind has migrated to ESM.
  • Node Versions: Even-numbered Node releases are LTS. Using a newer version of Node allows you to use newer features that are sometimes nicer for the developer experience.

Resources: CJS vs ESM (Better Stack)

LangChain

I learnt and used LangChain to create an AI workflow for the CSV classifier task.

Resources: LangChain documentation

HON YI HAO

AI Coding Tools

Skills

Worked with the team to explore adding repo-specific AI coding skills for common harnesses such as Claude, OpenCode and GitHub Copilot.

Subagents

Created subagents to handle specific tasks such as writing unit tests, generating documentation, and refactoring code.

Vue

Reactivity (Vue 3)

Learned about Vue 3's reactivity system, including ref, reactive, and computed properties.

Used reactive and computed to implement dynamic data tag count in CardStack Component.

TypeScript migration

tsc

Learned to use the TypeScript compiler (tsc) to check for type errors and compile TypeScript code to JavaScript.

Several useful configs/flags learned:

  • outDir: specifies the output directory for compiled JavaScript files.

PDF generation for MarkBind sites

When making my own course notes with MarkBind, I realised a need to export the entire site to a PDF file so that it could be printed and brought to exams, and distributed easily in general. That's why I started exploring the idea of creating a @/packages/core-pdf module that achieves this goal.

Applications

  • Export MarkBind sites to be distributed as pdf files
    • example use case: generate pdf for CS2103T ip/tp to be submitted for use in CS2101
  • Export MarkBind sites to be printed for offline use
    • example use case: generate pdf for personal notes to be printed and brought to exams

Experimentation

Considering that MarkBind already creates a static site with proper formatting, including appropriate CSS for print media, I decided to leverage that and use a headless browser to render the site and print it to PDF.

Challenges

Solved (I think)

Page order when merging

  • In the built _site directory, the pages are individual .html files, with no particular order in the directory.
    • This makes it difficult to determine the correct page order when merging them into a single pdf.
  • Implemented solution:
    • Parse the .html pages to extract the <site-nav> structure, which contains the correct page order.
    • Use the extracted page order to merge the individual pdfs in the correct sequence.

Hidden elements

  • Some elements (e.g. panels) can be collapsed by default and thus hidden when rendered, which may lead to missing content in the generated PDF.
  • Implemented solution:
    • Inject JavaScript into the rendered page to expand all collapsible elements before printing to PDF, ensuring all content is included in the final output.

Outstanding issues

Big dependency/bundle size

  • The headless browser library (Puppeteer) is quite large, which may not be ideal for a MarkBind plugin.
  • Possible solution: Make Puppeteer an optional dependency, try to use the system's browser if available, only fallback to Puppeteer if no suitable browser is found.

iframes rendering

  • Some pages with iframes (e.g. PDFs and YouTube videos) may show only a placeholder instead of the rendered content
  • Attempted solution: use Puppeteer to take a screenshot of the iframe and inject that into the page; I couldn't get it to work, though

Open

DALLAS LIM JUIN LOON

OpenCode

OpenCode is a CLI program for agentic coding. It allows users to use the same application with different LLM models, supporting many different providers.

Aspects learned

  • Planning: Creating a plan for the agent to follow in order to complete a task helps ensure the implementation will be as specified.
  • Context management: Provide only strictly required information to the agent when prompting and writing project rules, to manage the context window effectively. The agent's exploration of the codebase can provide most of the low-level details.
  • Skills: Progressive disclosure of skills to the agent allows steering the agent's behavior only when needed, preventing pollution of the context

Resources

  • YouTube: Many content creators make videos about agentic coding techniques that can be transferred across different harnesses.
  • Aihero: A website with articles and resources for agentic coding.

QML

A declarative language for designing user interfaces, which can be used through PySide to create GUI applications in Python.

Aspects learned

  • Syntax: Using QML classes to create layouts, signals to handle interactions, and models and views to display data.
  • Resources: Including external resources like images and custom modules to use custom classes written in QML and python.

Resources

GABRIEL MARIO ANTAPUTRA

Electron

Learnt how to develop desktop applications using Electron. Although I had prior experience with Electron, the tech stack was a bit different from what I had used before. For example, I had used Electron Forge before, but this time I used electron-builder, which is supposed to be more powerful and flexible. I also had to learn how to set up the project structure and configure the build process for the Electron app.

React (Typescript)

Used React with Typescript for the frontend of the electron app. Learnt core concepts like using hooks for state management and side effects, as well as how to structure a React application. I also had to learn how to integrate React with Electron.

Tailwind CSS

Learnt how to use Tailwind CSS for styling the electron app.

Redux

Learnt how to use Redux for state management in the electron app.

Prompt and Context Engineering

Practiced context and prompt engineering by comparing self-proposed solutions with AI-generated solutions and analyzing the differences. By comparing different prompts and contexts, I learnt how to craft better prompts and optimise context to get better results from AI models.

Using GitHub Copilot Code Review

Found out about GitHub Copilot Code Review and used it to review my code and get suggestions for improvements.

RepoSense

CHEN YIZHONG

Tool/Technology 1: GitHub Copilot

  • Collaborative Context Management: Understand how to provide good context for Copilot Agent to work with, such as providing relevant files or documentation snippets (.github/copilot-instructions.md), to help the agent generate more accurate suggestions.

  • Verification and Human-in-the-Loop: Learn to treat Copilot’s output as a "first draft" that requires additional prompting and extensive testing against the logic flow and existing architectural constraints.

  • AI-Assisted Code Reviews: Use Copilot directly in GitHub to analyze PRs by asking it to explain complex logic, identify potential edge-case bugs, or ensure adherence to project standards. It can also be invoked directly to perform a review on its own.

Resources used:

Tool/Technology 2: Claude Code

  • Agentic Workflows: Experimented with using autonomous subagents capable of executing multi-step tasks like major refactoring. (Breaking down into Planning + Model Making + Writing Tests + Executing with TDD)

  • Efficient Branch Management (Worktrees): Leveraging Git worktrees with Claude Code to perform multiple concurrent experimental changes in isolated environments, preventing interference with main development tasks.

Resources used:

HALDAR ASHMITA

AI Coding Tools

This module exposed me to multiple AI coding tools that are popular in the market, such as GitHub Copilot, Codex by OpenAI, and Claude Code. (Elaborate on learnings from each)

Terminal UIs

Implementing TUIs, potential libraries, TUI vs GUI

.yaml files

A YAML file is a human-readable, plain-text file used mainly for configuration, DevOps, and data serialization. It uses indentation-based, hierarchical structures rather than curly braces or brackets for data storage. Such a key-value pair structure makes it more readable and human-friendly compared to JSON or XML. Key aspects:

  • Uses whitespace indentation to define structure, with colons : separating keys and values.
  • It supports multiple data types (such as strings, integers, booleans etc.), sequences such as lists/arrays, and maps.
  • Highly sensitive to indentation (spaces, not tabs). Comments are created with the # symbol.
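As a small illustrative fragment (the keys and values below are made up), the points above look like this in practice:

```yaml
# Indentation (spaces, not tabs) defines structure; `#` starts a comment.
app:
  name: demo-service     # string
  port: 8080             # integer
  debug: false           # boolean
  tags:                  # sequence (list)
    - web
    - backend
  database:              # nested map
    host: localhost
    timeout: 30
```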

Worked with YAML files when exploring the development of a GUI .yaml config file generation wizard.

(Add more about learnings w.r.t its use in RepoSense)

GitHub Actions

Calling LLMs in Programs

for the CSV activity

GitHub Worktrees

Worktrees are a Git feature that allow you to have multiple branches checked out simultaneously on your system, through multiple working directories all associated with a single local repository. This allows users to work on multiple features/branches without constantly switching between them and stashing changes.

  • All worktrees share the same underlying Git history, saving disk space compared to multiple full clones.
  • They eliminate the need for git stash when switching tasks, as each has their own dedicated workspace. Operations like git fetch or creating a new branch are immediately reflected in all linked worktrees.

Commands

  • git worktree list
  • git worktree add <path> [<branch>]
  • git worktree remove <path>
  • git worktree prune
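A hypothetical end-to-end run of those commands in a throwaway repository (the paths and branch names are made up for illustration):

```shell
set -e
repo=$(mktemp -d)/demo
git init -q "$repo" && cd "$repo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"
git branch feature-x                     # branch to work on in parallel
git worktree add ../feature-x feature-x  # second working directory, no stashing needed
git worktree list                        # shows the main worktree and ../feature-x
git worktree remove ../feature-x         # clean up once the work is merged
```

Each worktree has its own checked-out files and index, but they all point at the same `.git` history, so a branch created in one is immediately visible in the others.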

Resources used:

Java Packages

Files

Files package, JVM, shutdown hooks

YU LETIAN

Project Knowledge

RepoSense

Vue.js

Resources used:

.zoom__day(v-for="day in selectedCommits", :key="day.date")

The :key attribute is a special directive used by Vue.js (which is being rendered by Pug) to uniquely identify each element in a looped list. It tells Vue which elements have changed, been added, or removed, allowing it to accurately update the DOM.

  • v-for: A Vue.js directive used to render a list of items by iterating over an array or object.
  • :key: The : is shorthand for v-bind, which binds the value of the attribute to a JavaScript expression. In this case, it binds the key attribute to the dynamic variable day.date.

In Vue, custom events emitted with emit() from a child component can be listened to in the parent component using the @event-name syntax.

But they are only received by the immediate parent component.

Pug

Pug's parser can misinterpret where an attribute value ends when you split it across lines, e.g.

:class="{warn: user.name === '-',
    'active-text': ...}"

The parser gets confused by the line break and the quote positioning. It's a quirk of how Pug tokenizes/parses attributes across multiple lines.

In Pug, indentation defines parent-child relationships:

tooltip
  a.message-title        // child of tooltip
    .within-border       // child of a
  span.tooltip-text      // child of tooltip (sibling of a)

How does the tooltip component know about tooltip-text? In style.css, we have:

  &:hover {
    .tooltip-text {
      opacity: 1;
      visibility: visible;
    }
  }

HTML

  • <span>: an inline container used to mark up a part of a text

ESLint

Need to cd into the frontend folder to run ESLint commands.

cd frontend
npm run lint -- --fix

DevOps

GitHub Action workflow

Can set a condition so the workflow only runs if:

  • Push to master branch, OR
  • Scheduled trigger (if enabled)

Does NOT run on pull requests (prevents deploying unreviewed changes)

deploy:
    needs: build
    if: (github.event_name == 'push' && github.ref == 'refs/heads/master') || github.event_name == 'schedule'

Summary of current github workflow:

  • Push to master → Build report → Upload artifacts → Deploy to gh-pages → Live at GitHub Pages
  • Pull request → Build report → Upload artifacts → (stop, don't deploy)

Gradle

Resources:

Hot Reloading

Can make changes to frontend code and see the updates in the browser without needing to manually refresh the page.


AI in Development

Gemini integration for Java

Resources used:

Copilot

In-line chat: use Ctrl + Shift + I to open the Copilot chat window.

Claude

Claude offers Claude Code, which can load folders from your local machine into its interface for code understanding and generation.

It also has a CLI tool.

TEAMMATES

AARON TOH YINGWEI

Tool/Technology 1

GitHub Copilot

  • The model used matters. Claude models are much more effective at understanding large codebases and the relationships between different components.
  • Context matters. Conversations that drag on for too long tend to produce hallucinations and inaccurate answers.


Tool/Technology 2

...

FLORIAN VIOREL TANELY

GitHub Copilot

Learning points:

  • Provide as much context as possible, using extensions (@), commands (/), and file context (#)
  • Keep each chat focused on a relatively small objective
  • Agent mode is powerful for complex tasks, and is also good as an investigative tool compared to Ask mode, with findings given as a rich document
  • Treat it as a pair programmer: explain and ask things as you would with a human partner
  • It is easy to get lost making changes, since Copilot makes it so much easier to debug and try things out quickly; it is worth taking the time to ask yourself whether any changes should be part of another PR

Resources used:

  • Github website
  • Gemini and ChatGPT for additional help

KOH WEE JEAN

Gemini Code Assist

Gemini Code Assist is an AI-powered collaborator integrated directly into the IDE to help developers write, understand, and troubleshoot code.

Learning points:

  • Codebase Navigation: Used the assistant to track execution flow and trace logic across multiple files, which is invaluable for understanding how legacy systems and new implementations interact in a large codebase.
  • Linting and Formatting: Leveraged the tool to pre-check code against strict project style guides, quickly catching and resolving checkstyle and linting errors before pushing commits.
  • Context-Aware Debugging: Learned to feed specific error logs and file context into the prompt to rapidly diagnose complex issues, such as database connection failures or syntax errors.

Resources:

PHOEBE YAP XIN HUI

GitHub Copilot

  • Effective for understanding a large codebase using #codebase, helping me to identify relevant directories and files for E2E test migrations

  • Useful for generating repetitive or boilerplate files (e.g. SQL-specific E2E test JSON) when similar examples already exist

  • Less effective at identifying logical errors, often fixing symptoms (modifying test data) instead of root causes (updating test logic)

  • Struggles with browser-based E2E tests due to lack of awareness of actual UI state and rendered content

  • May ignore constraints in prompts and go off-tangent, requiring careful supervision and iteration

  • Different modes can serve different purposes: Ask/Plan for exploration, Edit/Agent for code generation

  • Undo functionality is useful for restarting cleanly.

  • Output quality can be inconsistent even with similar prompts, requiring manual verification (especially for strict JSON formats).

Data Migration

  • Data migration can be approached either top-down (front to back) or bottom-up (back to front), depending on the situation.
    • Example dependency chain: DeleteInstructorActionTest → InstructorLogic → Logic → InstructorsDb.

    • Top-down approach (front to back): Starts from the endpoints of the dependency chain.

      • Changes are usually non-breaking initially.
      • Risk of missing dependent components if the call chain is not fully traced.
    • Bottom-up approach (back to front): Starts from database or low-level components.

      • Changes are often breaking during migration and require iterative fixes.
      • Immediately reveals all affected files and dependencies.
    • The choice of approach should be made based on the scope, risk, and complexity of the migration task.

SARIPALLI BHAGAT SAI REDDY

Cursor

Learned how to use Cursor as an AI-assisted development environment for code generation, refactoring, debugging, and understanding unfamiliar codebases.

Resources used:

  • Cursor official documentation
    Learned the main product features, including inline editing, AI chat, and codebase-aware assistance.
  • Hands-on project experience
    Applied Cursor in real coding tasks, which helped me understand how to use it effectively for implementation and debugging rather than just code generation.

agents.md

Learned how to define project-specific agent instructions using agents.md to improve consistency, task delegation, and adherence to coding conventions.

Resources used:

  • Online examples and community discussions
    Learned how developers use agent instruction files to guide AI behaviour in coding workflows.
  • Personal experimentation
    Practised writing instructions tailored to software engineering tasks such as editing files, following project structure, and maintaining style consistency.

OpenAI API

Learned how to integrate the OpenAI API into software applications to build LLM-powered features and automate language-based tasks, such as CSV parsing for form submissions.

Resources used:

  • OpenAI API documentation
    Learned the fundamentals of API usage, request formatting, and response handling.
  • Personal implementation work
    Built familiarity with practical integration concerns such as prompt design, output parsing, and feature prototyping.
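As a hedged sketch of the kind of integration described above (the SDK call shape follows the openai Python package; the prompt, helper names, and sample data are my own invention), the request can be built and the model's reply parsed in separate helpers so the parsing logic stays testable without a network call:

```python
import json

def build_messages(raw_text):
    """Build a chat request asking the model to emit form rows as JSON."""
    return [
        {"role": "system",
         "content": "Extract form submissions as a JSON list of "
                    "{name, email} objects. Reply with JSON only."},
        {"role": "user", "content": raw_text},
    ]

def parse_reply(reply_text):
    """Parse the model's JSON reply; raises ValueError on malformed output."""
    return json.loads(reply_text)

# Actual call (requires OPENAI_API_KEY; shown for shape only):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=build_messages("Alice, alice@example.com"))
# rows = parse_reply(resp.choices[0].message.content)

# Offline demonstration of the parsing step:
rows = parse_reply('[{"name": "Alice", "email": "alice@example.com"}]')
print(rows[0]["name"])  # Alice
```

Keeping prompt construction and output parsing as plain functions makes the prompt easy to iterate on and the parsing easy to unit-test, which matters because model output is not guaranteed to be valid JSON.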

Prompt Engineering for Software Engineering Tasks

Learned how to design effective prompts for technical tasks such as code generation, debugging, summarization, and implementation planning.

Resources used:

  • Repeated use of Cursor and OpenAI API
    Learned through experimentation how prompt specificity, constraints, and context improve output quality.
  • Community-shared prompt patterns
    Observed effective prompt structures used by other developers for engineering workflows.

AI-Assisted Debugging and Refactoring

Learned how to use AI tools to analyse bugs, explain code behaviour, and support incremental refactoring in existing systems.

Resources used:

  • Cursor in day-to-day development
    Used AI support to trace bugs, understand unfamiliar code, and explore alternative implementations.
  • Real project experience
    Helped me understand the strengths and limitations of AI when working with non-trivial codebases.

Rapid Prototyping of AI Features

Learned how to quickly prototype AI-enabled product ideas by combining LLM APIs with frontend or backend applications.

Resources used:

  • Personal software projects
    Gained experience turning simple ideas into working prototypes using AI components.
  • OpenAI API documentation
    Provided the technical basis for implementing these features correctly.

YONG JUN XI

General Knowledge

Testing

  • Additional logic in test case may introduce issues:
    • More logic to maintain & might diverge.
    • Depending on the frontend may cause tests to pass even if the frontend code is buggy (a false positive).
    • External logic in the local test case may mismatch what the frontend expects.
  • Be very particular about test cases (we want them to fail to spot bugs).
  • Hard-coded test inputs allow full control of the desired outcome.
  • A UUID has a different format from a long, so the two cannot be used interchangeably as IDs.
  • The server depends on Docker's status, so Docker must be running first.
  • Back-door APIs for testing improve testing performance by providing a direct way to make API calls without going through the UI; for databases, they allow direct manipulation to set up the exact DB state a test requires. However, they may introduce security risks (e.g. allowing unauthenticated users to make API calls) and false positives in test outcomes if the front-door API path is not itself tested properly.
  • TestNG: the difference between @BeforeTest and @BeforeMethod: @BeforeTest runs once before any test method in the enclosing <test> tag, while @BeforeMethod runs before every @Test method in the same class; the same applies to @AfterTest and @AfterMethod. Source
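The lifecycle difference above can be seen without TestNG itself; this plain-Java sketch simulates the documented ordering for a class with two test methods (all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Simulation of TestNG's lifecycle ordering (no TestNG dependency):
// the @BeforeTest/@AfterTest pair wraps the whole run once, while the
// @BeforeMethod/@AfterMethod pair wraps every individual @Test method.
public class LifecycleOrder {
    static List<String> calls = new ArrayList<>();

    static void beforeTest()   { calls.add("beforeTest"); }
    static void beforeMethod() { calls.add("beforeMethod"); }
    static void testA()        { calls.add("testA"); }
    static void testB()        { calls.add("testB"); }
    static void afterMethod()  { calls.add("afterMethod"); }
    static void afterTest()    { calls.add("afterTest"); }

    public static void main(String[] args) {
        beforeTest();                     // once, before all test methods
        Runnable[] tests = {LifecycleOrder::testA, LifecycleOrder::testB};
        for (Runnable t : tests) {
            beforeMethod();               // before EVERY test method
            t.run();
            afterMethod();                // after EVERY test method
        }
        afterTest();                      // once, after all test methods
        System.out.println(calls);
        // [beforeTest, beforeMethod, testA, afterMethod, beforeMethod, testB, afterMethod, afterTest]
    }
}
```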

Design Patterns

  • The Builder design pattern separates construction logic from the final product and enables flexibility in building variations of the same product.
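A minimal sketch of the pattern (class and field names are illustrative): the Builder owns all construction logic, while the product stays immutable and variations are built by chaining only the setters you need.

```java
// Builder pattern sketch: construction logic lives in Report.Builder,
// the final Report object is immutable.
public class Report {
    private final String title;
    private final String author;
    private final boolean draft;

    private Report(Builder b) {
        this.title = b.title;
        this.author = b.author;
        this.draft = b.draft;
    }

    public String describe() {
        return title + " by " + author + (draft ? " (draft)" : "");
    }

    public static class Builder {
        private final String title;      // required field
        private String author = "unknown"; // optional, with a default
        private boolean draft = false;     // optional, with a default

        public Builder(String title) { this.title = title; }
        public Builder author(String author) { this.author = author; return this; }
        public Builder draft(boolean draft) { this.draft = draft; return this; }
        public Report build() { return new Report(this); }
    }

    public static void main(String[] args) {
        // Two variations of the same product, built from the same Builder:
        Report full = new Report.Builder("Weekly Update").author("Alice").draft(true).build();
        Report bare = new Report.Builder("Weekly Update").build();
        System.out.println(full.describe()); // prints: Weekly Update by Alice (draft)
        System.out.println(bare.describe()); // prints: Weekly Update by unknown
    }
}
```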

Tips

  • You can right-click and inspect a web UI element to check its ID for debugging.
  • You can terminate an E2E test early to keep the browser open; when an E2E test fails, reload that browser session to check the latest state of the failed test.
  • Splitting a big PR into multiple smaller ones may leave the codebase in a non-ideal state between merges (for example, removing tests first leaves untested features in the codebase), but this is fine if all the pieces land in the same iteration or migration. Alternatives include defining a clear boundary for each small PR using commits within the same PR, or pushing the PRs into a separate branch before merging that branch into master.
  • It's possible to create a PR against your own fork's master branch to run the CI privately.
  • You can change the base branch that a PR merges into if you have write access.

Tools

Github Copilot

  • Copilot can automatically scan similar files, and all scripts related to them, before writing the JSON file I need.
  • Non-agentic AI like ChatGPT in the browser can be more effective at catching mistakes in scripts and less prone to hallucinations than agentic AI, which has to deal with a large context window.
  • You can ask Copilot to read the git commit history on the current branch to check for potential bugs introduced by recent changes.

VSCode

  • To run the debugger on E2E tests, add --debug-jvm to the Gradle command.
  • To print statements when running tests with Gradle, add --info to the Gradle command.
  • It is possible to run “Convert Indentation to Spaces” from the Command Palette in VSCode.
  • It is possible to run “Trim Trailing Whitespace” from the Command Palette in VSCode.
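For reference, the two Gradle flags above attach to ordinary test invocations like so (task names are illustrative and depend on the project's build setup):

```shell
# Suspend the test JVM so a remote debugger can attach
./gradlew e2eTests --debug-jvm

# Show stdout/stderr from tests plus extra Gradle logging
./gradlew test --info
```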

Git

  • Can use keywords like Closes #<issue-number> or Fixes #<issue-number> to automatically close an issue when a PR is merged, but this doesn’t work in issues (an issue cannot close another issue).
  • git switch -c <branch-name> creates and switches to the new branch. git switch was introduced solely for switching branches, while git checkout can operate on branches, commits, and files.
  • You can create a new branch that tracks an upstream branch using git switch -c <branch-name> <upstream-name>/<upstream-branch-name>.
    • Example: to fetch a branch called chore/new-branch from the upstream repo and work on it locally, create a new branch of any name (preferably a similar one, for easier distinction) and make it track that upstream branch.
    • Command line: git switch -c chore/new-branch upstream/chore/new-branch.
  • Interactive rebasing: git rebase -i <target-branch>.
    • Rebasing moves the current branch's commits on top of the HEAD of the target branch (i.e. it does nothing if the current branch is already on top).
    • With the interactive flag -i, you can edit/reorder/squash the commits on the current branch, and rebase will carry out those operations.
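A throwaway demo of `git switch -c` (repo and branch names are illustrative; run anywhere, it works in a temp directory):

```shell
set -e
# Create a disposable repo to experiment in
tmp=$(mktemp -d)
cd "$tmp"
git -c init.defaultBranch=master init -q demo
cd demo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

# Create a new branch and switch to it in one step
git switch -q -c chore/new-branch
git branch --show-current   # prints: chore/new-branch

# With a configured remote, the same command can track an upstream branch:
#   git switch -c chore/new-branch upstream/chore/new-branch
```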

Angular/Jest

  • Snapshots can be reported as obsolete, especially when they no longer match the current rendering; run ng test with the -u (update snapshots) flag to regenerate them.

Gradle

  • Gradle runs tests in parallel, so running everything is much faster than running a single suite, which runs sequentially (extremely slow).
  • GitHub CI uses powerful cores that scale dynamically with fine-tuned resource control, which is why it runs so much faster than local machines.
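Gradle's parallel test execution is configured per Test task; a typical build.gradle fragment (the fork count here is the common half-the-cores heuristic, not a project-specific value) looks like:

```groovy
test {
    // Fork up to half the available cores to run test classes in parallel
    maxParallelForks = Runtime.runtime.availableProcessors().intdiv(2) ?: 1
}
```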

Hibernate

  • Hibernate ignores values manually set on @UpdateTimestamp-annotated attributes when inserting.
  • @CreationTimestamp and @UpdateTimestamp are initialized automatically with the same initial value; @UpdateTimestamp then updates itself whenever the entity is updated. Source
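An illustrative entity fragment showing the two annotations together (entity and field names are hypothetical; compiling it requires Hibernate and a JPA provider on the classpath):

```java
import java.time.Instant;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.hibernate.annotations.CreationTimestamp;
import org.hibernate.annotations.UpdateTimestamp;

// Hibernate fills both timestamps on INSERT (overwriting any value the
// application set); only updatedAt is refreshed on subsequent UPDATEs.
@Entity
public class FeedbackSession {
    @Id @GeneratedValue
    private Long id;

    @CreationTimestamp           // set once, when the row is inserted
    private Instant createdAt;

    @UpdateTimestamp             // same initial value, refreshed on every update
    private Instant updatedAt;
}
```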