Git-Mastery:
MarkBind:
Open:
RepoSense:
TEAMMATES:
CodeRabbit is an AI-powered code review tool that integrates directly into developers' GitHub Pull Requests (PRs). For team projects, it acts as an automated "gatekeeper" that catches common mistakes before a human mentor reviews the code.
Learning Points:
Resources:
Claude Agent Skills is a specialized feature within the Claude ecosystem that allows developers to define and package complex, reusable behaviors as "skills" that the AI can invoke when needed.
Learning Points:
Common Usage:
We will create an AGENTS.md file containing a high-level overview of the repository. A .claude folder will also be created; it will contain different skill.md files, each with more detailed information about a specific component of the repository.
Ideally, when we give a task to the AI agent, it will read through the AGENTS.md file to identify the skills needed to perform the task. Thus, it only needs to read the relevant skill.md files.
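A minimal sketch of what this file pair could look like (the overview text and skill names below are hypothetical illustrations, not the project's actual files):

```markdown
<!-- AGENTS.md (repo root): the high-level map the agent reads first -->
# Repository Overview
A documentation site generator, split into a CLI package and a core library.

## Skills
- .claude/skills/cli.md: how CLI commands are parsed and dispatched
- .claude/skills/rendering.md: how source pages are converted to HTML
```

Each skill.md then carries the component-specific detail, so the agent loads only the file relevant to the task at hand.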
Resources:
Git Worktree is a native Git feature that allows developers to manage multiple working directories (worktrees) attached to a single repository.
Learning Points:
Worktrees share the main repository's .git directory and object database, making them much faster and lighter than a full git clone.
Common Usage:
Suppose we are working on a repo named worktree. We can create a new worktree using
git worktree add ../worktree-testing
which creates a directory named worktree-testing at the same level as worktree. A new branch worktree-testing, matching the directory name, will also be created. If we want to specify the branch name, we can use
git worktree add ../worktree-testing -b hotfix
Note that no two worktrees can have the same branch checked out. Now we can open worktree-testing separately and make changes accordingly.
If we want to see the full list of worktrees, we can use
git worktree list
Once we are comfortable with the changes, we can just merge or rebase the branch, which is similar to our usual workflow.
git merge hotfix
git rebase hotfix
To delete a worktree, we can remove the worktree-testing directory, after which it will be marked as prunable. Then we can run
git worktree prune
Alternatively, we can remove clean worktrees (no untracked files and no modifications to tracked files) by running
git worktree remove ../worktree-testing
Resources:
GitHub Copilot is a sophisticated AI-powered developer tool that functions as an intelligent pair programmer, helping developers write code with greater efficiency and less manual effort. Unlike traditional autocomplete, it uses advanced large language models to understand the deep context of the project to suggest everything from single lines of code to entire functional blocks. By automating routine tasks and providing real-time technical guidance, it allows developers to focus more on high-level problem solving and architectural design.
Learning Points:
Resources:
ChatGPT Codex App is an AI coding assistant that helps developers go from idea to working code faster. Instead of only suggesting the next line, it can help with planning, implementation, debugging, and documentation, so it feels more like a practical coding partner in day-to-day work.
Learning Points:
Resources:
GitHub Actions is a CI/CD automation platform built into GitHub that lets us run workflows directly from our repository. It helps automate repetitive engineering tasks like testing, linting, building, and deployment whenever events such as push, pull request, or manual dispatch happen.
Learning Points:
Workflows live in .github/workflows and automatically run checks and scripts, which reduces manual work and human error.
Workflows are triggered by repository events (push, pull_request, release) so automation happens at the right stage of the development cycle.
Common Usage:
Create a workflow file at .github/workflows/ci.yml and define jobs for testing/building.
An Action is a packaged automation step we can use in a workflow. There are many prebuilt Actions made by GitHub or third parties; we can find these reusable Actions in the GitHub Marketplace.
name: CI
on:
  pull_request:
  push:
    branches: [ main ]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
Jobs run in parallel unless needs is added. Each job has its own steps and runs in a separate virtual environment. Steps inside one job run in order.
${{ }} is GitHub Actions expression syntax. ${{ secrets.USERNAME }} reads an encrypted secret at runtime, and secret values are masked in logs. Secrets can be passed through with: or env:.
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    steps:
      - uses: actions/setup-node@v4
        with:
          username: ${{ secrets.USERNAME }}
          password: ${{ secrets.password }}
      - run: echo test
  deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - run: echo deploy
Resources:
Resources: Introducing Codex · Codex App · Codex Now GA
Codex is OpenAI's agentic coding system (powered by GPT-5.3-Codex) that can receive a task description, write code, run it in a sandboxed cloud environment, debug errors, and open a pull request for review.
Access & Interfaces:
Key Features:
Resources: Claude Code Overview · Claude Code Releases
Claude Code is Anthropic's agentic coding CLI that understands your entire codebase and can work across multiple files and tools to complete tasks end-to-end.
Access & Interfaces:
Key Features:
Resources: githooks.com · Lefthook GitHub · pre-commit vs lefthook comparison · Using Lefthook across a team
Git hooks are scripts that run automatically at specific points in the Git workflow. They are stored in .git/hooks/ but are not tracked by version control — hook managers solve this by committing the config to the repo.
Hook Lifecycle:
| Hook | When | Common Uses |
|---|---|---|
| pre-commit | Before commit is created | Lint, format, secret detection |
| commit-msg | After commit message is written | Enforce Conventional Commits |
| pre-push | Before push to remote | Run tests, check branch policy |
| post-commit | After commit | Notifications, doc generation |
Tool Comparison — Lefthook vs pre-commit:
| | Lefthook | pre-commit |
|---|---|---|
| Language | Go | Python |
| Execution | Parallel (faster) | Sequential |
| Config | lefthook.yml | .pre-commit-config.yaml |
| Hook sources | Local scripts | Community hook registry |
| Best for | Polyglot/any project | Python-heavy projects |
Lefthook is preferred in the Git-Mastery project for its performance and language-agnosticism. Configuration example:
# lefthook.yml
pre-commit:
  parallel: true
  commands:
    lint:
      glob: "*.py"
      run: ruff check {staged_files}
    format:
      glob: "*.py"
      run: ruff format {staged_files}
Install with lefthook install after placing lefthook.yml at the repo root.
Git-Mastery implementation: Lefthook was adopted across the exercises repository (#265) and the app repository (#76).
Resources: Atlassian Guide · DORA Capabilities · trunkbaseddevelopment.com · Harness Complete Guide
A source control branching model where all developers integrate into a single branch (main/trunk), keeping it always releasable and avoiding long-lived branches and merge hell.
Core Principles:
main must be in a deployable state — enforce this via a CI gate
Why it works:
Minimum Viable CI/CD pipeline:
Feature Flags (vs long-lived branches):
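A feature flag can be as simple as a guarded code path. A minimal Python sketch (the flag name, environment variable, and functions are illustrative assumptions, not from any particular project):

```python
import os

# Flags default to "off", so unfinished work can be merged to trunk safely.
FLAGS = {
    "new_checkout_flow": os.environ.get("FF_NEW_CHECKOUT", "off") == "on",
}

def is_enabled(flag: str) -> bool:
    """Return whether a feature flag is switched on (off if unknown)."""
    return FLAGS.get(flag, False)

def checkout() -> str:
    if is_enabled("new_checkout_flow"):
        return "new flow"   # in-progress work, hidden behind the flag
    return "old flow"       # stable behaviour shipped to users
```

Because the incomplete path is disabled by default, the commit can land on trunk while trunk stays releasable.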
Resources: GitHub CodeQL docs
CodeQL is GitHub's semantic code analysis engine. It treats code as data and runs queries to find security vulnerabilities automatically.
How it works:
Key points:
CodeQL runs in CI via the official GitHub Action (github/codeql-action).
Git-Mastery implementation: CodeQL was set up alongside main branch hardening (#116).
GitHub Copilot is an AI-powered coding assistant built by GitHub and OpenAI that helps programmers write code faster and with less effort.
Learning points:
Resources:
Claude Code is an AI coding assistant by Anthropic that helps with code generation, refactoring, debugging, and repository-level understanding through natural language instructions.
Learning points:
Slash commands (e.g. /review) give focused outputs for different goals such as code review, planning, or implementation.
Resources:
...
A context manager in Python is a structured way to handle resources so setup and cleanup are done safely and automatically, usually through the with statement. Instead of manually opening and closing files or connections, a context manager guarantees that cleanup logic still runs even if an exception occurs inside the block, which reduces bugs and resource leaks. Under the hood, this works through the context manager protocol (__enter__ and __exit__), where __enter__ prepares the resource and __exit__ handles teardown. This pattern makes code cleaner, easier to read, and more reliable because resource handling is localized to one clear block. Common examples include file I/O, database transactions, thread locks, and temporary state changes, and a good rule is to use with whenever something must always be released or restored.
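The protocol described above can be shown with a small, self-contained sketch (the ManagedResource class is a made-up illustration):

```python
class ManagedResource:
    """Minimal context manager: __enter__ acquires, __exit__ always releases."""
    def __init__(self):
        self.open = False

    def __enter__(self):
        self.open = True          # setup: acquire the resource
        return self               # value bound by `as` in a with statement

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.open = False         # teardown: runs even if the block raised
        return False              # False means exceptions are not suppressed

res = ManagedResource()
try:
    with res:
        raise ValueError("boom")  # simulate a failure inside the block
except ValueError:
    pass                          # the exception propagated, yet cleanup ran
```

Even though the block raised, res.open is back to False afterwards, which is exactly the guarantee that makes with safer than manual cleanup.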
Resources:
OpenAI Codex is an AI coding agent: a large language model specialized for software tasks, combined with tools that let it work inside a real project. It is very useful for reading and understanding files across a repository, running terminal commands, proposing and applying code edits, generating test cases, etc.
Agent.md is a project-specific instruction file that tells an AI coding assistant how to behave inside a codebase: what the project does, how the folders are organized, coding conventions, testing commands, and things to avoid. It improves AI performance by telling the LLM exactly what the project's structure and style look like, rather than leaving it to infer them from scattered files, so the assistant makes more accurate edits, follows the right conventions, avoids breaking assumptions, and produces code that fits the repository better.
Features of a good Agent.md include:
Ideally, Agent.md should be very specific about the tasks this particular AI agent should carry out, such as writing test cases, debugging, or refactoring, instead of being very generic.
Resources:
Prompt engineering is the practice of designing clear, specific instructions so an AI can produce more accurate, useful, and consistent outputs.
A good prompt should:
If the task is complicated, it is always recommended to break it into smaller tasks, and prompt the AI to complete a sequence of smaller tasks.
Resources:
GitHub Copilot is a code-completion and AI programming assistant that assists users in their IDE. Its underlying LLMs can help with code generation, debugging, or refactoring through real-time suggestions or natural-language prompts.
Key Learning Points:
Context is King! Providing all relevant files immediately leads to faster, more accurate fixes. The AI performs best when it "sees" the full scope of the bug.
Don't stop at the first working fix. Asking the AI to improve code quality and clarity after the initial generation helps eliminate "AI-style" clutter and technical debt.
Initial AI suggestions often prioritize the simplest fix. Robust solutions still require manual prompting and investigation of edge cases.
We should treat AI as a collaborator, and not an automated system. Reviewing proposed changes and selectively implementing only what fits the project context prevents the introduction of unnecessary or incorrect logic.
Agent Skills are folders of instructions, scripts, and resources that agents can discover and use to do things more accurately and efficiently. These skills are designed to be automatically discovered and used by AI agents, providing user-specific context they can load on demand, hence extending their capabilities based on the task they're working on.
Key Learning Points:
Moving instructions from a giant system prompt into a SKILL.md file allows for "progressive disclosure," where the agent only loads the knowledge it needs for the specific task at hand.
Provide a strict directory layout (Instructions in SKILL.md, automation in scripts/, and context in references/) to ensure the agent can reliably find and execute tools.
Include specific trigger phrases & tags, as well as clear and well defined steps. This prevents the agent from guessing, thus ensuring it follows the exact logic required for the most effective application.
So far, I've experimented with many AI coding tools, incorporating them into my workflow and trying them out.
I've tried opencode, Mistral Vibe, Proxy AI, Github Copilot, Cline.
They can be divided into roughly two categories: AI tools that live in the CLI and those that integrate directly into IDEs. While their usage may be similar (many can inject context using slash/@ commands, in addition to other common features), their value can be quite different.
I've found that tools integrated directly into IDEs feel superior for engineering, provided that one does not enable settings such as "YOLO" mode (editing without permission). This way, you can review the AI's work file by file and guide it if its approach needs changing.
While I've found human-in-the-loop workflows to feel better as a developer (more supervision over the work), less hands-on approaches can also be useful for iterating quickly. However, the success of these methods is highly contingent on model quality.
On top of that, leveraging plan/act modes, skills, and keeping context focused can improve model performance.
Resources: Cline documentation
So far, I've been working on much of the CLI components of MarkBind. I've done a lot of research on it while working on PRs such as the TypeScript migration and migrating the TypeScript output from CJS to ESM. I would say I'm currently more well-versed in it than the average developer, thanks to my deep involvement in these migrations, where I've had to update tooling, workflows, and the developer experience around TypeScript.
Key Learning Points
Resources: CJS vs ESM (Better Stack)
I learnt and used LangChain to create an AI workflow for the CSV classifier task.
Resources: LangChain documentation
Worked with the team to explore adding repo-specific AI coding skills for common harnesses such as Claude, OpenCode and GitHub Copilot.
Created subagents to handle specific tasks such as writing unit tests, generating documentation, and refactoring code.
Learned about Vue 3's reactivity system, including ref, reactive, and computed properties.
Used reactive and computed to implement dynamic data tag count in CardStack Component.
Learned to use the TypeScript compiler (tsc) to check for type errors and compile TypeScript code to JavaScript.
Several useful configs/flags learned:
When making my own course notes with MarkBind, I realised a need to export the entire site to a PDF file so that
it could be printed and brought to exams, and distributed easily in general. That's why I started exploring the idea
of creating a @/packages/core-pdf module that achieves this goal.
Considering MarkBind already creates a static site with proper formatting, including appropriate CSS for print media, I decided to leverage that and use a headless browser to render the site and print it to PDF.
In the _site directory, the pages are individual .html files, with no particular order in the directory.
The module parses the .html pages to extract the <site-nav> structure, which contains the correct page order.
OpenCode is a CLI program for agentic coding. It allows users to use the same application with different LLM models, supporting many different providers.
A declarative language for designing user interfaces, which can be used through PySide to create GUI applications in python.
Learnt about the internal workings of Git, including how Git stores data, how it manages branches, etc.
Learnt how to develop desktop applications using Electron. In particular, although I had prior experience with Electron, the tech stack was a bit different from what I had used before. For example, I had used Electron Forge before, but this time I used Electron Builder, which is supposed to be more powerful and flexible. I also had to learn how to set up the project structure and configure the build process for the Electron app.
Used React with TypeScript for the frontend of the Electron app. Learnt core concepts like using hooks for state management and side effects, as well as how to structure a React application. I also had to learn how to integrate React with Electron.
Learnt how to use Tailwind CSS for styling the Electron app.
Learnt how to use Redux for state management in the Electron app.
Learnt how to use MarkBind for creating documentation and websites. In particular, I learnt how to set up the MarkBind project, since I wasn't the one who did that in my CS2103T team.
Learnt more about GitHub workflows and CI/CD by setting up GitHub Actions for my projects. I set up actions for lint checks, building and deploying the pages, building and testing the Electron app, etc. The most notable thing I learnt was testing the Electron app in a macOS environment, which was very useful since I don't have a macOS device to test on.
Practiced context and prompt engineering by comparing self-proposed solutions with AI-generated solutions and analyzing the differences. By comparing different prompts and contexts, I learnt how to craft better prompts and optimise context to get better results from AI models.
Found out about GitHub Copilot Code Review and used it to review my code and get suggestions for improvements.
Collaborative Context Management: Understand how to provide good context for the Copilot agent to work with, such as relevant files or documentation snippets (.github/copilot-instructions.md), to help the agent generate more accurate suggestions.
Verification and Human-in-the-Loop: Learn to treat Copilot's output as a "first draft" that requires additional prompting and extensive testing against the logic flow and existing architectural constraints.
AI-Assisted Code Reviews: Use Copilot directly in GitHub to analyze PRs by asking it to explain complex logic, identify potential edge-case bugs, or ensure adherence to project standards. It can also be invoked directly to perform a review on its own.
Resources used:
Agentic Workflows: Experimented with using autonomous subagents capable of executing multi-step tasks like major refactoring. (Breaking down into Planning + Model Making + Writing Tests + Executing with TDD)
Efficient Branch Management (Worktrees): Leveraging Git worktrees with Claude Code to perform multiple concurrent experimental changes in isolated environments, preventing interference with main development tasks.
Resources used:
This module exposed me to multiple AI coding tools that are popular in the market, such as GitHub Copilot, Codex by OpenAI, and Claude Code.
My first major use of Claude Code for RepoSense PRs was to implement a YAML configuration wizard, which revealed both the promise and the limits of AI-assisted development.
The broader takeaway is that working with AI on an existing codebase still demands that the developer be deeply familiar with it. Claude is a powerful collaborator, but it cannot be expected to independently scour a codebase and always arrive at the most contextually appropriate solution. That judgment still has to come from you.
While working on my YAML config wizard, I spent some time researching terminal UIs when deciding whether to implement a TUI or GUI for the wizard.
After the research detour, I went with a GUI.
A YAML file is a human-readable, plain-text file used mainly for configuration, DevOps, and data serialization. It uses indentation-based, hierarchical structures rather than curly braces or brackets for data storage. Such a key-value pair structure makes it more readable and human-friendly compared to JSON or XML. Key aspects:
Keys and values are separated by :.
Comments start with the # symbol.
Worked with YAML files when exploring the development of a GUI .yaml config file generation wizard.
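A tiny illustrative snippet (the keys and values are hypothetical) showing these aspects:

```yaml
# Comments start with the '#' symbol
title: My Project          # keys and values separated by ':'
options:                   # nesting is expressed purely by indentation
  verbose: true
branches:                  # lists use '-' items
  - main
  - release
```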
GitHub Actions is GitHub's built-in CI/CD platform that automates workflows in response to repository events like pushes and pull requests.
Workflows live in .github/workflows/ and consist of jobs that run on GitHub-hosted virtual machines (runners).
Reusable Actions handle common steps (e.g. actions/checkout, actions/setup-java).
Across my contributions to RepoSense, I gained significant practical experience with how GitHub Actions CI interacts with the development workflow.
RepoSense's integration.yml runs a build matrix across six OS variants (Ubuntu, macOS, Windows) and includes separate jobs for backend tests (Gradle checkstyleAll, test, systemTest) and Cypress frontend tests.
One of my earliest lessons came from the temp repo directory PR (#2537), where I refactored repository cloning to use temporary directories. The change worked locally but broke system tests on CI because the test environment assumed cloned repos persisted across test runs for snapshot reuse. I had to add a SystemUtil.isTestEnvironment() guard to skip cleanup in tests, teaching me that CI runners have different lifecycle assumptions than local development.
The deleteReposAddressDirectory() cleanup function had to be updated to match the new naming convention, showing me how Gradle build scripts and CI are tightly coupled.
The title.md to intro.md rename PR (#2516) and its follow-up removal of backward compatibility (#2566) taught me about the CI pipeline's breadth. Changes that touched build.gradle, Vue frontend components, Cypress test support files, and documentation all had to pass linting, checkstyle, unit tests, system tests, and frontend tests across the full OS matrix.
The config wizard work gave me the deepest CI experience.
RepoSense's frontend tests run through the testFrontend task, which uses the ExecFork plugin to start Vite dev servers as background processes before running tests. When I added wizard-specific Cypress tests, they initially failed in CI because the global support.js beforeEach hook visits localhost:9000 before every spec, interfering with wizard tests on port 9002. Fixing this meant updating support.js and using Cypress.env('configWizardBaseUrl') with absolute URLs in intercepts, adding the serveConfigWizard Gradle task with proper dependsOn/mustRunAfter ordering, and then rewriting the tests from scratch. I also encountered multiple rounds of linter failures in CI (Pug lint errors and Java checkstyle violations) that passed locally but were caught by the CI pipeline's stricter enforcement, reinforcing the importance of running the full lint suite locally before pushing.
This activity showed me how to move from using an LLM interactively to embedding it inside a real Python workflow.
I learned how to call an OpenAI model from code using the SDK, pass structured prompts and dataset rows into the model, and capture the output in a machine-usable form for downstream processing. Here, that meant using an LLM to validate keyword labels in a CSV instead of relying only on manual checking.
I also learned that calling LLMs in programs requires more than just sending prompts. In practice, we had to handle environment setup, API keys, Python package installation, and compatibility issues between models and SDK parameters.
The activity also surfaced operational concerns that matter in scripts: rate limits, batching requests, retries, progress logging, and writing outputs to files like validation_report.csv. I learnt that when LLMs are used programmatically, the surrounding engineering is just as important as the prompt itself.
A major takeaway was that LLMs work well in scripts as one component in a larger pipeline. I used deterministic rules for initial labeling, then layered LLM validation on top as a semantic checker. We can combine traditional code for consistency and speed with LLMs for judgment and language understanding.
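That pipeline shape can be sketched in a few lines of Python. The real task used the OpenAI SDK over the network; here the model call is stubbed out (call_model is any callable), so the sketch shows only the surrounding engineering: retries for transient errors, per-row processing, and capturing output as CSV. The function and column names are illustrative assumptions:

```python
import csv
import io
import time

def validate_label(row, call_model, retries=3, backoff=0.0):
    """Ask the model whether a keyword fits its text, retrying transient errors."""
    prompt = f"Does keyword '{row['keyword']}' fit text: {row['text']}?"
    for attempt in range(retries):
        try:
            return call_model(prompt)
        except RuntimeError:                      # e.g. a rate-limit error
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    return "error"

def run_pipeline(rows, call_model):
    """Validate each CSV row and write verdicts to a machine-usable report."""
    report = io.StringIO()
    writer = csv.DictWriter(report, fieldnames=["keyword", "verdict"])
    writer.writeheader()
    verdicts = []
    for row in rows:
        verdict = validate_label(row, call_model)
        writer.writerow({"keyword": row["keyword"], "verdict": verdict})
        verdicts.append(verdict)
    return verdicts, report.getvalue()
```

In the real script, call_model would wrap the SDK call, and report.getvalue() would be written to a file such as validation_report.csv.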
Worktrees are a Git feature that allow you to have multiple branches checked out simultaneously on your system, through multiple working directories all associated with a single local repository. This allows users to work on multiple features/branches without constantly switching between them and stashing changes.
There is no need for git stash when switching tasks, as each worktree has its own dedicated workspace.
Operations like git fetch or creating a new branch are immediately reflected in all linked worktrees.
Commands:
git worktree list
git worktree add <path> [<branch>]
git worktree remove <path>
git worktree prune
Resources used:
Compared with the older java.io API, the java.nio.file package let me perform common file operations (reading, writing, copying, and deleting) in a more expressive and reliable way.
I presented a teachback on Google Stitch, a Google Labs tool that generates high-fidelity UI designs from plain text prompts, and found it impressive for early-stage prototyping.
Stitch can produce a DESIGN.md file that captures your color tokens, typography scales, and layout rules in a format Claude Code understands natively, so instead of copying and pasting design specs between apps, your agent connects directly and reads them in real time.
Add the Stitch MCP server with claude mcp add stitch --transport http https://stitch.googleapis.com/mcp --header "X-Goog-Api-Key: YOUR-API-KEY" -s user; once that's in, you can verify it's live by running /mcp inside Claude Code and checking the server list.
The server exposes tools such as create_project, generate_screen_from_text, get_screen_code, and build_site, which means you can describe a screen in natural language, have Stitch generate it, and have Claude Code pull the HTML/CSS and scaffold it into React components, all without ever leaving the terminal.
Through building the RepoSense Configuration Wizard, I gained hands-on experience with Vue.js and the Pug templating language.
Vue.js is a progressive JavaScript framework for building user interfaces. It uses a component-based architecture where each .vue file encapsulates a template, script, and style block.
Pug is an indentation-based HTML templating language that eliminates closing tags and angle brackets, producing cleaner and more concise markup. Vue supports Pug natively via <template lang="pug">, allowing developers to write Pug syntax while still using Vue directives like v-model, v-for, and @click.
I learned the core Vue directives (v-model, v-for, @click, :class) and the importance of running a Pug linter (puglint) to catch formatting issues.
I wrote a comprehensive Cypress end-to-end test suite for the config wizard.
Cypress is a JavaScript end-to-end testing framework for modern web applications. Unlike Selenium-based tools that run outside the browser, Cypress executes directly in the browser alongside the application, giving it native access to the DOM, network requests, and browser APIs. It provides a chainable command API (e.g., cy.get().type().blur()) for interacting with elements, cy.intercept() for stubbing and spying on network requests, and cy.wait() for synchronizing on asynchronous operations.
I used cy.intercept() to stub backend API calls, which allowed the frontend tests to run independently of the Java backend server.
I wrote setupIntercepts() helper functions that accept overrides, making it easy to test both success and error scenarios; for example, passing { validate: { valid: false, error: 'Repository not found' } } simulates an invalid repo URL.
Other techniques included stubbing window:alert with cy.stub().as('alert') to assert on alert messages, using cy.wait('@alias') to synchronize on intercepted network requests, and configuring a custom baseUrl via Cypress.env('configWizardBaseUrl') so the wizard tests can target a different dev server than the main RepoSense app.
Resources used:
I familiarised myself with key Claude Code commands that let me manage the context window of a conversation, and learnt to plug local LLMs into Claude Code.
/compact — manually compress at ~80–90%; run at logical task breakpoints
/clear — full wipe; use when switching to an entirely new task
/resume — returns to a previous session via an interactive picker; or use claude --resume / claude -c from the terminal
/rewind — rolls back to any prior turn; choose to revert conversation only, code only, or both. Trigger with /rewind or double-tap Esc
/model — switch models mid-session without restarting. Use Option+P / Alt+P as a shortcut while composing.
/model opus — most capable, slowest
/model sonnet — balanced default
/model haiku — fast and lightweight
To use Gemma 4 locally, connect Claude Code to Ollama's Anthropic-compatible API. Gemma 4 (26B MoE, 3.8B active) supports 256K context with zero API cost.
@ File References: use @<filepath> to inject a file's content directly into context. Claude Code autocompletes the path, which is faster than describing the file or waiting for Claude to find it.
AES-GCM vs AES-ECB: AES-GCM is an authenticated encryption mode providing both confidentiality and integrity in one primitive, whereas AES-ECB is deterministic and unauthenticated — the same plaintext always produces the same ciphertext, making it vulnerable to pattern analysis.
Non-deterministic encryption: AES-GCM uses a random IV per encryption call, so the same plaintext produces a different ciphertext each time. This breaks any code that compares ciphertexts directly and requires a decrypt-then-compare approach instead.
SHA-1 vs SHA-256: SHA-1 is considered cryptographically weak by modern standards. Upgrading to SHA-256 provides stronger collision resistance and a larger 256-bit output.
Key separation: Using the same key for both AES encryption and HMAC is bad practice as cross-algorithm interactions can theoretically leak key material. Separate independently generated keys should be used for each purpose.
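The hashing and key-separation points can be demonstrated with the Python standard library alone (an AES-GCM demo would need a third-party library such as cryptography, so it is omitted here; the message bytes are placeholders):

```python
import hashlib
import hmac
import secrets

# SHA-1 vs SHA-256: the upgrade grows the digest from 160 to 256 bits.
d1 = hashlib.sha1(b"message").digest()      # 20 bytes
d256 = hashlib.sha256(b"message").digest()  # 32 bytes

# Key separation: independent keys for encryption and for authentication,
# instead of reusing one key across algorithms.
enc_key = secrets.token_bytes(32)  # would feed the AES cipher
mac_key = secrets.token_bytes(32)  # used only for the HMAC

tag = hmac.new(mac_key, b"ciphertext-bytes", hashlib.sha256).digest()

# Verify with a constant-time comparison to avoid timing side channels.
valid = hmac.compare_digest(
    tag, hmac.new(mac_key, b"ciphertext-bytes", hashlib.sha256).digest()
)
```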
Hibernate's hbm2ddl update mode is unreviewable and untraceable.
validate — switch from update to validate so Hibernate verifies the schema matches entities on startup instead of silently patching it. This makes schema drift visible immediately rather than masking missing migrations.
liquibase-hibernate as referenceUrl — reads the desired schema directly from entity annotations, eliminating the need to spin up a server on a reference branch just to materialise its schema for diffing.
Learning points:
Resources used:
Gemini Code Assist is an AI-powered collaborator integrated directly into the IDE to help developers write, understand, and troubleshoot code.
Learning points:
Resources:
Effective for understanding a large codebase using #codebase, helping me to identify relevant directories and files for E2E test migrations
Useful for generating repetitive or boilerplate files (e.g. SQL-specific E2E test JSON) when similar examples already exist
Less effective at identifying logical errors, often fixing symptoms (modifying test data) instead of root causes (updating test logic)
Struggles with browser-based E2E tests due to lack of awareness of actual UI state and rendered content
May ignore constraints in prompts and go off-tangent, requiring careful supervision and iteration
Different modes can serve different purposes: Ask/Plan for exploration, Edit/Agent for code generation
Undo functionality is useful for restarting cleanly.
Output quality can be inconsistent even with similar prompts, requiring manual verification (especially for strict JSON formats).
Example dependency chain: DeleteInstructorActionTest → InstructorLogic → Logic → InstructorsDb.
Top-down approach (front to back): Starts from the endpoints of the dependency chain.
Bottom-up approach (back to front): Starts from database or low-level components.
The choice of approach should be made based on the scope, risk, and complexity of the migration task.
Treating CI/CD auth as an afterthought is a real risk as deployment credentials have broad cloud access, never expire by default, and are hard to rotate
Short-lived tokens (via Workload Identity Federation) eliminate an entire class of risk. If a token leaks, it's expired within the hour
Workload Identity Federation: Rather than storing a credential, the pipeline proves its identity
IAM makes least privilege concrete: grant a specific role, to a specific identity, on a specific resource. A compromised deploy pipeline should only be able to deploy
Scoping trust by repo and branch goes further because even a fork of the same codebase can't trigger a deploy.
Learned how to use Cursor as an AI-assisted development environment for code generation, refactoring, debugging, and understanding unfamiliar codebases.
Resources used:
Learned how to define project-specific agent instructions using agents.md to improve consistency, task delegation, and adherence to coding conventions.
Resources used:
Learned how to integrate the OpenAI API into software applications to build LLM-powered features and automate language-based tasks, such as CSV parsing for form submissions.
Resources used:
Learned how to design effective prompts for technical tasks such as code generation, debugging, summarization, and implementation planning.
Resources used:
Learned how to use AI tools to analyse bugs, explain code behaviour, and support incremental refactoring in existing systems.
Resources used:
Learned how to quickly prototype AI-enabled product ideas by combining LLM APIs with frontend or backend applications.
Resources used:
@BeforeTest and @BeforeMethod differences: @BeforeTest runs before the entire test class, while @BeforeMethod runs before every @Test in the same class; the same applies to @AfterTest and @AfterMethod. Source
Use Closes #<issue-number> or Fixes #<issue-number> to automatically close an issue when a PR is merged; this doesn't work in issues (an issue cannot close another issue).
Use git switch -c <branch-name> to create and switch to a new branch. git switch was introduced solely for checking out branches, while git checkout can be used for branches, commits, and files. To create a branch tracking an upstream branch: git switch -c <branch-name> <upstream-name>/<upstream-branch-name>.
To check out a branch like chore/new-branch from the upstream repo and work on it locally, simply create a new branch of any name (preferably a similar one, to better distinguish it) and make it track that upstream branch: git switch -c chore/new-branch upstream/chore/new-branch. Interactive rebase is done with git rebase -i <target-branch>.
Use ng test --u to update snapshots.
@CreationTimestamp and @UpdateTimestamp annotated attributes are initialized automatically with the same initial value; @UpdateTimestamp will then update itself when the entry is updated. Source