Git-Mastery:
MarkBind:
Open:
RepoSense:
TEAMMATES:
GitHub Copilot is an AI-powered coding assistant built by GitHub and OpenAI that helps programmers write code faster and with less effort.
Learning points:
Resources:
...
GitHub Copilot is a code-completion and AI programming assistant that assists users in their IDE. It uses LLMs to help with code generation, debugging, or refactoring through real-time suggestions or natural-language prompts.
Key Learning Points:
Context is King! Providing all relevant files immediately leads to faster, more accurate fixes. The AI performs best when it "sees" the full scope of the bug.
Don't stop at the first working fix. Asking the AI to improve code quality and clarity after the initial generation helps eliminate "AI-style" clutter and technical debt.
Initial AI suggestions often prioritize the simplest fix. Manual prompting and investigation of edge cases are still needed to ensure the solution is robust.
We should treat AI as a collaborator, and not an automated system. Reviewing proposed changes and selectively implementing only what fits the project context prevents the introduction of unnecessary or incorrect logic.
Agent Skills are folders of instructions, scripts, and resources that agents can discover and use to do things more accurately and efficiently. These skills are designed to be automatically discovered and used by AI agents, providing user-specific context that they can load on demand, thereby extending their capabilities based on the task they’re working on.
Key Learning Points:
Moving instructions from a giant system prompt into a SKILL.md file allows for "progressive disclosure," where the agent only loads the knowledge it needs for the specific task at hand.
Provide a strict directory layout (instructions in SKILL.md, automation in scripts/, and context in references/) to ensure the agent can reliably find and execute tools; see the sketch below.
Include specific trigger phrases & tags, as well as clear, well-defined steps. This prevents the agent from guessing, ensuring it follows the exact logic required for the most effective application.
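As a rough sketch of the layout described above (the skill folder name and per-file annotations are illustrative; only SKILL.md, scripts/, and references/ come from the points above):

```
my-skill/
├── SKILL.md      # instructions, trigger phrases/tags, and well-defined steps
├── scripts/      # automation the agent can run
└── references/   # extra context the agent loads on demand (progressive disclosure)
```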
Resources Used:
.zoom__day(v-for="day in selectedCommits", :key="day.date")
The :key attribute is a special Vue.js directive (here written in a Pug template) used to uniquely identify each element in a looped list. It tells Vue which elements have changed, been added, or removed, allowing it to accurately update the DOM.
| Term | Explanation |
|---|---|
| v-for | A Vue.js directive used to render a list of items by iterating over an array or object. |
| :key | : is shorthand for v-bind, which binds the attribute's value to a JavaScript expression. In this case, it binds the key attribute to the dynamic expression day.date. |
In Vue, custom events emitted with emit() from a child component can be listened to in the parent component using the @event-name syntax.
But they are only received by the immediate parent component.
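A minimal sketch in the same Pug style as the snippet above (child-panel, day-clicked, and onDayClicked are hypothetical names), assuming the child component calls this.$emit('day-clicked', day):

```pug
//- Template of the direct parent: it listens for the child's custom event here.
//- A grandparent would not receive 'day-clicked' unless this parent re-emits it.
child-panel(@day-clicked="onDayClicked")
```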
Pug's parser misinterprets where the attribute value ends when it is split across lines, e.g.
:class="{warn: user.name === '-',
'active-text': ...}"
The parser gets confused by the line break and the quote positioning. It's a quirk of how Pug tokenizes/parses attributes across multiple lines.
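For contrast, the same attribute parses fine when the whole expression is kept on a single line; a minimal sketch (.user-row and isActive are hypothetical placeholders for the elided parts):

```pug
//- Keeping the object expression on one line avoids the multi-line parsing quirk.
.user-row(:class="{warn: user.name === '-', 'active-text': isActive}")
```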
In Pug, indentation defines parent-child relationships:
tooltip
  a.message-title      // child of tooltip
    .within-border     // child of a
  span.tooltip-text    // child of tooltip (sibling of a)
How does the tooltip component know about tooltip-text? In style.css, we have:
&:hover {
.tooltip-text {
opacity: 1;
visibility: visible;
}
}
| Tag | Explanation |
|---|---|
| `<span>` | The `<span>` tag is an inline container used to mark up a part of a text. |
Need to cd into the frontend folder to run ESLint commands.
cd frontend
npm run lint -- --fix
Can set a condition so the deploy job only runs on a push to the master branch or on a scheduled run. It does NOT run on pull requests (prevents deploying unreviewed changes):
deploy:
needs: build
if: (github.event_name == 'push' && github.ref == 'refs/heads/master') || github.event_name == 'schedule'
Summary of the current GitHub workflow:
Resources:
Can make changes to frontend code and see the updates in the browser without needing to manually refresh the page.
Resources Used:
In-line chat: use Ctrl + Shift + I to open the Copilot chat window.
Claude has Claude Code, which can load folders from your local machine into its interface for code understanding and generation.
It also has a CLI tool.
Learning points:
Resources used:
Effective for understanding a large codebase using #codebase, helping me to identify relevant directories and files for E2E test migrations
Useful for generating repetitive or boilerplate files (e.g. SQL-specific E2E test JSON) when similar examples already exist
Less effective at identifying logical errors, often fixing symptoms (modifying test data) instead of root causes (updating test logic)
Struggles with browser-based E2E tests due to lack of awareness of actual UI state and rendered content
May ignore constraints in prompts and go off on a tangent, requiring careful supervision and iteration
Different modes can serve different purposes: Ask/Plan for exploration, Edit/Agent for code generation
Undo functionality is useful for restarting cleanly.
Output quality can be inconsistent even with similar prompts, requiring manual verification (especially for strict JSON formats).
Example dependency chain: DeleteInstructorActionTest → InstructorLogic → Logic → InstructorsDb.
Top-down approach (front to back): starts from the endpoints of the dependency chain (the test/action end).
Bottom-up approach (back to front): starts from the database or low-level components; see the sketch below.
The choice of approach should be made based on the scope, risk, and complexity of the migration task.
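A rough, hypothetical sketch of the layering behind that chain (simplified class bodies, not TEAMMATES' actual signatures): a top-down migration starts from the test/action end and works toward the database, while a bottom-up migration starts from the database side.

```java
// Hypothetical, simplified layering mirroring the dependency chain above.

// Back / lowest layer: database access (starting point of a bottom-up migration).
class InstructorsDb {
    void deleteInstructor(String email) { /* storage call */ }
}

// Middle layer: depends on the database layer.
class Logic {
    private final InstructorsDb instructorsDb = new InstructorsDb();
    void deleteInstructor(String email) { instructorsDb.deleteInstructor(email); }
}

// Next layer up: depends on Logic.
class InstructorLogic {
    private final Logic logic = new Logic();
    void deleteInstructor(String email) { logic.deleteInstructor(email); }
}

// Front / top layer: the test exercises the whole chain
// (starting point of a top-down migration).
class DeleteInstructorActionTest {
    void testDelete() { new InstructorLogic().deleteInstructor("instructor@example.com"); }
}
```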