CATcher:
MarkBind:
RepoSense:
TEAMMATES:
Angular components are split into three parts: `*.component.ts`, `*.component.html` and `*.component.css`.

`*.component.ts`:

```ts
@Component({
  selector: 'app-auth',
  templateUrl: './auth.component.html',
  styleUrls: ['./auth.component.css']
})
```

This segment is found at the top of the `*.component.ts` files.

- `selector` indicates the keyword that will be used in `*.component.html` files to identify this component, e.g. `<app-auth></app-auth>`.
- `templateUrl` indicates the filepath to the `*.component.html` file.
- `styleUrls` indicates the filepath(s) to the `*.component.css` file(s).

`*.component.html`:

This is the template file. Template files use mostly HTML syntax, with a bit of Angular-specific syntax included, such as the structural directives `*ngIf`, `*ngFor`, etc. The documentation is quite sufficient for understanding the Angular syntax.

`*.component.css`:

This is a stylesheet using normal CSS. There is a `::ng-deep` selector available, which promotes a component style to a global style.
Arcsecond is a string parsing library for JavaScript. An example Arcsecond parser is as follows:
```ts
export const TutorModerationTodoParser = coroutine(function* () {
  yield str(TODO_HEADER);
  yield whitespace;

  const tutorResponses = yield many1(ModerationSectionParser);

  const result: TutorModerationTodoParseResult = {
    disputesToResolve: tutorResponses
  };
  return result;
});
```
- `str(TODO_HEADER)` matches the start of the string against `TODO_HEADER`.
- `whitespace` matches the next part of the string against one or more whitespace characters.
- `many1(ModerationSectionParser)` applies the `ModerationSectionParser` one or more times.

GraphQL is a query language and architecture for building APIs, serving a similar role to REST. Unlike REST, where the server defines the structure of the response, in GraphQL the client can request the exact data it needs.
Apple laptops changed to the ARM64 architecture back in 2020. Node versions released before then do not provide native ARM64 builds, which caused issues with the GitHub Actions. There is a workaround: running under `arch -x86_64` and manually installing Node instead of using the setup-node GitHub Action, but the simpler solution was to upgrade the test to use Node version 16.x.
Tests the application by running it in a browser, then interacting with HTML elements and checking for expected behaviour. You can use the Playwright extension for Chrome and the extension for Visual Studio Code to generate tests and selectors. ...
Issue faced: I faced an issue when setting up the project locally that was related to a default NPM package used by Node to build and compile native modules written in C++. I had to dig deep into how packages are installed and managed by NPM to resolve the issue, and have documented my findings as follows:

Knowledge gained: npm is the package manager bundled with Node.js, helping developers to install and manage packages (i.e. libraries/dependencies) via `package.json`; installed packages are placed in the `node_modules/` folder. Reference: https://docs.npmjs.com/packages-and-modules
Issue faced: I realised that we were using an npm command that I was very unfamiliar with, namely `npm run ng:serve:web`, and I wondered what this command meant.

Knowledge gained: Commands of the form `npm run <command>`, e.g. `npm run ng:serve:web`, are actually self-defined scripts under the `package.json` file (unlike built-in commands such as `npm build`). Reference: https://docs.npmjs.com/cli/v9/commands/npm-run-script
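For illustration, such a script might be declared like this (a hypothetical `package.json` excerpt, not CATcher's actual configuration):

```json
{
  "scripts": {
    "ng:serve:web": "ng serve --configuration web",
    "test": "ng test"
  }
}
```

Running `npm run ng:serve:web` then simply executes whatever command is defined under that key.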
Issue faced: CATcher uses Node 16 while WATcher uses Node 14 to build; it was hard to switch between Node versions quickly when working on both projects.
Knowledge gained: We can use NVM to easily manage and switch between different Node versions locally.
A typical component in Angular consists of 3 files.
Each component can have a module file where we state the components or modules that this component depends on (i.e. the imports array) and the components that are provided by this module (i.e. the declarations array). This helps increase the modularity and scalability of the whole application.
As a developer coming from React, here are some clear differences I have observed:
Reference: https://v17.angular.io/guide/component-overview
While working on issue #1309, I had to delve deep into how the `IssueTablesComponent` is implemented in order to create new tables. A few meaningful observations are summarised as follows:

- The issues are served by `IssueService`, which is initialized based on `IssuesFilter` and will periodically pull the issues from GitHub.
- The `filters` we inject when creating the `IssueTablesComponent` let the base issues be filtered down to the issues that we are concerned with.
- Action buttons are not fixed in the `IssueTablesComponent` itself; we only specify the action buttons that we want when creating the `IssuesTablesComponent` through the `actions` input.

Issue faced:
While working on the new phase (i.e. the bug-trimming phase) for CATcher, the team decided to use a `feature-bug-trimming` branch as the target branch we all merge into. However, I noticed that when we created or merged PRs to that feature branch (see this issue for more details), no GitHub workflows/actions were being run. This put us at risk of failing tests without knowing, so I spent some time learning how GitHub workflows/actions are triggered.

Knowledge gained:

- Triggers are declared in the `on:` section within the workflow file (i.e. the `.yml` file).
- A workflow can be triggered by a `push` or `pull_request` to certain branches that are included:

```yaml
on:
  # Automatically triggers this workflow when there is a push (i.e. new commit) on any of the included branches
  push:
    branches: [sample-branch1, sample-branch2]
  # Similar to push:, but for PRs towards the included branches
  pull_request:
    branches: [sample-branch1]
```

- A workflow can also be set to run manually with the `workflow_dispatch` keyword:

```yaml
on:
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
```
Issue faced: As both CATcher and WATcher involve heavy interaction with the GitHub API (i.e. GitHub acts as our database), I often ran into issues related to the models that we retrieve from the GitHub API:
Knowledge gained:
Reference:
I learned about the `ngx-markdown` library while I was working on a fix to preserve whitespace when converting Markdown to HTML. `ngx-markdown` combines multiple parsing and rendering libraries into one package: it supports Marked, Prism.js, Emoji-Toolkit, KaTeX, Mermaid, and Clipboard.js. I learned about configuring the options for the Markdown HTML element.

Marked is the main parser we use for our comment editor when creating/editing issues and responses. Any text that we write in Markdown syntax is converted into HTML elements using Marked. I found out that we can actually override how Marked generates the HTML elements: we can add attributes like classes and styles, and even modify the text before rendering it.
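As a rough sketch of what such an override can look like (using the standalone Marked v4-style renderer API; the class and attribute added here are purely illustrative, not what CATcher actually emits):

```ts
import { marked } from 'marked';

// Override how Marked renders links: add a class and open in a new tab.
const renderer = new marked.Renderer();
renderer.link = (href: string, title: string | null, text: string): string =>
  `<a href="${href}" title="${title ?? ''}" class="comment-link" target="_blank">${text}</a>`;

marked.setOptions({ renderer });

// '[docs](https://example.com)' now renders with the extra attributes.
console.log(marked.parse('[docs](https://example.com)'));
```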
WATcher requires Node 14 in order to `npm install` some of its dependencies. However, instead of having to install and reinstall a different Node version between different projects, I can use `nvm-windows` to install multiple Node versions and switch between them. However, the latest version of `nvm-windows` has some issues if you want to install Node 14. After some debugging, I found out that `nvm-windows v1.1.11` can install Node 14 with no issues.
While working on creating a new phase, I learnt a lot about how phases are managed in CATcher. Every phase has its own phase permissions and phase routing. Phase permissions control certain tasks; for example, creating, deleting, and editing an issue are only allowed in certain phases. Every phase also has its own routing, which is used to load the different pages, ranging from viewing to editing. I also learnt that the repos that hold the issues are generated only at the bug-reporting phase.
While I was working on a PR, I wondered why certain parts of the code were modified after pushing a commit. I then found out that there are commit hooks in place to fix formatting and lint issues. SourceTree actually allows users to bypass the commit hooks if the changes are irrelevant to the PR that the user is working on.
While working on implementing the 'View on GitHub' feature for WATcher, where a user will be able to see the current dashboard on GitHub, I learnt that GitHub searches can actually be performed using URL queries.
While working with URL queries, I learnt that some characters, such as `"`, `!`, `$`, `(` and `)`, are not allowed in URLs. In order to use them, they must be percent-encoded as UTF-8. More information can be found here.
While I was exploring a new feature, I realised that there is no dedicated sandbox for testing the API/queries. This made it hard for me to understand how the queries work and what the query responses look like. It was very troublesome to have to inspect the responses through the network tab.
I also learnt about different GraphQL features like schemas and fragments, which are important for creating reusable and easily maintainable queries.
I also learnt how WATcher uses cursor-based pagination to perform queries to GitHub.
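A hypothetical sketch of that cursor-based pagination pattern against GitHub's GraphQL endpoint (the query shape and helper function are illustrative, not WATcher's actual code):

```ts
const ISSUES_PAGE_QUERY = `
  query issuesPage($owner: String!, $name: String!, $after: String) {
    repository(owner: $owner, name: $name) {
      issues(first: 100, after: $after) {
        pageInfo { endCursor hasNextPage }
        nodes { number title state }
      }
    }
  }
`;

async function fetchAllIssues(owner: string, name: string, token: string): Promise<unknown[]> {
  const issues: unknown[] = [];
  let after: string | null = null;
  do {
    const res = await fetch('https://api.github.com/graphql', {
      method: 'POST',
      headers: { Authorization: `bearer ${token}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({ query: ISSUES_PAGE_QUERY, variables: { owner, name, after } }),
    });
    const { data } = await res.json();
    const page = data.repository.issues;
    issues.push(...page.nodes);
    // Feed each page's endCursor back in as `after` until hasNextPage is false.
    after = page.pageInfo.hasNextPage ? page.pageInfo.endCursor : null;
  } while (after !== null);
  return issues;
}
```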
Following the exploration of GraphQL, I found that some of my teammates trying to implement new features that required data from GitHub were struggling to understand the GraphQL queries due to the lack of visualization. This prompted me to create a sandbox for testing the GraphQL queries.
I discovered how to create reusable queries in Postman using collection variables, such that anyone can fork the collection and start working on it without having to set up anything other than authorization.
I also learnt how to create environments for workspaces so that sensitive data such as secret keys will not be shared publicly.
Angular is the main tool used in both CATcher and WATcher. It is based on TypeScript.
Angular is a component-based framework. Each component is generated with:
Component state is maintained in the .ts file. These state variables can be bound to HTML elements through the use of curly braces `{{ }}`.
Angular offers directives such as `*ngIf` and `*ngFor` that allow us to "use" JS in the HTML files.
Services are used for processing tasks that don't involve what the user sees. This is different from the .component file, which directly handles the things users see. Services are kept in a separate directory, /services/*.
Angular services can be "injected" into other services. This is done in the constructor. Once injected, the service can access any of the injected service's methods. However, it's important not to design the code in a way that causes a circular dependency. I faced this when implementing the presets: the preset service relied on the filter service, but the filter service also relied on the preset service. To fix it, we can redesign the code to remove the circular dependency, or extract the shared parts into a third service that is then injected into both.
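A minimal sketch of constructor injection (the service names here are illustrative, not CATcher's actual services):

```ts
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class LoggerService {
  log(message: string): void {
    console.log(`[app] ${message}`);
  }
}

@Injectable({ providedIn: 'root' })
export class DataService {
  // Angular's injector supplies LoggerService when constructing DataService.
  // If LoggerService also injected DataService, we would have a circular dependency.
  constructor(private logger: LoggerService) {}

  loadData(): void {
    this.logger.log('loading data...');
  }
}
```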
RxJS

RxJS is the core library providing reactivity in Angular applications. It exposes the idea of "observables": when the state of an observable changes, it notifies any listeners attached to it.

Observables can be subscribed to and unsubscribed from at any time, using the `.subscribe` function. It is common practice to denote observables as variables with a trailing "$".

An observable is somewhat similar to a stream. We can register "stream processing functions" such as `map` and `filter`.
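A minimal sketch of these ideas (names are illustrative; assumes RxJS 6+):

```ts
import { BehaviorSubject } from 'rxjs';
import { filter, map } from 'rxjs/operators';

// By convention, observables carry a trailing "$" in their names.
const counter$ = new BehaviorSubject<number>(0);

// Register "stream processing functions" such as map and filter.
const evenLabels$ = counter$.pipe(
  filter((n) => n % 2 === 0),
  map((n) => `even: ${n}`)
);

// Listeners attach with .subscribe and are notified on every state change.
const subscription = evenLabels$.subscribe((label) => console.log(label));
counter$.next(1); // filtered out
counter$.next(2); // logs "even: 2"
subscription.unsubscribe(); // listeners can detach at any time
```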
Material Angular
Material Angular is the design library used by CATcher and WATcher. Unfortunately, it is currently on version 11, while the latest version is 19. Despite this, most of the API is similar.
Material Angular allows us to use pre-made components that follow the Material design style, allowing us to have a consistent and coherent UI experience.
...
CATcher and WATcher are both built using the Angular framework, which is a single-page web application framework. Angular comes with a CLI tool to accelerate development.

The `@Component` decorator in the .ts file identifies the class immediately below it as a component class and specifies its metadata. It associates a template with the component by referencing the .html file (or with inline code).

Drawbacks of using a traditional REST API:

A GraphQL API is resolved via its schema and resolvers:

GraphQL allows users to manually choose which fields they want to fetch from the API.
VueJS is a JavaScript framework for building user interfaces, similar to React. It offers reactive data binding and a component-based architecture, allowing developers to create reusable components that allow for parent-child relationships. Vue is used extensively in MarkBind to create and render website components, such as pages, boxes, code blocks, etc.
TypeScript is a programming language that builds upon JavaScript by adding static typing, enabling developers to catch errors at compile time and write more maintainable code as compared to JavaScript. MarkBind uses TypeScript in its `core` package and has documentation describing the migration process from JavaScript to TypeScript.
TODO: JavaScript to TypeScript migration
Regular Expressions (RegEx) are sequences of characters used to match patterns in text. They can range from simple exact-word matches to complex patterns using special characters.
RegEx is typically used in MarkBind to validate user inputs and check for discrepancies. Some examples include:
- In the `serve` command, to detect IP zero addresses and check the validity of IP addresses.

TODO: What it is, How it works, How it's used in MarkBind, Resources
TODO: jest.config.js – how it works
TODO: What it is How it works How its used in MarkBind Resources
TODO: What it is How it works How its used in MarkBind Resources
TODO: rename, rebase, commit history, CI/CD pipeline config etc.
TODO: What it is How it works How its used in MarkBind Resources
How my code affected it, how Codecov is calculated, why it was negatively affected, what indirect/direct changes mean, upload tokens, v5 etc.
In MarkBind, users can specify highlighter rules following the syntax in our User Guide. MarkBind then highlights the code block appropriately when rendering the site.

MarkBind's `core` package processes highlighter rules in the following steps:

- Rules are parsed in `core/src/lib/markdown-it/highlight/HighlightRuleComponent.ts` to match the syntax for rules such as character- or word-bounded highlights.
- `computeCharBounds()` adjusts user-specified bounds, ensuring they are valid.
- The highlighted portions are wrapped in `<span>` elements (`highlighted` or `hl-data`) to apply the necessary highlighting when rendered.

Previously, the highlighter could not highlight indentation, since it automatically adjusted for it during character-bound calculation. I introduced a feature allowing users to specify absolute character positions rather than relative positions:

- A `+` prefix indicates absolute positioning.
- `computeCharBounds()` was modified to skip indentation-length adjustments if absolute bounds were detected.

An issue arose when using absolute bounds with tab characters (`\t`). Since `\t` is considered a single character but visually occupies more space, the highlighting results were inconsistent. To resolve this, tabs are normalized to spaces using `code.replace(/\t/g, ' ')`.
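As a rough illustration of the idea (not MarkBind's actual implementation; the rule format and helper below are invented for clarity):

```ts
// Computes effective character bounds for one line of code.
// Relative bounds are measured after the indentation; a '+' prefix marks
// absolute bounds, for which the indentation adjustment is skipped.
function computeCharBounds(bound: string, indent: number): [number, number] {
  const isAbsolute = bound.startsWith('+');
  const raw = isAbsolute ? bound.slice(1) : bound;
  const [start, end] = raw.split('-').map(Number);
  const offset = isAbsolute ? 0 : indent;
  return [start + offset, end + offset];
}

// Tabs are normalized to spaces first, so character counts match visual width.
const normalized = '\tconst x = 1;'.replace(/\t/g, '    ');
console.log(computeCharBounds('+0-4', 4), computeCharBounds('0-5', 4));
```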
MarkBind provides several Command Line Interface (CLI) commands, such as `init`, `serve` and `build`. Details can be found in the User Guide.

MarkBind's CLI functionality lies within the `cli` package. It uses the `commander` library to create and configure its CLI commands. The library allows developers to customise the commands, such as their aliases, options, descriptions and actions. The user's specified root and the options are then passed on to the corresponding action function.
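A minimal sketch of how a command can be wired up with `commander` (the command, alias and option here are illustrative, not MarkBind's actual definitions):

```ts
import { program } from 'commander';

program
  .command('serve [root]')
  .alias('s')
  .description('serve a site for live preview')
  .option('-p, --port <port>', 'port to serve on', '8080')
  // The user-specified root and the parsed options are handed to the action function.
  .action((root: string | undefined, options: { port: string }) => {
    console.log(`serving ${root ?? '.'} on port ${options.port}`);
  });

program.parse(process.argv);
```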
The `serve` command

MarkBind's `serve` command allows users to preview their site live. It follows these steps:

1. The `commander` library processes the `serve` command, along with user-specified options, and passes them to the `serve()` function in `cli\src\cmd\serve.js`.
2. The `serve()` function performs preprocessing to verify that the user-specified root contains a valid MarkBind site. If not, an error is thrown and execution stops.
3. A `serverConfig` object is created and passed to the `Site` instance before being used to configure `liveServer.start()`. This resolves to `cli\src\lib\live-server\index.js`, which is a custom patch of the `live-server` library.
4. `live-server` uses Node.js's `http` module to start the web server.
5. The server listens for the `error` event, handling errors such as `EADDRINUSE` (port already in use) and `EADDRNOTAVAIL` (address not available).
6. The server listens for the `listening` event, indicating that the server is ready so the site URL can be opened. The `opn` library is used to automatically open the preview URL.
7. `live-server` listens for file system events like `add` or `change` to trigger a `reload` event, updating the preview in real time.
Issues with MarkBind's `live-server` Patch

MarkBind's `live-server` patch had some issues, particularly with IPv6 addresses:

Invalid IPv6 URLs
When an IPv6 address is supplied by the user, the opened URL is invalid. IPv6 URLs require square brackets `[]`; e.g., the address `::1` should be opened with a URL like `http://[::1]:8080` instead of `http://::1:8080`. As a side note, `localhost` resolves to `::1`.

Incorrect Open URL for EADDRNOTAVAIL
When this error occurs (indicating the specified address is unavailable), the patch retries using `127.0.0.1`. However, the opened URL still referenced the original unavailable address, causing confusion for users.

Missing Warning for the IPv6 Wildcard Address
`serve.js` issues a warning when users specify `0.0.0.0` (the IPv4 wildcard address), but the equivalent warning was missing for IPv6 addresses like `::`.
References:
- `commander` – official `commander` npm page
- `live-server` – official `live-server` npm page
In order to make more well-informed changes and tackle deeper issues, I decided to cover the whole codebase of MarkBind so I could have a much fuller understanding of how the different parts work together.
While doing so, I used a MarkBind site to document the architecture and different packages and classes in the MarkBind codebase. The site can be viewed here: https://gerteck.github.io/mb-architecture/
Collection of titles and headings during generation:

- Pages are generated in `Site/index.ts`.
- `Page.collectHeadingsAndKeywords` records headings and keywords inside the rendered page into `this.headings` and `this.keywords` respectively.

Page Generation and Vue Initialization:

- In `core-web/src/index.js`, `setupWithSearch()` updates the search data by collecting the pages from the site data.
- `setupWithSearch()` is added as a script in the template file `page.njk`, which is used to render the HTML structure of MarkBind pages.
- `VueCommonAppFactory.js` provides a factory function (`appFactory`) to set up the common data and methods for the Vue application shared between server side and client side. In particular, it provides `searchData[]` and `searchCallback()`, which are relevant in the following portion.
- `<searchbar/>` is where MarkBind's search functionality is used; we set the appropriate values: `<searchbar :data="searchData" :on-hit="searchCallback"></searchbar>`

Vue components: `Searchbar.vue` / `SearchbarPageItem.vue`

- `Searchbar.vue` takes in `searchData[]` through `data`, filters and ranks the data based on keyword matches, and populates the dropdown with `searchbarPageItems`.
- It calls the `on-hit` function (which `searchCallback` is passed into) when a search result is selected.
- Each search result is rendered with the `searchbar-pageitem` Vue component (`SearchbarPageItem.vue`).
About Pagefind: a fully static search library that aims to perform well on large sites, while using as little of the user's bandwidth as possible, and without hosting any infrastructure.

Documentation:

It runs after the website framework, and only requires the folder containing the built static files of the website. A short explanation of how it works:

- Add an element with `id="pagefind-search-input"` and initialise a default PagefindUI instance on it, not unlike how Algolia search works.

I got the chance to experience this firsthand.
https://v3-migration.vuejs.org/migration-build
MarkBind (v5.5.3) is currently using Vue 2. However, Vue 2 has reached EOL, which limits the extensibility and maintainability of MarkBind, especially the vue-components package (the UI library package).
Vue 2 components can be authored in two different API styles: the Options API and the Composition API. It was interesting to read about the difference between the two (read it here).
Server-side rendering: the migration build can be used for SSR, but migrating a custom SSR setup is much more involved. The general idea is replacing vue-server-renderer with @vue/server-renderer. Vue 3 no longer provides a bundle renderer, and it is recommended to use Vue 3 SSR with Vite. If you are using Nuxt.js, it is probably better to wait for Nuxt 3.
Currently, MarkBind's Vue components are authored in the Options API style. If migrated to Vue 3, we can continue to use this API style.
Vue 2: In Vue 2, global configuration is shared across all root instances, as the concept of an "app" was not formalized. All Vue instances in the app used the same global configuration, and this could lead to unexpected behaviour if different parts of the application needed different configurations or global directives.
E.g. global APIs in Vue 2, like Vue.component() or Vue.directive(), directly mutated the global Vue instance.
Some of MarkBind's plugins depend on this specific property of Vue 2 (directives, in particular, which are registered after mounting).
However, the shift to Vue 3 took into consideration the lack of application boundaries and potential global pollution. Hence, Vue 3 takes a different approach that takes a bit of effort to migrate.
Vue 3

In Vue 3, the introduction of the app instance via `createApp()` changes how global configurations, directives, and components are managed, offering more control and flexibility. The `createApp()` method allows you to instantiate an "app", providing a boundary for the app's configuration: instead of mutating the global Vue object, components, directives, and plugins are now registered on a specific app instance (scoped global configuration). There are also some particularities with using Vue 3, noted below.
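As a minimal sketch of that scoped registration (component and directive names are illustrative; assumes a Vue build that supports runtime template compilation):

```ts
import { createApp } from 'vue';

// In Vue 2, the equivalents (Vue.component(...), Vue.directive(...)) mutated the
// one global Vue object; here everything is registered on this app instance only.
const app = createApp({ template: '<my-widget v-focus></my-widget>' });

app.component('my-widget', { template: '<span>widget</span>' });
app.directive('focus', { mounted: (el: HTMLElement) => el.focus() });

app.mount('#app');
```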
Vue uses an HTML-based template syntax. All Vue templates (`<template/>`) are syntactically valid HTML that can be parsed by browsers. Under the hood, Vue compiles the template into highly optimized JS code. Using reactivity, Vue figures out the minimal number of components to re-render and applies minimal DOM manipulations.

SFC stands for Single-File Component (`*.vue` files): a special file format that allows us to encapsulate the template, logic, and styling of a Vue component in a single file. All `*.vue` files consist of three parts: `<template>` where the HTML content is, `<script>` for the Vue code, and `<style>`.

SFCs require a build step, but they allow for pre-compiled templates without runtime compilation cost. SFCs are a defining feature of Vue as a framework, and are the recommended approach for Static Site Generation and SPAs. Needless to say, MarkBind uses Vue SFCs.

`<style>` tags inside SFCs are usually injected as native style tags during development to support hot updates, but for production they can be extracted and merged into a single CSS file (which is what Webpack does).
Reference: https://vuejs.org/guide/extras/rendering-mechanism
Terms:

- virtual DOM (VDOM): a concept where an ideal 'virtual' DOM representation of the UI is kept in memory and synced with the 'real' DOM. Adopted by React, Vue, and other frontend frameworks.
- mount: the runtime renderer walks a virtual DOM tree and constructs a real DOM tree from it.
- patch: given two copies of virtual DOM trees, the renderer walks and compares the two trees, figures out the differences, and applies the changes to the actual DOM.

The VDOM gives the ability to programmatically create, inspect and compose desired UI structures in a declarative way (leaving direct DOM manipulation to the renderer).
Render pipeline – what happens when a Vue component is mounted:
It is possible to render Vue components into HTML strings on the server, send them directly to the browser, and finally 'hydrate' the static markup into a fully interactive app on the client.
Advantages of SSR:
SSR: The server's job is to:
Client-Side Hydration: Once the browser receives the static HTML from the server, the client-side Vue app takes over. Its job is to:
Vue 3: `createApp()` vs `createSSRApp()`

`createApp()` does not bother with hydration: it assumes direct access to the DOM, and creates and inserts its rendered HTML. `createSSRApp()` is used for creating a Vue application instance specifically for SSR, where the initial HTML is rendered on the server and sent to the client for hydration. Instead of rendering (creating and inserting the whole HTML from scratch), it patches the existing markup. It also performs initialization, setting up reactivity, components, global properties, etc., and binds events during the mount process (aka hydration).
- `live-server` – a simple development server with live-reloading functionality, used to automatically refresh the browser when changes are made to MarkBind projects.
- `commander.js` – a command-line argument parser for Node.js, used to define and handle CLI commands in MarkBind.
- `fs` (Node.js built-in) – the File System module, used for reading, writing, and managing files and directories in MarkBind projects.
- `lodash` – a utility library providing helper functions for working with arrays, objects, and other JavaScript data structures, improving code efficiency and readability in MarkBind.

While working on MarkBind, I thought that it would definitely be essential to survey other Static Site Generators and the competition faced by MarkBind.
Researching other SSGs available (many of which are open source as well) has allowed me to gain a broader picture of the roadmap of MarkBind.
For example, Jekyll is simple and beginner-friendly, often paired with GitHub Pages for easy deployment, and has a large theme ecosystem for rapid site creation. Hugo has exceptional build speeds, even for large sites. Other SSGs offer multiple rendering modes (SSG, SSR, CSR) on a per-page basis, support React, etc. Considering that the communities of these other SSGs are much larger, with far more resources and manpower to devote, I thought about how MarkBind could learn from them.
Overall, some insights that can be applied to MarkBind would be to:
CommonJS (CJS) is the older style of modules; CJS was the only supported module style in Node.js up till v12.

- CJS uses `require` and `module.exports = {XX:{},}`.
- Files are treated as CJS when they use the `.cjs` extension or `"type": "commonjs"` in package.json.

ECMAScript Modules (ESM) were standardized later and are the only natively supported module style in browsers. ESM is the standard (ECMAScript) way of writing JS modules.

- ESM uses `import { XXX } from 'YYY'` (at the top of the file), `const { ZZ } = await import("CCC");` and `export const XXX = {}`.

Issues I faced:

- Some of the code was written in CommonJS syntax (`require`) instead of ES module syntax (`import`), and hence `import` was not working correctly. This was resolved by configuring the
`tsconfig.json` settings appropriately.

JavaScript offers two script types: module and non-module. (For web pages, JavaScript is the programming language of the web, after all.)

Module script files use ES Modules (`import`/`export`), run in strict mode, and have local scope, making them ideal for modern, modular applications. They load asynchronously and are supported in modern browsers and Node.js (with `.mjs` or `"type": "module"`).

Non-module script files rely on global scope, lack strict mode by default, and load synchronously. They work in all browsers and use CommonJS (`require`) in Node.js, making them suitable for legacy projects or simple scripts.

In short: modules use `import`/`export`, while non-modules use `require`. Use modules for modern, scalable apps and non-modules for legacy compatibility or simpler use cases. Transition to modules for better maintainability.
TypeScript has two main kinds of files. `.ts` files are implementation files that contain types and executable code; these are the files that produce `.js` outputs, and are where you normally write your code. `.d.ts` files are declaration files that contain only type information; they don't produce `.js` outputs and are only used for type-checking.

DefinitelyTyped / `@types`: the DefinitelyTyped repository is a centralized repo storing declaration files for thousands of libraries. The vast majority of commonly used libraries have declaration files available on DefinitelyTyped.

Declaration maps (`.d.ts.map`), also known as declaration source maps, contain mapping definitions that link each type declaration generated in `.d.ts` files back to the original source file (`.ts`). The mapping definitions in these files are in JSON format.
A Vue component typically consists of three main sections.

When doing experimental changes, I thought of letting users specify things like font size, font type, etc. Upon looking at the other components and Stack Overflow, this is what I found:

- Computed properties are defined under the `computed` option. These properties are automatically updated when the underlying data changes.
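A minimal sketch of a computed property in the Options API (Vue 2 style, since MarkBind is on Vue 2; the state and property are illustrative):

```ts
import Vue from 'vue';

export default Vue.extend({
  data() {
    return { fontSize: 14 };
  },
  computed: {
    // Re-evaluated automatically whenever fontSize changes.
    fontStyle(): string {
      return `font-size: ${this.fontSize}px;`;
    },
  },
});
```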
When writing in Markdown, hyperlinks are created using a specific syntax, but behind the scenes this Markdown code is converted into HTML. In Markdown, we use syntax like `[Java Docs](https://docs.oracle.com/javase/8/docs/api/java/lang/String.html)` to create a hyperlink. When the Markdown is converted to HTML, it generates an anchor tag of the form `<a href="https://docs.oracle.com/javase/8/docs/api/java/lang/String.html">Java Docs</a>`. This opens the link in the same tab, as no additional attributes are specified.
In contrast, when we write HTML directly, we can specify additional attributes, such as `target="_blank"`, to control how the link behaves. For example, `<a href="https://markbind.org/userGuide/templates.html" target="_blank">User Guide: Templates</a>` will ensure that the link opens in a new tab.
In one of my deployments on Netlify, some pages did not display the Font Awesome icons properly, leading me to research how they work.

Each Font Awesome icon (fa-linkedin, fa-github) is mapped to a Unicode character in the font file. For example, when running the HTML code `<i class="fa-house"></i>`, CSS first applies the fa-solid class based on its mappings, and also sets the Unicode character for fa-house. The browser then loads the web font `fa-solid-900.woff2` and displays the icon.

WOFF2 is a web-font file format; it is a more compressed version of WOFF and is used to deliver webpage fonts on the fly. In the context of rendering Font Awesome, icons are stored as glyphs in WOFF2 font files; when running `<i class="fa-house"></i>`, the browser loads `fa-solid-900.woff2` if it is supported.
This page is pretty useful.
CSS (Cascading Style Sheets) is a stylesheet language used to control the presentation of HTML documents.
- The `word-break` property provides opportunities for soft wrapping.
- Elements passed in via the `slots` API are considered to be owned by the parent component that passes them in, so the child's styles do not apply to them. To style these elements, target the surrounding container and then select them with a CSS selector such as `.someClass > *`.
- A "virtual" representation of the UI is kept in memory and synced with the "real" DOM.
- The main benefit of the virtual DOM is that it gives the developer the ability to programmatically create, inspect and compose desired UI structures in a declarative way, while leaving the direct DOM manipulation to the renderer.
- Templates provide an easy way to write the virtual DOM and get compiled into a render function. However, the virtual DOM can also be created directly through the render function itself.
- The downside of the virtual DOM is its runtime cost: the reconciliation algorithm cannot make any assumptions about the incoming virtual DOM tree, so it has to fully traverse the tree and diff the props of every vnode in order to ensure correctness. Even if a part of the tree never changes, new vnodes are always created for it on each re-render, resulting in unnecessary memory pressure.

To mitigate this, Vue's compiler applies several optimizations:

- Static hoisting – static nodes that are not reactive and never update are hoisted out of the virtual DOM updates.
- Patch flags – flags that indicate whether a vnode requires reconciliation; bitwise checks on these flags are fast.
- Tree flattening – a block only tracks descendant nodes that have patch flags applied:
```html
<div> <!-- root block -->
  <div>...</div>         <!-- not tracked -->
  <div :id="id"></div>   <!-- tracked -->
  <div>                  <!-- not tracked -->
    <div></div>          <!-- tracked -->
  </div>
</div>
```

The flattened block tree contains only the tracked nodes:

```
div (block root)
- div with :id binding
- div with binding
```
Vue component test utilities library: `Wrapper`

According to my current understanding: after triggering an event or updating state in a test, the `$nextTick()` function of the wrapper's `vm` is called, which waits for the next DOM update flush.
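A small sketch of that flow (assuming Vue Test Utils with Jest globals; the component is invented for illustration):

```ts
import { shallowMount } from '@vue/test-utils';

const Counter = {
  template: '<button @click="n++">{{ n }}</button>',
  data: () => ({ n: 0 }),
};

it('increments on click', async () => {
  const wrapper = shallowMount(Counter);
  wrapper.trigger('click');
  // Wait for the next DOM update flush before asserting on the rendered output.
  await wrapper.vm.$nextTick();
  expect(wrapper.text()).toBe('1');
});
```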
function of the vm of the wrapper is then called which waits for the next DOM update flush.Markbind utilises several workflow files:
pr-message-reminder.yml
- Extracts out the PR description and checks if a proposed commit message is included.Github Actions is used when writing workflows.
github
context is freuqently used for retrieving useful information of the current workflow run. Some examples used(but not limited to) include :
github.actor
is used to detect the username of the user that triggered the workflow event. It can also be used to detect bots who trigger the events.github.event_name
is used to detect the name of the event that triggered the workflow. In the context of markbind, this is often used to check if the triggered workflow is of a particular event (such as pull request) before running the script.A potential limitation arises when using github.actor
to detect bot accounts. That is, if the bot is a github account that is automated by a user. In this case, github currently has no way to detect such accounts.
Local testing of sites often uses localhost to run a local server. This usually resolves to the IP address 127.0.0.1.
MarkBind allows users to specify the localhost address in the IPv4 format. It does not support specifying IPv6 addresses.
MarkBind uses the all-contributors bot to automate the process of adding contributors to the project.
Nunjucks template syntax, such as `{% for %}` and `{{ variables }}`, is evaluated and replaced with the corresponding content before moving to the next stage. ...
TODO: Update
TODO: Update
References:
Familiarised myself with how GitHub Actions work at a high level, and understood basic workflow syntax to debug failing workflow runs.
The issue was discovered to be due to the differences between `pull_request` and `pull_request_target`: `pull_request_target` runs in the context of the base of the pull request, rather than in the context of the merge commit. This causes changes to the workflow in the PR to not be reflected in the runs.
Since the failure was a special case due to the deprecation of certain actions, an exception was made to merge with the failing run. Precaution was taken to ensure the change was as intended by trying it out on a personal fork.
References:
The Gradle build typically includes three phases: initialization, configuration and execution.
There are four fundamental components in Gradle: projects, build scripts, tasks and plugins.
A project typically corresponds to a software component that needs to be built, like a library or an application. It might represent a library JAR, a web application, or a distribution ZIP assembled from the JARs produced by other projects. There is a one-to-one relationship between projects and build scripts.
The build script configures the project based on certain rules. It can add plugins to the build process, load dependencies, and set up and configure tasks, i.e. the individual units of work that the build process will perform. Plugins can introduce new tasks, objects and conventions to abstract away duplicated configuration blocks, increasing the modularity and reusability of the build script.
Resources:
A CI/CD platform automates the build, test and deployment pipeline. There are several main components in GitHub Actions: workflows, events, jobs, actions and runners.

Workflow – a configurable automated process that will run one or more jobs, defined by a YAML file in `.github/workflows`. A repo can have multiple workflows.

Events – a specific activity that triggers a workflow run, e.g. creating a PR or opening an issue.

Jobs – a job is a set of steps in the workflow that is executed on the same runner. Each step can be a shell script or an action.

Actions – a reusable set of repeated tasks. This helps reduce the amount of repetitive code.

Runners – a server that runs the workflows when they are triggered. Runners can be configured with different OSes.

Concurrency in GitHub Actions: by default, GitHub Actions allows multiple jobs within the same workflow, multiple workflow runs within the same repository, and multiple workflow runs across a repository owner's account to run concurrently. This means that multiple instances of the same workflow or job can run at the same time, performing the same steps.

Use `concurrency` to ensure that only a single job or workflow using the same concurrency group will run at a time. GitHub Actions ensures that only one workflow or job with that key runs at any given time. When a concurrent job or workflow is queued, and another job or workflow using the same concurrency group in the repository is in progress, the queued job or workflow will be pending.

To also cancel any currently running job or workflow in the same concurrency group, specify `cancel-in-progress: true`.
Reference: official documentation
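For illustration, a typical concurrency block might look like this (the group name is a common convention, not a MarkBind- or RepoSense-specific one):

```yaml
concurrency:
  # One group per workflow + branch; a newly queued run cancels the in-progress one.
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```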
https://docs.docker.com/
Containerization
https://www.ibm.com/topics/containers
Containerization is a way to deploy application code to run on any physical or virtual environment without changes. Developers bundle application code with related libraries, configuration files, and other dependencies that the code needs to run. This single package of the software, called a container, can run independently on any platform. Containerization is a type of application virtualization.
Use Docker for containerization
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications.
Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security lets you run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application.
Docker architecture
The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
Another Docker client is Docker Compose, which lets you work with applications consisting of a set of containers.
The Docker daemon (`dockerd`) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

The Docker client (`docker`) is the primary way that many Docker users interact with Docker. When you use commands such as `docker run`, the client sends these commands to `dockerd`, which carries them out. The `docker` command uses the Docker API. The Docker client can communicate with more than one daemon.

A Docker registry stores Docker images. When you use the `docker pull` or `docker run` commands, Docker pulls the required images from your configured registry. When you use the `docker push` command, Docker pushes your image to your configured registry.
Docker objects: how they work

Dockerfile: a Dockerfile is a text-based document that's used to create a container image. It provides instructions to the image builder on the commands to run, files to copy, startup command, and more (see the sketch after this list). Common instructions include:

- `FROM` – the base image to start from (plus image tag)
- `WORKDIR` – create the source directory and put the source code there
- `RUN` – install dependencies
- `USER` – create/use a non-root user
- `COPY` – copy files from the local machine to the image
- `ENV` – set an environment variable
- ...
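A small illustrative Dockerfile combining these instructions (the base image, paths and commands are assumptions for a generic Node.js app, not this project's actual setup):

```dockerfile
# Base image + tag
FROM node:16-alpine
# Create and switch to the app directory
WORKDIR /app
# Copy dependency manifests from the local machine
COPY package*.json ./
# Install dependencies
RUN npm ci
# Copy the rest of the source code
COPY . .
# Set an environment variable
ENV NODE_ENV=production
# Run as a non-root user
USER node
# Startup command
CMD ["node", "index.js"]
```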
Build: build the Docker image based on the Dockerfile; a `.dockerignore` file can be used to exclude certain files.
Run: create a container based on the image.
Kill & Stop: stop a container.
- Define multiple Docker applications in a single YAML file: https://docs.docker.com/compose/gettingstarted/
- Docker Compose for the client side with a proxy: https://stackoverflow.com/questions/60808048/react-app-set-proxy-not-working-with-docker-compose
- Docker networking: https://www.geeksforgeeks.org/basics-of-docker-networking/
- Docker storage: https://www.geeksforgeeks.org/data-storage-in-docker/
- Installing the required `apt` package as a job for the Cypress frontend test also works, but the former solution is more elegant and concise. Resource referred from GitHub Docs.
- Learnt about how ESLint ensures a unified style of JS/TS code. Had the chance to go through the ESLint documentation for member-delimiter-style (https://eslint.style/rules/ts/member-delimiter-style), understand how it works, and make the modifications in the ESLint configurations and the codebase to ensure the CI job for lintFrontend passes.
- Learnt about how the Vite build identifies the base directory when serving static assets.
- Learnt how to configure Vercel on a GitHub repository.
- Learnt about the various aspects to consider when designing an immutable class in Java, such as:
While doing my user experiments on RepoSense, I noticed that the GitHub IDs of contributors were not displayed correctly in the generated contribution dashboards with only the "--repos" flag without the config files. This led me to investigate how RepoSense handles GitHub-specific information and how it differs from Git. Since Git logs only contain commit metadata such as author names and emails, RepoSense is unable to capture GitHub-specific information like GitHub IDs. This is because Git and GitHub, while related, are fundamentally different: Git is a version control system that tracks code changes locally, whereas GitHub is a platform built on top of Git that provides additional features like user profiles and collaboration tools. As a result, the current implementation of RepoSense cannot directly link contributions to GitHub profiles without the config files.
While researching an issue about `<hr>` elements in Markdown files not appearing in the RepoSense report, I discovered the functionality of normalize.css, which provides default styling for this element along with many others. This CSS normalization ensures consistent rendering across different browsers by correcting bugs and browser inconsistencies for more predictable website styling.
I learned how to use the debugger in IntelliJ IDEA to step through the code and inspect variables during runtime. When I continued to work on the Code Portfolio feature, I encountered a behaviour where the absent fields in the config file were not being handled correctly. By using breakpoints and watches, I could trace the flow of the program and understand how the existing code in the PR worked. This allowed me to make the necessary changes to handle the missing fields properly.
I wrote JUnit tests for the Code Portfolio feature to ensure that the code changes did not break existing functionality. I also learned how to use resources in JUnit tests to load test data from files. This was particularly useful for testing the parsing of the config file, as I could load the test data from a file and compare the expected results with the actual results.
Stubbing Methods with `when(...).thenReturn(...)`:

I learned that this technique lets me define fixed return values for specific method calls. I can instruct Mockito to return a predetermined value when a certain method is invoked with given arguments.

By stubbing methods with `thenReturn()`, I isolate the class under test from its real dependencies. For example, if my code calls:

```java
Course course = mockLogic.getCourse(courseId);
```

I can specify:

```java
when(mockLogic.getCourse(courseId)).thenReturn(expectedCourse);
```
This approach ensures that the tests only focus on the behavior of the class under test without relying on actual implementations or external systems like databases or service layers.
Simulating State Changes Using `doAnswer(...)`:

One of the most powerful techniques I learned was using `doAnswer()` to simulate side effects and state changes. This method enables me to dynamically alter the behavior of mocked methods based on actions performed within the test.

Syntax:

```java
doAnswer(invocation -> {
    // Custom logic to simulate a side effect or state change
    // ...
}).when(mockLogic).someMethod(...);
```
This technique is especially helpful when my method under test changes the state of its dependencies. For example, when simulating the deletion of an instructor, I can use `doAnswer()` so that subsequent calls (such as fetching the instructor by email) return `null`, mirroring the real-life behavior after deletion.
Advanced Stubbing Techniques with `thenAnswer()`:

In addition to `doAnswer()`, I learned how to use `thenAnswer()` to provide dynamic responses based on the input parameters of the method call. This custom `Answer` implementation allows for:
Syntax:

```java
when(mockLogic.someMethod(...)).thenAnswer(invocation -> {
    // Custom logic to compute and return a value based on the invocation
    // ...
});
```
This method is ideal when I need the stub to return a value that depends on the input. It adds flexibility to my tests, especially when I want my mocked method to behave differently based on its argument.
Mocks vs. Spies:
I learned that the key difference is:
Mocks: Mockito creates a bare-bones instance of the class where every method returns default values (like `null`, `0`, or `false`) unless explicitly stubbed.
Spies: A spy wraps a real object. By default, a spy calls the actual methods of the object while allowing me to override specific methods if needed.
Examples:
Mocks:

```java
List<String> mockedList = mock(ArrayList.class);
mockedList.add("item");
verify(mockedList).add("item");
assertEquals(0, mockedList.size()); // Returns default value 0 because it's fully stubbed.
```

Spies:

```java
List<String> realList = new ArrayList<>();
List<String> spyList = spy(realList);
spyList.add("item");
verify(spyList).add("item");
assertEquals(1, spyList.size()); // Now size() returns 1 because the real method is called.
```
When to Use Each:
Mocks: I use a mock when I want to completely isolate my class under test from its dependencies.
Spies: I choose a spy when I need most of the real behavior of an object but want to override one or two methods.
Static Mocking:

Mockito allows mocking static methods using `MockedStatic<T>`, which is useful when working with utility classes or framework methods that are difficult to inject as dependencies.

```java
try (MockedStatic<ClassName> mockStaticInstance = mockStatic(ClassName.class)) {
    mockStaticInstance.when(ClassName::staticMethod).thenReturn(mockedValue);
    // Call the static method
    ReturnType result = ClassName.staticMethod();
    // Verify the static method was called
    mockStaticInstance.verify(ClassName::staticMethod, times(1));
}
```
Advanced Verification Techniques:

Mockito's advanced verification APIs allow me to check that the correct interactions occur between my class under test and its dependencies: not just that methods were called, but also that they were called in the right order and the correct number of times.

Call order verification: using Mockito's `InOrder` API to verify that methods were called in a specific sequence.

```java
InOrder inOrder = inOrder(mockLogic);
inOrder.verify(mockLogic).startTransaction();
inOrder.verify(mockLogic).executeQuery(anyString());
inOrder.verify(mockLogic).commitTransaction();
```

Invocation count verification: applying verification modes like `times()`, `atLeast()`, `atMost()`, and `never()` to assert the precise number of method invocations.

```java
verify(mockLogic, times(2)).processData(any());
verify(mockLogic, never()).handleError(any());
```
These techniques are crucial when the order and frequency of interactions are essential for the correctness of the code, ensuring that the tested methods not only produce the right results but also follow the intended flow.
I learned these Mockito techniques mainly during the migration of our tests from our previous datastore to Google Cloud PostgreSQL.
The new test classes required a robust mocking framework, so I leveraged a combination of fixed-value stubbing with `when(...).thenReturn(...)`, dynamic behavior simulation with `doAnswer()` and `thenAnswer()`, and careful selection between mocks and spies.
This approach enabled me to write unit tests that are both targeted and reliable.
Although I did not extensively use advanced verification techniques during the migration, I appreciate the potential they offer for validating interactions between components.
These insights have been essential for developing robust tests, and I look forward to applying them in future projects.
Objectify for Google Cloud Datastore (NoSQL):
Objectify is a lightweight Java library that simplifies working with Google Cloud Datastore, a NoSQL database. It provides easy-to-use annotations for mapping Java objects to Datastore entities, while also supporting schema evolution.
Key Features:
@Entity: Marks a class as an entity that will be stored in Datastore.
@Id: Defines the primary key for the entity.
@Index: Defines a Datastore index for a property to optimize querying. This annotation allows specifying custom indexing rules for properties that will be queried frequently.
@Load: An annotation for lazy-loading data, allowing entities to be loaded only when needed, improving efficiency when handling large datasets.
@AlsoLoad: Maps a field to a different property name in Datastore, useful for schema evolution.
Example:

```java
@Entity
public class Course {
    @Id
    private Long id;

    @Index
    @AlsoLoad("course_name")
    private String name;

    // Constructors, getters, setters...
}
```

Fetching an entity:

```java
Course course = objectify.load().type(Course.class).id(courseId).now();
```
Jakarta Persistence (JPA) for Relational Databases:
Jakarta Persistence (JPA) is the standard Java API for Object-Relational Mapping (ORM), used for storing Java objects in relational databases such as PostgreSQL. It provides annotations to define how Java objects map to SQL tables and how relationships between entities are managed.
Key Features:
@Entity: Defines a persistent Java object.
@Table: Defines the table in the database that the entity will be mapped to. By default, the table name is the same as the entity class name, but the `@Table` annotation allows you to specify a different table name.

@Id: Marks the field as the primary key of the entity.

@GeneratedValue: Specifies the strategy for auto-generating the primary key (e.g., `GenerationType.IDENTITY`, `GenerationType.AUTO`).

@Column: Maps a field to a specific column in the database table. It allows specifying additional attributes like column name, nullable, unique constraints, and default values.

@OneToMany, @ManyToOne, @ManyToMany: Establishes relationships between entities.

@JoinColumn: Specifies the column used for joining tables in relationships. This is often used with `@ManyToOne` and `@OneToMany` annotations to define the foreign key.
Example:

```java
@Entity
@Table(name = "students")
public class Student {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "student_name", nullable = false)
    private String name;

    @Column(name = "email", unique = true)
    private String email;

    // Constructors, getters, setters...
}
```

Fetching an entity:

```java
Student student = entityManager.find(Student.class, studentId);
```
Jakarta Persistence Documentation
In my experience with Objectify and Jakarta Persistence, I learned how to map Java objects to Datastore entities and relational database tables, respectively.
I was working on standardizing naming conventions for variables and had to modify Java variable names and change the entity/SQL entity names.
One of my mentors pointed out that without using the correct annotations, mismatched entity or column names between the code and the actual database schema could lead to errors.
To address this, I utilized annotations like `@AlsoLoad("oldName")` and `@Column(nullable = false, name = "<COLUMN_NAME>")` to ensure proper mapping of fields to database columns and to avoid potential issues.
Understanding and applying these annotations correctly was key for me in preventing errors and ensuring smooth database operations.
...
Angular Component Communication: child-to-parent communication using `@Output` and `EventEmitter`.

Conditional Class Application: the `ngClass` directive and the `[class]` binding syntax.

Event Binding: the `(event)` binding syntax to handle user interactions, e.g. `(change)="handleChange($event)"` to trigger functions when events like `change` occur, passing the event object as an argument.

Angular Official Documentation: covers `@Output` and `EventEmitter` to enable child-to-parent communication.

Udemy Course: "Angular - The Complete Guide" by Maximilian Schwarzmüller.
By combining these resources, I was able to implement a basic dark mode feature that functions effectively but still requires refinement. One key area for improvement is ensuring the dark mode state persists when navigating between routes. Currently, when the route changes (e.g., from `localhost:4200/web/` to another route), the boolean variable controlling the dynamic CSS class allocation using `ngClass` resets to its default light mode, even if dark mode was active prior to the route change.
I suspect this behavior occurs because the page component is re-rendered during navigation, causing the component's state (including the boolean variable) to be re-initialized. To address this, I plan to research and implement a solution to persist the dark mode state. A promising approach might involve using a shared Angular service to store and manage the state globally, ensuring it remains consistent across routes. While I am not yet an expert in Angular, I am confident that further exploration and practice will help me refine this feature.
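A hypothetical sketch of that shared-service approach (names are illustrative, not TEAMMATES code): the flag lives in a root-provided service, so it survives component re-creation across route changes.

```ts
import { Injectable } from '@angular/core';
import { BehaviorSubject } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class ThemeService {
  private readonly darkMode$ = new BehaviorSubject<boolean>(false);

  // Components bind to this, e.g. [ngClass]="{ 'dark-theme': themeService.isDark$ | async }".
  readonly isDark$ = this.darkMode$.asObservable();

  toggle(): void {
    this.darkMode$.next(!this.darkMode$.value);
  }
}
```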
Argument Matchers and Primitive vs. Boxed Types

One thing that really stood out to me while working with Mockito was how it handles primitive vs. boxed types. I always assumed that since `Boolean` is just the boxed version of `boolean`, their argument matchers would behave the same way. However, I discovered that `anyBoolean()` works for both `boolean` and `Boolean`, but `any(Boolean.class)` only works for `Boolean`.

Handling Null Values in Argument Matchers
Another unexpected challenge was that `any()` does not match `null` values. I initially thought `any()` would work universally, but my tests kept failing when `null` was passed in. After some research, I found that I needed to use `nullable(UUID.class)` instead. This was an important learning moment because it made me more aware of how Mockito's matchers handle `null` values differently.
Verifying Method Calls

I also gained a deeper understanding of method verification in Mockito:

```java
verify(mockObject, times(n)).methodToBeTested();
```

`times(1)` ensures the method was called exactly once, while `never()`, `atLeastOnce()`, and `atMost(n)` give more flexibility in defining the expected call frequency.

Difference Between `mock()` and `spy()`

I decided to dive deeper into stubbing with Mockito, which led me to learn more about the difference between `mock()` and `spy()`:

- `mock(Class.class)`: creates a mock object that does not execute real method logic.
- `spy(object)`: creates a partial mock where real methods are called unless stubbed.

Resources:
- Mockito Official Documentation
- Mockito series written by Baeldung
Working with Mockito has made me more confident in writing unit tests. I also gained a much deeper appreciation for argument matchers and null handling. Learning that `any()` does not match null but `nullable(Class.class)` does was an unexpected but valuable insight. These small details can make or break test reliability, so I'm glad I encountered them early on.
Looking ahead, I aim to sharpen my Mockito skills by exploring advanced features like mocking final classes and static methods. I also plan to experiment further with `ArgumentCaptor`, as it offers a more structured approach to inspecting method arguments in tests.
Mockito has already helped me write more effective and maintainable unit tests, and I’m excited to continue improving my testing skills with its advanced features!
Coming from a React background, it was interesting to understand how Angular components work and how they talk to each other. A lot of the features are built in, with their own names like `ngFor` and `(click)`, as compared to using JSX. Angular is very modular in nature, which made learning easier, as I could focus on one component without breaking the rest or needing to learn more of the codebase than the surrounding components.

Angular uses a lot more observables, emitters and listeners, based on services, to communicate between components. It was very different from the React Redux and parent-child communication that I knew. This was what I had to make use of for one of my first PRs, #13203, to deal with dynamic child components listening to a hide/show-all button.

Angular Crash Course by Traversy Media: a crash course on how Angular works for developers with some frontend experience. It covers the basics of Angular, including components, services, and routing.
Angular Crash Course by Traversy Media: A crash course for learning how Angular works for developers with some frontend experience. It covers the basics of Angular, including components, services, and routing.
The use of `when()` was rather cool for me, coming from JUnit and CS2103T. I did not expect to be able to mock functions and their return values. `when()` overrides a function call when that provided function is called, and returns the values given via chained functions. It allows me to perform unit tests much more easily, as we do not need to worry about whether the implementation of the method is complete.
Mockito Documentation: Official documentation for Mockito
This was my first time using Docker, and it made development much easier by containing our backend in its own sandboxed environment. It keeps the application standardised by running on one type of environment, and ensures smooth development by removing the need to cater to and develop for multiple types of environments during production. ...
- `mock()` creates a test stub, e.g. for `Logic`.
- `when()` allows you to specify a return object using `thenReturn()` without running the actual method. This can reduce the chances of bugs from dependencies affecting the unit test.
- `verify()` allows you to verify that a certain method call has been made. I think this helps greatly in debugging, especially in a large code base.
- `when()` requires the arguments of the method to mock to be specified. In some cases, we cannot pass in the arguments directly due to equality checks on the different objects, so we can bypass that by using `any(<ClassName>.class)`, where any argument of that class will trigger the mock method call.
- `when()` does not call the actual method itself.