CATcher:
MarkBind:
RepoSense:
TEAMMATES:
Without any prior knowledge of Angular, I quickly went through the introductory part of the TypeScript tutorial and a hands-on Angular practice on the official Angular website to familiarise myself with the framework.
The TypeScript tutorial provides a very in-depth explanation of the language, as well as listing the notable differences between TypeScript and other common programming languages. It includes a great number of details but can be overwhelming for beginners. I briefly looked through the material and wrote some common algorithms in TypeScript to make sure I roughly knew the basic components before proceeding to read about Angular. This resource serves better as a handbook to consult when one encounters complex problems related to TypeScript specifically.
The Official Angular Start Guide provides a walk-through of building a shopping website with Angular, which involves components, services, and data management and transfer -- essentially everything needed for a basic website. It was a fun experience, and the guide is very clear and helpful.
...
I had contributed to CATcher as part of IWM, but I had never really approached the Angular aspects of the project. I only started learning about Angular basics when CS3281 demanded it.
Essentially, the core ideas behind Angular involve the `@Component` decorator, an HTML template and styles.
The other key concepts include event binding and property binding, which link the template to the TypeScript class. Knowing these essentials allowed me to fix WATcher PR#57.
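A minimal sketch of these essentials, with an illustrative component (not CATcher/WATcher code): the `@Component` decorator supplies a template and styles, `[disabled]` is property binding and `(click)` is event binding.

```ts
import { Component } from '@angular/core';

@Component({
  selector: 'issue-counter',
  // [disabled] is property binding; (click) is event binding.
  template: `
    <p>{{ count }} open issues</p>
    <button [disabled]="count === 0" (click)="reset()">Reset</button>
  `,
  styles: ['p { font-weight: bold; }'],
})
export class IssueCounterComponent {
  count = 3;

  reset(): void {
    this.count = 0;
  }
}
```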
Another key part of Angular is its Dependency Injection system and services. Angular allows us to provide dependencies at different levels of the application and to control how the dependencies are instantiated.
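As a rough sketch of how a service can be provided and injected (the names here are illustrative, not from the project):

```ts
import { Component, Injectable } from '@angular/core';

// Provided at the root level, so one shared instance serves the whole app.
@Injectable({ providedIn: 'root' })
export class LoggerService {
  log(message: string): void {
    console.log(`[app] ${message}`);
  }
}

@Component({
  selector: 'home-page',
  template: `<p>Home</p>`,
})
export class HomePageComponent {
  // Angular instantiates LoggerService and supplies it here.
  constructor(private logger: LoggerService) {
    this.logger.log('HomePageComponent created');
  }
}
```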
Resources:
...
One of the main technologies I learned during the course of CS3281 was Vue.js, an open-source JavaScript framework for building UI components. Previously, I had dabbled a bit in Vue.js, but not to the point where I could even call myself "familiar" with it. In order to work on some of the components (mainly for the implementation of the new Toasts component), I had to learn Vue.js and how to implement a Vue component in MarkBind, e.g. how the different parts of a Vue component (namely the `<template>`, `<script>`, and `<style>` sections) work and interact with each other, what different lifecycle hooks and event handling mechanisms are available in Vue.js, the fundamentals of reactivity in Vue.js, etc.
The resources I used consist of:
Of course, as I slowly became more familiar with Vue.js and the Vue components, I started realizing the benefits that using Vue 3 would bring over Vue 2. For instance, dynamic CSS classes, available in Vue 3 but not Vue 2, were something I encountered a need for during the implementation of the toasts component. As the course progresses, I expect to help out where I can with the currently ongoing Vue 2 to Vue 3 migration.
Nunjucks is a templating engine for JavaScript, developed by Mozilla. I encountered a need to investigate Nunjucks further when I was working on an issue with the `{{ raw }}` and `{{ endraw }}` tags in MarkBind, which were a way to work around double curly braces (`{{` and `}}`) being processed as a Nunjucks variable. While I did not fully learn Nunjucks during this investigation, I nevertheless managed to learn about how variables are processed in Nunjucks, and how the Nunjucks syntax works.
The resources I used consist of:
While I was fairly familiar with TypeScript (along with HTML / CSS / JavaScript) prior to working on MarkBind, contributing to the ongoing TypeScript migration of the core MarkBind package has helped me better understand the strict features (and philosophy) of TypeScript. Hence, I thought that it at least deserves a mention in this section.
The resources I used consist of:
While working on the templates and the CLI aspects of MarkBind, I found that I needed to be at least familiar with how other static site generators do things. I ended up spending quite a bit of time looking into 5 of our "competitors" (though they do fulfill different niches) in particular: Hugo, Gatsby, Jekyll, Docusaurus, MkDocs.
What I learnt from their documentation (and subsequently trying them out myself to generate sites) is difficult to list, as it mainly involves learning about the available features as well as how they tackled certain issues. However, some of the comments I have left on MarkBind issues do showcase parts of my learning:
How `title` tags are handled in different static site generators
The resources I used mainly consist of the documentation for each of the static site generators:
I believe that as I progress through the module, I will learn more about other static site generators (that can help to give further insights into the directions that we want to push MarkBind towards).
Gradle is a build tool designed specifically to meet the requirements of building Java applications. Once it's set up, building an application is as simple as running a single command on the command line. Gradle performs well and is also useful for managing dependencies via its advanced dependency management system.
Learned about Gradle through a really helpful tutorial
Learned how to write basic bash scripts via tutorialspoint, and had to implement batch scripts to perform environmental checks for all files tracked by git, to ensure that they end with a newline, that no prohibited line endings (`\r\n`) are present, and that there are no trailing whitespaces.
Some interesting bugs were encountered when attempting to use pipes in batch files, particularly one that prevents delayed expansion of variables from being evaluated as usual. This is due to the variables not being evaluated in the batch context, as the lines are executed only in the cmd-line context. A more detailed analysis of the bug is given by a Stack Overflow user.
In RepoSense, a variety of git commands are utilized to get information about the repository. Through undertaking DevOps tasks, I was also exposed to other interesting git commands. Here are some of the interesting ones that I was not aware of before.
`git shortlog` - Summarizes `git log` output, where each commit is grouped by author and title. This is used in RepoSense to easily count the commits by each user.
`git grep` - A powerful tool that looks for specified patterns in the tracked files in the work tree, blobs registered in the index file, or blobs in given tree objects. Patterns are lists of one or more search expressions separated by newline characters. An empty string as a search expression matches all lines. Utilised to write RepoSense scripts that perform environmental checks for all files tracked by git, ensuring that they end with a newline, that no prohibited line endings (`\r\n`) are present, and that there are no trailing whitespaces. Used the git docs to learn how to use `git grep` properly and what its various flags do.
`.mailmap` - If the file `.mailmap` exists at the top level of the repository, it can be used to map author and committer names and email addresses to canonical real names and email addresses. This is useful for mapping multiple aliases of authors and committers, and provides a way to share the mapping with all other users of the repository. Used the git docs to learn how to configure the git mailmap properly.
Researched interesting solutions for free URL shortening, looking into 3 main ways to do it. An in-depth write-up can be found in the GitHub issue here.
The DOM represents the structure of a web document. Its APIs are designed to be independent of any programming language, and can be used to manipulate it in interesting ways.
Cross-browser support
While working on a PR to pin the file title, an API that would have been useful to me is `Element.scrollIntoViewIfNeeded()`, but since it was a non-standard feature that does not work on Firefox, I had to implement similar functionality with `Element.scrollIntoView()` and `Element.getBoundingClientRect()`. This was a reminder that web developers must consider not only different screen sizes but also cross-browser compatibility while testing their work.
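A small sketch of how such a fallback can work, assuming we only want to scroll when the element is outside the viewport (illustrative, not the actual PR code):

```ts
// Emulate the non-standard Element.scrollIntoViewIfNeeded() with standard APIs.
function scrollIntoViewIfNeeded(el: Element): void {
  const rect = el.getBoundingClientRect();
  const viewportHeight = window.innerHeight || document.documentElement.clientHeight;
  const fullyVisible = rect.top >= 0 && rect.bottom <= viewportHeight;
  if (!fullyVisible) {
    // Only scroll when necessary, mirroring the "if needed" behaviour.
    el.scrollIntoView({ block: 'nearest' });
  }
}
```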
JavaScript is the most common scripting language used to access/modify the DOM. Having done some web development work before, I had experience working with it. Some interesting capabilities of JavaScript that I have used less commonly include the following:
Use of Dot notation vs Bracket notation when accessing properties:
The dot notation (eg. `objectName.propertyName`) is the most common way to access properties cleanly. However, property identifiers can only contain alphanumeric characters, `_` and `$`. On the contrary, bracket notation (eg. `objectName['propertyName']`) can use all UTF-8 characters in the property name, or even variables that finally resolve to a string. This is useful when we will only know the property name during runtime, as in this PR, which uses `this.$refs[file.path]` because the reference `file.path` is only resolved based on which file is being interacted with.
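A tiny illustration of the two notations (the object and keys here are made up):

```ts
const refs: Record<string, HTMLElement | undefined> = {};

const config = { maxFiles: 10 };
console.log(config.maxFiles);          // dot notation: identifier-like names only

const filePath = 'src/pages/index.md'; // key only known at runtime
const element = refs[filePath];        // bracket notation accepts any string expression
console.log(element);
```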
Since we make heavy use of OOP in our Java backend, it makes sense to have similar classes, interfaces and inheritance support in our frontend. Using TypeScript allows for this, along with static typing and type inference. Hence the decision to slowly migrate our codebase from JavaScript to TypeScript. Because I hadn't used TypeScript before, I got to learn some basic concepts while working on my first PR, where I defined classes to be used for declaring Vue prop types explicitly.
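A hypothetical sketch of the idea, since a class constructor can double as an explicit Vue prop type (Vue validates such props with `instanceof`); the class and prop names are illustrative:

```ts
class Segment {
  constructor(public path: string, public lineCount: number) {}
}

export default {
  props: {
    // Declaring the prop type explicitly instead of using a plain Object.
    segment: { type: Segment, required: true },
  },
};
```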
I had used Vue.js with Vuetify components and Options API previously, which made it easier for me to get started with the RepoSense frontend. However, working on RepoSense helped me to get more familiar with the software engineering principles related to working with Vue.
Template refs
While Vue has a rendering model that abstracts away direct manipulation of the DOM, sometimes it is necessary to have access to the DOM to programmatically control an element. This is why Vue gives us access to `$refs`. These `ref`s are similar to `document.querySelector('.element')` in JavaScript, but are more efficient since they give direct access to the element needed rather than returning the first element that matches the given selector.
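A minimal sketch of a template ref (Options API, assuming `defineComponent` from Vue 2.7+/Vue 3 for typing; the names are illustrative, not RepoSense's actual code):

```ts
import { defineComponent } from 'vue';

// Template: <div ref="commitList"> ... </div>
export default defineComponent({
  mounted() {
    // Direct handle to the element; no selector lookup needed.
    const el = this.$refs.commitList as HTMLElement;
    el.scrollIntoView();
  },
});
```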
Pug is an HTML templating language for Node.js. It makes it easy to write reusable HTML components with cleaner syntax. Such templating engines can be useful when working with data-driven web applications like RepoSense.
Perhaps something unfortunate is that most online resources for Vue (and others) provide their documentation in HTML by default, with no option to toggle to Pug syntax. This makes it comparatively difficult to find good resources on how Vue and Pug can be used together.
Cypress is a web testing framework used for E2E testing. Unlike Selenium, it can operate within the application itself. This gives Cypress high flexibility to access any of the objects in the app, including DOM objects, window, etc., similar to how we access them in the application code itself. I wrote some Cypress tests to verify the newly added front-end features that made use of direct manipulation of the DOM.
git log
For working on the PR to include merge commits in the web dashboard, some backend changes were required, as merge commits were not included in the generated report itself. Hence I had to look into the docs of git commands, specifically `git log`, to understand what flags I could make use of to include all the desired commits in the report. Previously, we were using the `--no-merges` flag to remove all merges from the report. However, simply removing this flag did not help in including all the merge commits in the new report. This may be because git continues to simplify "uninteresting" merges in the default mode. Finally, the use of `--full-history` helped include all commits without merging any same-content commits together. `git log` also has the option to format its output with a `<format-string>`, and this formatted output makes it easy for us to parse the results and generate our repository analysis reports.
These are some of my learnings that do not fit into a single category from above, but are more general in nature.
Object parameter vs multiple parameters for constructors
While creating a `User` object in TypeScript, many arguments (~10) had to be passed in to construct the object. This made me wonder what the best way of initialising such objects with a large number of attributes is. I was exploring the use of a single object parameter, as it makes the code much cleaner. However, there is a tradeoff of whether it would be type safe to just pass an object without any type as a parameter into the function. Yet, I decided to continue with the method of using an object argument. This issue of type safety could be mitigated in the future by checking that the object being passed in as the argument implements the `UserType` interface, when migrating to TypeScript.
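A sketch of the object-parameter approach with an interface guarding type safety (the `UserType` fields here are illustrative):

```ts
interface UserType {
  name: string;
  email: string;
  commitCount: number;
}

class User {
  name: string;
  email: string;
  commitCount: number;

  // One typed options object instead of ~10 positional parameters.
  constructor(options: UserType) {
    this.name = options.name;
    this.email = options.email;
    this.commitCount = options.commitCount;
  }
}

const user = new User({ name: 'alice', email: 'alice@example.com', commitCount: 42 });
console.log(user.name);
```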
Java provides regular expressions through the `java.util.regex` package, which consists of three classes: `Pattern`, `Matcher` and `PatternSyntaxException`.
`Pattern` is a compiled representation of a regular expression. It must be created via static methods, most commonly `Pattern.compile(String regex)`. `Matcher` then interprets the compiled pattern and matches it against an input String.
I would like to touch on the more interesting aspects of Java's implementation of regex that I encountered along the way.
When using predefined character classes such as `\s` for whitespace characters, we have to first escape the backslash within the String representation of the regex argument (so `"\\s"` instead of just `"\s"`). While this is consistent with the way Java handles escape characters in Strings, it caused me some minor confusion and readability issues, as it is unlike other major programming languages such as JavaScript and Python.
Greedy quantifiers such as `X?`, `X*`, `X+` and more. Special care must be taken while using them due to their greedy nature. In one instance, I was attempting to rewrite a regex to match using stricter rules. I was under the impression that my regex was working fine, as it matched correctly with positive test cases; however, upon further investigation, it only matched because it disregarded the remaining regex sequence due to its greedy nature (a small sketch of this behaviour follows this list).
Reluctant quantifiers such as `X??`, `X*?`, `X+?` and more, where the extra `?` at the end demarcates it as a reluctant quantifier.
Possessive quantifiers such as `X?+`, `X*+`, `X++` and more, where the extra `+` at the end demarcates it as a possessive quantifier.
Bare clone: this clones only the `.git` subfolder, and makes it the main directory cloned.
Shallow clone: this allows us to pull down only the latest commits and not the entire repo history, which can be achieved by specifying a depth. The benefit of doing a shallow clone is that we can clone faster due to fewer files being cloned. In RepoSense's case, we utilize the `--shallow-since` flag, as it fits our use case better than the `--depth` flag.
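Although the issue above was in Java, the greedy-vs-reluctant behaviour is the same in JavaScript/TypeScript regexes, so here is a small illustrative sketch of how a greedy quantifier can "swallow" input that a later part of the pattern was meant to check:

```ts
const greedy = /^(\w*)(\d*)$/; // \w* greedily consumes the digits as well
const lazy = /^(\w*?)(\d+)$/;  // \w*? yields the digits to the \d+ group

console.log('abc123'.match(greedy)?.slice(1)); // [ 'abc123', '' ]
console.log('abc123'.match(lazy)?.slice(1));   // [ 'abc', '123' ]
```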
Communication between threads happens primarily through shared access to fields and the objects that reference fields refer to. However, this introduces new kinds of errors: thread interference and memory consistency errors.
Thread interference happens when two operations, running in different threads and acting on the same data, interleave.
Memory consistency errors occur when different threads have inconsistent views of what should be the same data.
Java provides synchronization as a tool to prevent these new forms of errors. It is an action that creates a happens-before relationship.
Synchronized methods: `final` fields cannot be modified after the object is constructed, so they can be safely read through non-synchronized methods.
Intrinsic/monitor lock: for static synchronized methods, the thread acquires the intrinsic lock of the `Class` object associated with the class.
Synchronized blocks
Reentrant synchronization
Although I was already familiar with Vue, I only ever used the newer composition API, and thus had to learn the Options API that is used in the frontend of RepoSense.
I got familiar with the API as I worked through the implementation of this PR, which involved a decent amount of refactoring across multiple Vue files. The main resource that I used was the official Vue docs, as it provides a comprehensive yet easy to understand overview of the different concepts. Additionally, it has a toggle to switch between the Composition API and Options API for each page of the documentation, allowing people who are already familiar with one to easily pick up the other.
Here are some of the main things that I learnt:
In RepoSense, there are many properties that we need to calculate/obtain when other properties are changed. For instance, in the zoom panel, we need to maintain a list of commits to be displayed. This list needs to be re-calculated based on other properties, such as the author that is currently selected, the filters applied to the commits (e.g. only show commits in `.js` files), etc. In Vue, such properties should be implemented as a computed property under the `computed` object in the export.
The main advantage of computed properties is that they are cached, and are only re-computed when one of their reactive dependencies changes. In the above example, this means our list of displayed commits is only re-computed when the currently selected author changes, or a filter is added/removed. This significantly improves performance: if we were to implement the computation as a normal method, it would be re-computed on every re-render, even if the re-render was triggered by an unrelated reactive item, resulting in unnecessary re-computation of the same value. In a frontend application like RepoSense's reports, where there are many such properties, utilising Vue's computed properties provides a much needed performance boost.
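A minimal sketch of a computed property (Options API, assuming `defineComponent` from Vue 2.7+/Vue 3; the names are illustrative, not RepoSense's actual code):

```ts
import { defineComponent } from 'vue';

interface Commit {
  author: string;
  filePath: string;
}

export default defineComponent({
  data() {
    return {
      commits: [] as Commit[],
      selectedAuthor: '',
    };
  },
  computed: {
    // Cached; re-evaluated only when `commits` or `selectedAuthor` changes.
    displayedCommits(): Commit[] {
      return this.commits.filter((c) => c.author === this.selectedAuthor);
    },
  },
});
```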
One of the main advantages of using a framework like Vue is that certain aspects relating to modifying the DOM are abstracted away from the user. Vue handles reactivity for the user, by updating the DOM when reactive state is mutated. Hence, problems can arise when users bypass this functionality of Vue and manually modify the DOM within Vue components. This is because Vue has no knowledge of these modifications, resulting in potential modifications clashing with Vue's mutation of the DOM.
This PR involved deprecating a method that manually modified the DOM in order to toggle the show/hide state of commits. The old approach toggled a CSS class manually, while a synchronous method calculated and updated the number of shown/hidden commits (stored in a reactive variable) based on that CSS class. However, since Vue's updates to the DOM are asynchronous, the variable was always one action behind the "true" state, which caused an incorrect display of the "show/hide all commit messages" text. This problem was fixed by working "within" Vue: modifying a reactive variable on toggle change and letting Vue handle the DOM mutation. Hence, we should always try to solve the problem within the framework, and avoid direct mutations of the DOM as much as possible.
When passing data between components, care should always be taken with regard to how the data is passed, and the consequences of any mutations of that data. If mutations to data only make sense within the context of a particular component, then it is preferable to pass a deep copy of the data to prevent said mutations from changing behaviour outside of its scope.
RepoSense utilizes Cypress for E2E testing, where the tests run in an actual browser that accesses the entire web page by URL, as opposed to only a particular view/component. The Cypress docs are a great resource for learning how to write tests, and were the main resource that I used when learning.
One of the main things that confused me at first was why Cypress was configured to 'start from scratch' for each test case, i.e. it starts from the beginning of the RepoSense report/from a reload of the entire app for every single test case. After reading the corresponding page of the docs, I learnt that this was important to ensure the consistency & usefulness of each individual test case. By resetting the DOM state before each test, it ensures that each test functions independently, which in turn ensures that the running of any test does not impact the outcome of other tests. Otherwise, there might be a scenario where test case A passes, but causes a change that results in test case B failing. In this case, the results of the tests might be misleading, as the failure was a result of actions not confined within the test case itself.
Along a similar line, testing of functionality should be isolated whenever possible. One of the test cases that I wrote was to test that the toggle state of a file persisted after sort. My original idea was to toggle the state of the first file, then change the sort order from 'descending' to 'ascending' and checking the toggle state of the last file. However, this implementation relies on the correctness of the sort functionality, and hence an error in the sorting function might result in this test case failing, which would be misleading. Therefore, in the actual implementation, the file is tracked by file path and searched for after the sort, which isolates this test case from the correctness of the sort functionality.
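A hedged sketch of such an isolated test (the selectors and assertions are illustrative, not the actual RepoSense test code):

```ts
describe('file toggle persists after sorting', () => {
  beforeEach(() => {
    // Reload the report so every test starts from a known DOM state.
    cy.visit('/');
  });

  it('keeps a file collapsed after the sort order changes', () => {
    cy.get('.file-title').first().click();            // collapse the first file
    cy.get('#sort-order-select').select('ascending'); // change the sort order
    // Re-find the same file by its path instead of relying on its position,
    // so the test does not depend on the sort being correct.
    cy.contains('.file-title', 'src/main.js').should('have.class', 'collapsed');
  });
});
```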
Before TEAMMATES, I had only ever used React. To help me get started on Angular, I looked up videos on YouTube, specifically Fireship's Angular playlist, to get an overview. I tried doing a Udemy course too, but I thought it was a little far-fetched.
With a background in React, I went on to look for the similarities and differences between these two popular frontend web frameworks, which led me to decide to dive into TEAMMATES' codebase.
Similar to the passing of props in React, Angular has its own way to pass data between parent and child components.
In Angular, we use `Output` for sending data to the parent and `Input` for sending data to the child. It took me a while to get used to the `Input`/`Output` terminology.
What helped me through this was the Angular docs on this exact matter; it was a perfect read! It starts off with an introduction of `Input` and `Output`, and I was surprised that it is described as a pattern. The page is really well written, as it goes straight to the subject and takes a step-by-step approach with a sufficient number of examples.
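A minimal sketch of the pattern (the component and property names are illustrative):

```ts
import { Component, EventEmitter, Input, Output } from '@angular/core';

@Component({
  selector: 'child-item',
  template: `<button (click)="notify.emit(name)">Notify parent</button>`,
})
export class ChildItemComponent {
  @Input() name = '';                            // data flowing in from the parent
  @Output() notify = new EventEmitter<string>(); // events flowing out to the parent
}

@Component({
  selector: 'parent-list',
  // [name] binds data down; (notify) listens for the child's events.
  template: `<child-item [name]="item" (notify)="onNotify($event)"></child-item>`,
})
export class ParentListComponent {
  item = 'example';

  onNotify(value: string): void {
    console.log(`child says: ${value}`);
  }
}
```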
I have used `spy` before. Ironically, I never knew how to use it properly till I had to write tests on the work done.
I was struggling to figure out how to pass a check in a method of an object; let's call this object A. Object A has a method, `a()`, that contains a condition, `if b(...)`. Method `b()` belongs to object A. I could not set this condition to be true when I was writing the test. However, spy did the trick!
All I had to do was write these powerful lines of code:
```java
A spyA = spy(A.class);
doReturn(true).when(spyA).b(...);
```
And it worked! Sounds pretty trivial and silly I know... But Today I Learned (TIL)!
Spying on an object allows us to dig deep into its methods and intentionally set the outcome we expect for a variable, object or method; we are in control and we define the result.
Here is a good read on spies. Love baeldung!
// TODO
`ngDoCheck` allows developers to customise change detection, meaning that when there is a change to the component, this method will detect the changes and perform some operation specified by the developers.
Data can be passed between parent and child components using the `@Input()` and `@Output()` decorators. More specifically, in the child component, we can decorate the property with `@Input()`, and in the parent component template, we can use property binding (with square brackets, `[]`) to bind the property of the child component to a property of the parent component. In this way, data can be passed from the parent to the child. Conversely, to send data from the child to the parent, we can use the `@Output()` decorator and an `EventEmitter`; the parent component template uses the normal brackets, `()`, for event binding. When we trigger the `EventEmitter` to emit the event, the event will be passed to the parent component along with the data.
Two-way binding uses the `[()]` syntax. Using two-way binding, we can listen for events and update values simultaneously between the parent and child components.
`ngIf` and `ngFor` (which are structural directives) allow developers to write if-else logic and for loops in the templates, so that we do not have to write repeated code (a small sketch follows this list).
Resources: Angular documentation
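As a small illustrative sketch of structural directives (the component and data are made up):

```ts
import { Component } from '@angular/core';

@Component({
  selector: 'course-list',
  template: `
    <ul *ngIf="courses.length > 0; else empty">
      <li *ngFor="let course of courses">{{ course }}</li>
    </ul>
    <ng-template #empty><p>No courses yet.</p></ng-template>
  `,
})
export class CourseListComponent {
  courses = ['CS3281', 'CS3282'];
}
```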
In Angular, HTTP requests return an `Observable`. Upon receiving the HTTP responses as an `Observable`, we can call the `subscribe` method to manipulate the HTTP response. We can also call the `pipe` method to add more custom methods for manipulation. For example, we can pass a custom `finalize` method into the `pipe` method to specify the end operation.
`Observable` is asynchronous. This means the code is not executed sequentially; the line of code right after the `subscribe` method may be executed first. This is also a reason why it is desirable to pass in `finalize` methods.
To handle multiple `Observable`s and synchronise them, we can use `forkJoin` from RxJS. `forkJoin` takes in an array of `Observable`s and emits only after all of them have completed. In this way, we do not need to worry about the synchronisation between each `Observable` (see the sketch after this list).
Docker containers can be configured through the `docker-compose.yml` file.
For a port mapping of `5432:2345`, it means that inside the container the application exposes port 5432, and Docker connects the exposed port to the port outside the container, which is port 2345. Therefore, applications outside Docker just need to access port 2345 to reach the containerised application.
A class can be mapped to a database table using annotations such as `@Entity` and `@Table`, and the fields inside the class become the columns of the table via `@Id`, `@Column`, etc.
Associations between entities can be declared using `@OneToMany`, `@ManyToOne`, etc. When fetching entities from the database, Hibernate will also fetch the associated entities without developers explicitly doing so.
The `Session` object is not thread-safe, so we should not share a session across multiple threads. On the other hand, creating a new session for each database operation is expensive. The session-per-request model helps alleviate this problem: all database operations to be performed in a single request are wrapped in one transaction, and a request can be seen as one atomic unit of operation.
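A hedged sketch of these RxJS patterns (the service and endpoints are illustrative, not TEAMMATES' actual code):

```ts
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { forkJoin } from 'rxjs';
import { finalize } from 'rxjs/operators';

@Injectable({ providedIn: 'root' })
export class CourseDataService {
  constructor(private http: HttpClient) {}

  loadCourse(id: string): void {
    this.http.get(`/api/courses/${id}`)
      .pipe(finalize(() => console.log('request finished'))) // runs last, on success or error
      .subscribe((course) => console.log(course));
    // Code placed here may run before the subscribe callback: Observables are asynchronous.
  }

  loadAll(ids: string[]): void {
    // forkJoin waits for every Observable to complete, then emits all results together.
    forkJoin(ids.map((id) => this.http.get(`/api/courses/${id}`)))
      .subscribe((courses) => console.log(courses.length));
  }
}
```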
Two-way binding and the `[()]` notation
Resources:
Resources:
I learnt Angular from scratch to build the dashboard for the onboarding task. Having previous experience with React, it was interesting to see the differences between the frameworks and to use them to implement functionality such as sorting, searching, etc.
Compared to React, Angular has more built-in functionality; for example, a debouncing function is already built into the EventEmitter, whereas in React we would have to use third-party libraries such as Lodash.
...
To be updated.