brittle.systems's Issues

chore: `stripped`

  • motivation for setting up an api client in this way

files:

  • endpoint
  • checkout

chore: `realworld` api

epic

"That's the secret for making things composable: start with the composition - before you even define how the thing will work" - Eric Normand

go through motivation for setting up an api in this way:

chore: `s.o.o.n`

epic

go through:

Title: The intricacies of making something out of nothing
description: a look into the motivation for and the mechanics behind react through an animation library

"If you don't think managing state is tricky, consider the fact that 80% of all problems in all complex systems are fixed by rebooting." - Stuart Halloway

react:

  • react was borne out of a need: a need to enable a simpler model of data flow in user interfaces - in react data only flows in one way, from the roots to the leaves
    • you might ask why just user interfaces? think about how user interfaces work: say we click on a button which is in one section of a program and upon clicking that button we need data in another section of the application to update
    • contrast that to the api: one section of the api gets updated and only that section gets updated - in fact, we try to avoid an update to another section of the api without being explicit about it because although we might specify a relationship between data in an api, we are not able to compute it: we cannot tell what Song a Person listened to unless we explicitly specify a song
    • that is to say, the essential difference between data that flows through user interfaces and an api:
      • in user interfaces, the variables are known: we know the Person which allows us to determine the current playing Song (it is only one fetch away)
      • in apis, the variables are unknown: we know the Person but we do not know the current playing Song unless specified
    • the important distinction between user interfaces and apis is this: the data that flows through user interfaces can be computed (it comes up a lot too - it is the whole reason for react) while the data that flows through an api cannot
  • the ideas behind the mechanics of react are simple:
    • we have state, state is reactive and by reactive:
      • state has dependencies and whenever state changes, its dependencies need to be recalculated
        • dependencies would be the render function
        • each time dependencies are recalculated the render function is invoked again, this gives rise to:
          • the need for useEffect: sometimes we have function calls we don't want running each time dependencies are recalculated:
            • each time dependencies are recalculated useEffect is called again; when this happens, react checks the value of each item in the dependency array and if any of those values has changed, then the effect is run again
          • the need for useRef: sometimes we want to update a variable but we don't want dependencies to recalculate like with using state:
            • can't use a normal variable with let because its value is reset each time dependencies are recalculated
            • so we move it out of the render function with useRef (it's a little more complicated than that in reality)
    • a function component is no different to the function: (x: number, y: number) => number
      • we take in some input and produce an output, in react this output is always React.ReactNode
        • React.ReactNode is a node in the react tree
          • the react tree is just one big tree which captures the HTML (a facade of it) expressed
        • React.ReactElement is a tree, and at the leaves lie native html elements and the attributes for those native html elements (bit more complicated in reality)
    • to determine if state has changed, we need to compare the previous state and the next state:
      • equality in javascript is reference equality, so if two references do not match, then they are not equal
      • now, we know that whenever state changes its dependencies are recalculated, and these dependencies are the render fn, so what if a render fn calls another render fn?
        • the render fn would be recomputed, but what if recomputing that render fn is expensive (it in turn calls many other render fns)? this gives rise to:
          • the need for memo: if the inputs to the render fn do not change, then with memo its output does not either (in js each time the render fn is evaluated, a new object is produced and even if the object's properties are the same as the previous result, it is considered different because equality in js is reference equality)
      • sometimes we have variables which are the result of some combination of state: const a = expensiveComputation(b, c) where b and c are state
        • whenever dependencies are recomputed because state changes, expensiveComputation is invoked
        • say we have more states than just b and c, if any of those other states change then a will still be reevaluated and expensiveComputation will be invoked again, but a just depends on b and c and it does not need to be recomputed with each and every state change, this gives rise to:
          • the need for useMemo: sometimes an expression only needs to be recomputed for some particular state change not every state change and useMemo allows us to do that
    • the inputs to a render fn are known as props, props are not reactive but they look reactive because props (can only) change if state changes
      • when a render fn is evaluated it produces a node (React.ReactNode)
      • every render fn has a special prop called children which are the child nodes in the tree produced by the render fn, children is React.ReactNode and it ends up being what is displayed on screen (bit more complicated)
        • when a render fn is evaluated because state changes, it is known as a render and when it happens multiple times it is known as a rerender
        • a child rerendering does not cause its parent to rerender, but whenever a node rerenders, all of its children rerender too
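The reference-equality point above can be shown in plain TypeScript, no React involved; `memoizeByInput` is a hypothetical helper sketching what memoization does to references, not React's actual implementation:

```typescript
// Equality for objects in JS/TS is reference equality:
const a = { type: "div", props: { id: "x" } };
const b = { type: "div", props: { id: "x" } };
const sameShape = a === b; // false: different references

// A "render fn" produces a new object on every call, so its results
// never compare equal, even for identical inputs:
const render = (id: string) => ({ type: "div", props: { id } });
const resultsEqual = render("x") === render("x"); // false

// Memoizing by input stabilises the reference - conceptually what
// React.memo achieves (a sketch, not React's actual code):
function memoizeByInput<I, O>(fn: (input: I) => O): (input: I) => O {
  let lastInput: I | undefined;
  let lastOutput: O | undefined;

  return input => {
    if (input !== lastInput) {
      lastInput = input;
      lastOutput = fn(input);
    }

    return lastOutput as O;
  };
}

const memoizedRender = memoizeByInput(render);
const memoizedEqual = memoizedRender("x") === memoizedRender("x"); // true
```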

presence

  • We need a component which traps exiting and new children, allowing us to perform some effect on the children and, once the effect is complete, remove the children from or add them to the DOM

  • presence has two behaviours:

    • exitBeforeEnter: exit all children first and then enter new children
    • exitWhileEnter = !exitBeforeEnter: exit children and enter new children at the same time
  • https://github.com/nmathew98/s.o.o.n/blob/master/src/Presence/presence.tsx

based on how FramerMotion did their Presence: https://github.com/framer/motion/blob/main/packages/framer-motion/src/components/AnimatePresence/index.tsx

we have a state:

  • forcedRenders: this lets us force a render when the last exiting child is done to remove exited children from the DOM, also good for debugging so that we can keep track of the number of rerenders

we have the refs:

  • isInitialRender: initialised to true
  • exitingChildrenLookupRef: initialised to an empty map, we need this so that we know when the last exiting child is done and we can force a rerender to remove exited children from the DOM
  • currentChildrenRef: initialised to children, this ref is always one render behind children

we have a ref isInitialRender: if isInitialRender is true we do not need to do anything, we just render children and update the flag

  1. we diff children and currentChildren to determine exitingChildren
  2. for each exiting child, we attach animateExit which invokes the exit animation on the child, removes the exiting child from exitingChildrenLookupRef and updates forcedRenders when the last child is done exiting
  3. if exitWhileEnter, then take children and merge it with exitingChildren
    • how do we know where to place exiting children? in the updated children array, in place of all children removed will be false, so we just need to dequeue exitingChildren FIFO and replace each of these false values with an exiting child
  4. if exitBeforeEnter, then take currentChildren and update exiting children (we go through currentChildren and check if the child's key exists in exitingChildrenLookupRef; if it does, then return that value)
    • because currentChildren is always one render behind children, these children are also referencing the old parent state, so we need to look at children and retrieve the latest copy of the child
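Steps 1 and 3 above can be sketched with plain keyed arrays; `KeyedChild`, `diffExiting` and `mergeExiting` are illustrative names, not the actual presence.tsx code:

```typescript
// A stand-in for a keyed React child (hypothetical shape)
interface KeyedChild {
  key: string;
}

// Step 1: diff current children against next children to find exiting ones
const diffExiting = (current: KeyedChild[], next: KeyedChild[]): KeyedChild[] => {
  const nextKeys = new Set(next.map(child => child.key));

  return current.filter(child => !nextKeys.has(child.key));
};

// Step 3 (exitWhileEnter): removed children leave `false` holes in the updated
// children array; fill each hole FIFO with an exiting child
const mergeExiting = (
  next: (KeyedChild | false)[],
  exiting: KeyedChild[],
): (KeyedChild | false)[] => {
  const queue = [...exiting];

  return next.map(slot => (slot === false ? (queue.shift() ?? false) : slot));
};

const currentChildren = [{ key: "a" }, { key: "b" }, { key: "c" }];
const nextChildren = [{ key: "a" }, { key: "c" }];

const exiting = diffExiting(currentChildren, nextChildren);
const exitingKeys = exiting.map(child => child.key).join(",");
// only "b" is exiting

const withHoles: (KeyedChild | false)[] = [{ key: "a" }, false, { key: "c" }];
const mergedKeys = mergeExiting(withHoles, exiting)
  .map(slot => (slot === false ? "hole" : slot.key))
  .join(",");
// the exiting child fills the hole: "a,b,c"
```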

important to remember that it goes: Parent -> Presence -> Motion, so for any child to be exited, state in Parent must be updated, which will trigger a rerender of both Presence and Motion

issues/understanding:

  • the order goes: render fn -> layout effect -> effect:

    • for example:
      • const ref = React.useRef(0);

        console.log(ref.current); // 0

        React.useLayoutEffect(() => {
          console.log(ref.current); // 0
          ref.current = 1;
          console.log(ref.current); // 1
        });

        React.useEffect(() => {
          console.log(ref.current); // 1
        });

        console.log(ref.current); // 0
      • bit obvious in hindsight; the (layout effect -> effect) ordering is explained in the docs, the render fn part was an 'OH' moment
  • if exiting children depend on state in the parent, when they are exited out, they get the old version of the parent's state:

    • we can only clone the old version of the exiting child and that child is referencing the old version of the parent's state
  • the proxy get needed to also forward ref for things to work correctly; so even though exitingChildren was correct the first time, since ref was not being invoked, exiting children never got exited out (both the result of proxy get and PolymorphicMotion need to forward ref)

  • when we remove a child from DOM, it loses all of its state so when it gets rendered again, the child renders as new, initially thought this was a mistake on my part but this was just normal behaviour and makes sense if you think about it

  • https://github.com/nmathew98/s.o.o.n/blob/master/src/Presence/use-presence.ts

    • context is so that clickable items can be disabled while exit animations are in progress (it's also optional for Presence to be nested within a PresenceContext)
  • https://github.com/nmathew98/s.o.o.n/blob/master/src/Motion/utils/index.ts

  • https://github.com/nmathew98/s.o.o.n/blob/master/src/Motion/polymorphic-motion.tsx

    • quite simple: on initial render we attach a ref which plays enter animations; on subsequent renders, we attach a ref which plays transition animations. if initial animations are not provided then transition animations are played on initial render (or else if transition animations are quite stark we get a big jump)
    • transition animations (animate prop): these values are expected to change between subsequent renders (Parent updates the animate prop on Motion, which causes a rerender and the transition to play)
    • with exit animations: we expose an imperative handle which Presence invokes when a Motion child is removed from the DOM
    • we also want to keep track of pending animations and we do this via the pendingAnimation ref because:
      • say an element has both transition animations and hover animations
      • say a user clicks the element and is not hovering on the element anymore, we only want to play the hover out animations after the transition animation is done
  • https://github.com/nmathew98/s.o.o.n/blob/master/src/Motion/factory.tsx

    • PolymorphicMotionFactory is not a react component or a higher order component
      • react components have the fn signature: (props: Record<string, any>) => React.ReactElement
      • PolymorphicMotionFactory has the fn signature: (props: Record<string, any>) => (props: Record<string, any>) => React.ReactElement
      • a higher order component takes a component and returns a component, its fn signature is: (C: (props: Record<string, any>) => React.ReactElement) => (props: Record<string, any>) => React.ReactElement
    • the component is tagged with a symbol so that we can determine in Presence which elements are Motion and which are not (Presence only works with Motion elements), since components returned by Motion (Motion.div, Motion.p, etc) are the result of a factory (dynamically constructed), their render fn reference changes so we cannot rely on the reference to the render fn to see if they are Motion so we have to tag them
    • without memoing, the function reference changes on each rerender so it is effectively a different kind of component which gives rise to unexpected behaviour, so we need to stabilise the function
    • contrast with React.memo: it memoizes the result of calling a component (which is an object - React.ReactElement)
      • React.ReactElement has a type property:
        • for native html elements (div, h1, etc): this property is a string
        • for react components this property contains a reference to the render function, if the render function reference between renders is different, then to react it is a different component
    • screenshots: without-memo (without memoizing) vs with-memo (memoizing)
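The tagging idea can be sketched without React: the factory stamps every component it produces with a shared symbol, so Presence-style code can recognise Motion components even though each factory call yields a new function reference. The names here (`createMotion`, `isMotion`, `MOTION_TAG`) are illustrative, not the repo's actual identifiers:

```typescript
// Shared tag; module-level so every factory output carries the same symbol
const MOTION_TAG: unique symbol = Symbol("motion");

type RenderFn = (props: Record<string, unknown>) => string;
type Tagged = RenderFn & { [MOTION_TAG]?: true };

// Factory: every call returns a brand new render fn, tagged with the symbol
// (a sketch; the real factory returns React components)
const createMotion = (tag: string): Tagged => {
  const render: Tagged = props => `<${tag} ${JSON.stringify(props)} />`;
  render[MOTION_TAG] = true;

  return render;
};

// Presence-style check: reference comparison cannot identify Motion components
// because the factory produces a new reference each time, but the tag can
const isMotion = (fn: RenderFn): boolean => (fn as Tagged)[MOTION_TAG] === true;

const MotionDiv = createMotion("div");
const MotionP = createMotion("p");
const Plain: RenderFn = () => "<span />";

const referencesDiffer = MotionDiv !== MotionP; // distinct references
const divIsMotion = isMotion(MotionDiv); // true via the tag
const plainIsMotion = isMotion(Plain); // false, never tagged
```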

chore: end-to-end api tests

go through (w motivation for) fully random api tests w graphql

context:

  • domain is food delivery
  • we have multiple applications we have not developed ourselves, we ran under the assumption that all is well
    • everything is done in javascript (was and is)
    • initial: 1 main graphql api, 1 react web app, 3 react native apps (iOS & android), db: Mongo
    • final: 1 main graphql api, 5 workers over rabbitmq, 1 react web app, 3 react native apps (iOS & android), db: Mongo
    • deployed on: digital ocean, one of the lower end basic droplets, for the api's (even with workers) and the react web app, through docker
    • there were times during dev (not caught during manual testing and pushed to prod) when the schema was not in sync with the objects returned by each endpoint, so sometimes the mobile apps would throw which causes a crash in react native
    • observability: docker logs and sentry for all apps
    • number of vendors: 10 - 20, number of drivers: 20 - 30 (not active), customers: unsure but we had more than drivers in all (not active)
      • each of them was querying data in intervals
      • given the number of consumers we had and considering it was still a small scale operation, I feel like we should not have run into the issues around resource utilisation that we did
    • push to production and things start breaking left and right, not many tests
      • things start breaking because: server load, unexpected behaviour
    • bug fixes while developing features along with larger scale architecture changes at the same time (everything is changing, no solid ground) and we were also running in production
      • bug fixes and new features self explanatory
      • larger scale architectural changes because the old setup was not feasible for long (we tried for a while but too many of the same issues recurring)
    • so, we needed to determine the boundaries: new stuff can be unit tested but we also needed to ensure that new stuff is compatible with older stuff
      • also for issues that crop up because of server load, we need to know what that looks like
    • each release we had no idea if even the happy path would succeed api side
  • team initially: one senior, one junior (me). down the road (after at least 15 sprints): one senior, two mid, one junior (me)
  • team backgrounds:
    • senior (CONTR) (Java/NodeJS)
    • both mids (PT) - (React Native)
    • me (FT) - Node, React and React Native
  • timeframe: ~11 months (of which 13 two-week sprints were spent just getting up to a baseline level of functionality that works in local testing)
  • company ultimately shut down, unsure as to why, the team just got an email one morning saying the company has shut down and is winding down operations

drawbacks:

  • it didn't help as much as I thought it would - it is quite a bit of effort, though it did help to make sure things didn't break since we were also migrating from monolith to micro-services
  • as the number of micro-services increased so did the resources required for a test run
  • local development became increasingly slower as the number of tests grew (this is a big one), each test run was a complete teardown and setup, which is not feasible as we add more collections
    • writing a new test is:
      • this is the current behaviour we have verified is correct, so we need to generate random data fitting the constraints of what is expected by the api (not fitting the constraints is fine, it just throws; pretty error messages not so much)
        • this requires running tests a few times which is not so easy
  • test run times initially were about 30 mins, managed to bring it down to 10 minutes
  • fully random (each run, random data setup and inserted into test db) is not necessary at all and makes things very complicated
    • in hindsight what was needed is: fixtures during dev and fully random when the tests are invoked by ci/cd
  • tests became complicated enough that you could argue the utility functions to generate test data needed unit testing themselves (though if the test data was wrong the api threw which is fine)
  • tests were flaky (combined with increasing test times not so good) but ci/cd logs were detailed enough
    • two major ones
      • trying to create restaurants concurrently created a write lock which was very very hard to diagnose and get to the bottom of
      • restaurant locations outside of radius, which was also hard to diagnose and get to the bottom of. locations were randomly generated: we had a centre point and the restaurant was supposed to be within x metres of this centre point; points were calculated in 2D but the api verified using a 3rd party which computed in 3D
        • learnt this while testing
      • because each test run took so long, dev cycles were very long which is what mainly contributed to how long it took
    • after these were solved, results were consistent

benefits:

  • more confidence about each release, peace of mind is nice
  • biggest benefit was being able to recognise when the issues that came up were because of the server choking; whether it was worth the effort I'm not so sure (esp as the number of tests increased and the dev time to develop each test increased)
    • it was a system we were not really familiar with and it was hard to recognise what server choking looks like; sometimes we had just odd behaviour (unable to accept checkouts when nothing else was wrong, accepted checkouts not appearing, sometimes checkout status states displayed were just not possible - still unsure what caused this, cpu util. ~90%, etc) during peak times

method:

  • done in typescript with vitest
  • load simulated by increasing the number of parallel api calls, Promise.all (kind of e2e tests and load test at once)
  • load applied could be controlled with a config
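A minimal sketch of what this looks like, assuming a `simulateLoad` helper (hypothetical name) where the parallelism and number of waves come from a config:

```typescript
// Simulate load: bursts of `parallelism` concurrent calls fired via
// Promise.all, repeated for `waves` rounds; the knobs come from a config
interface LoadConfig {
  parallelism: number;
  waves: number;
}

const simulateLoad = async (
  call: () => Promise<number>,
  { parallelism, waves }: LoadConfig,
): Promise<number[]> => {
  const results: number[] = [];

  for (let wave = 0; wave < waves; wave += 1) {
    // each wave is a burst of concurrent calls, awaited together
    const burst = Array.from({ length: parallelism }, () => call());
    results.push(...(await Promise.all(burst)));
  }

  return results;
};

// Stand-in for an api call; the real tests queried the GraphQL api
let invocations = 0;
const fakeApiCall = async (): Promise<number> => {
  invocations += 1;
  return invocations;
};

const results = await simulateLoad(fakeApiCall, { parallelism: 5, waves: 2 });
// results.length === parallelism * waves
```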

misc:

  • tests were separate and in another repo, invoked on deployment of api
  • we could both test individual endpoints and test entire workflows (including simulating stripe payments) (this was a good thing I think, especially as functionality grew being able to verify api side that entire workflows would succeed was a plus but, blocker was mainly the dev time as tests grew)
  • the system also checked that new checkouts happened within some radius (variable radius) of a vendor:
    • how do we verify that this is working as intended without orchestrating the entire system (we have new microservices and an older main api)?
      • create a vendor
      • setup some listings
      • try to checkout from the vendor with two customers one outside and one inside
  • we started with a main api and migrated towards worker model while these tests were being developed
    • during this migration how do we ensure that things as a whole aren't breaking?
    • not many resources
  • most of the load testing tools are for REST api's not GraphQL which was what we were working with
    • new microservices were workers while the main api was graphql
      • some functionality went to workers so required subscriptions to receive the response and some functionality went under graphql and traditional req/res, not many tools in this space (the intent was to eventually be sure that a workflow would succeed)
  • another gripe I had with maybe using something like postman or one of the load testing tools were the yaml configs: annoying and hard to reuse when what we want to verify was that a workflow would succeed not just an endpoint

biggest takeaways:

  • I think at one point when the server kept choking, I was pushing for an increase in compute resources which the senior turned down - I thought that was a bad thing at the time but in hindsight it was good because we took a short term loss for a long term win
    • if we don't let the system break we don't find the problems
  • it's really hard to determine the amount of effort to invest into early stage efforts but in a way it is also a self-fulfilling prophecy: do things like you're going to fail and it is guaranteed. at the same time we were not business owners and it is not our responsibility to ensure a project's overall success, just to give our best recommendations for what we thought was right under our scope and to do our best ultimately
  • early stage projects change very quickly and investing too much time solidifying technical choices is not worth it
    • during the first 13 weeks we wrote a lot of unit tests, only for all of it to be scrapped as we redid our components
    • in a relatively short period of time we went from monolith to micro-services, the api tests couldn't be too close
    • small scope but well done is better than large scope and half working
      • we had validations for customer locations, multiple restaurants under one vendor, restaurants within proximity and other stuff like this
      • all we needed really: a customer being able to buy from a restaurant, drivers being able to update delivery status and receiving order notifications, all the extra fluff was unnecessary and detrimental while we tried to get off the ground
      • when we keep running into the same issues again and again: customer ratings drop, team morale decreases and once the momentum is lost it is hard to reignite it
      • in our case: people just wanted to buy some food, it was invite only and we could manually cancel orders too so the extra fluff that came with the codebase were just things that we had to work around while we figured out how to differentiate ourselves in an already crowded industry
  • finding issues before or when they happen is hard; we were blindsided by a lot of the issues we encountered
    • not sure how we would have determined beforehand that cpu util. would jump to 90%
      • a lot of consumers querying the data in a collection every 5s adds up pretty quickly
    • logs and sentry helped after the fact, not before or during - even when cpu util. jumped to 90% there was not much we could do other than wait
    • feature toggles would have been a good idea immediately after this issue came up I think, or at least some ability to increase the query interval
    • took a few sprints to fix it: first trying to make things more efficient by returning less data, then workers, then querying less data and finally querying less data and returning less data (we still had high cpu util. during peak)
  • do it again:
    • different team composition: 1 product (stakeholders are too close), 1 dev, 1 designer, 2 (manual) testers from day one
      • the problem we had was that we were all focused on delivery of the product but 'product' was a hybrid of what came packaged and what we thought we would need
      • needed a designer down the road anyway
      • one manual tester for regressions and one for new features
      • product doubles as support and liaising with stakeholders
      • reality checks with random people (mix of new eyes and old eyes) every 3rd sprint
      • each viewpoint brings a different set of constraints, the product lies in the intersection (what everybody agrees upon)
      • team composition in actuality:
        • initial: 1 dev on pure dev, 1 dev on dev/product/testing/support, stakeholders: main product
        • final: 3 devs on pure dev, 1 dev on dev/product/testing/support, stakeholders: main product

what would it look like?

  • the core fn used to query the api (we only care about the data in the response, not the type of query)

    • we need to be able to treat normal graphql queries and subscription queries equally
    • for subscriptions we need to wait until a response is returned (timing out after some period if a response is not received)
    • to do this, you would need to parse the graphql query and determine the type of query and move from there
    • using the graphql client used in the applications is a good idea, these clients come with caching functionality which you can optionally test for (graphql client caching does not have the expected behaviour sometimes)
    • json-to-graphql-query is a good library
  • we have fn's for each operation in the schema:

    • for create something like faker to generate the required fields and also allow for overrides where needed, some entities need to be linked together
    • have to organise things such that we return a complete graphql type as specified in the schema, allowing us to compose together operations
    • the challenge was in designing the entities such that operations composed together while allowing for manual input values where needed
      • input shapes do not always line up correctly and we want to avoid more than one make fn for each entity
      • make would hide both a create and get (depending on if we are creating or just getting an existing)
    • avoid going directly to the database so that we exercise as much of the api as possible
      • in the process of describing one workflow:
        • create a restaurant
        • setup listings
        • create a customer
        • create a payment
        • create an order
        • restaurant accepting, etc
        • driver accepting, etc
      • we were able to cover the most important endpoints that we used day to day and there's always the additional tests for each endpoint
  • load testing: this is just running things concurrently (which vitest allows) and then at the core fn used to query we just have to control the rate of requests
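Determining the operation type from the query text can be sketched naively; `operationType` is a hypothetical helper, and a real implementation would use `parse` from the graphql package instead:

```typescript
type OperationType = "query" | "mutation" | "subscription";

// Naive detection: strip GraphQL comments, trim, look at the first keyword;
// shorthand documents starting with `{` are queries
const operationType = (source: string): OperationType => {
  const stripped = source
    .split("\n")
    .map(line => line.replace(/#.*$/, ""))
    .join("\n")
    .trim();

  if (stripped.startsWith("subscription")) return "subscription";
  if (stripped.startsWith("mutation")) return "mutation";

  return "query";
};

const sub = operationType(`
  # wait for the worker's response over a subscription
  subscription OnOrderUpdated {
    orderUpdated { id status }
  }
`);
const mut = operationType(`mutation CreateOrder { createOrder { id } }`);
const shorthand = operationType(`{ orders { id } }`);
// sub === "subscription", mut === "mutation", shorthand === "query"
```

subscription queries detected this way can then be routed through a waiting-with-timeout path while plain queries go through normal req/res.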

the end goal is to produce tests that look like:

// composing new entities
const customer = await makeCustomer({ isNewCustomer: true });
const restaurant = await makeRestaurant({ isNewRestaurant: true });

const paymentMethod = await makePaymentMethod({ customer });

const newOrder = await makeOrder({ customer, paymentMethod, restaurant, isNewOrder: true });

expect(newOrder.customer.name).toBe(customer.name);
expect(newOrder.price).toBe(123.45);

// reusing existing entities
const customer = await makeCustomer({ email: "[email protected]" });

const existingOrder = await makeOrder({ id: 12345 });

expect(existingOrder.customer.name).toBe(customer.name);

doing things this way allows for composing workflows encountered in real world scenarios, which I thought was extremely beneficial, along with allowing us to test an endpoint using the same fn's
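The make-style helpers behind tests like these can be sketched as create-or-get fns with generated defaults and overrides; this is a hypothetical in-memory sketch, not the real helpers, which exercised the GraphQL api:

```typescript
interface Customer {
  id: number;
  name: string;
  email: string;
}

// Stand-in store; the real helpers went through the api instead
const customers = new Map<number, Customer>();
let nextId = 1;

const randomName = () => `customer-${Math.floor(Math.random() * 1e6)}`;

// make = create (generated defaults + overrides) or get an existing entity
const makeCustomer = async (
  overrides: Partial<Customer> & { isNewCustomer?: boolean } = {},
): Promise<Customer> => {
  const { isNewCustomer, ...fields } = overrides;

  // get: an id without isNewCustomer means fetch the existing entity
  if (!isNewCustomer && fields.id !== undefined) {
    const existing = customers.get(fields.id);

    if (!existing) throw new Error(`no customer ${fields.id}`);

    return existing;
  }

  // create: generated defaults, overridable field by field
  const created: Customer = {
    id: nextId++,
    name: randomName(),
    email: `${randomName()}@example.test`,
    ...fields,
  };
  customers.set(created.id, created);

  return created;
};

const fresh = await makeCustomer({ isNewCustomer: true });
const fetched = await makeCustomer({ id: fresh.id });
// fetched is the same entity as fresh, so operations compose
```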

chore: `realworld` react

epic

files:

chore: misc

  • the clj way of pure data vs using classes (in other languages)
    • disjoint set example
    • why classes for complex data structures

"So as my friend always says: 'static typing doesn't solve problems I have'"

Timothy Baldridge

  • I think being a good developer is much like being a good citizen: the whole is greater than the sum of its parts; a society grows great when old men plant trees whose shade they will never know, an institution shrinks when its members are more concerned with their scope of work than the objective - a culture is not formed by directives and instructions, it is the summation of the little actions of the people who partake in it daily, and a culture that seeks to create leaders culminates in the whole being greater than the sum of its parts
    • The American Dream: rule for thee but not for me
      • Rockefellers
        • John D. Rockefeller had humble origins as an assistant bookkeeper and modest ambitions of making $100,000 ($3.14m in 2022), in stark contrast to becoming the most prominent businessman in American history, who spearheaded the first monopoly with a personal wealth that amounted to almost 3% of US GDP in 1913. He was a frugal and pious man who donated generously, with modern day philanthropy being his brainchild (he founded the University of Chicago). Luke 6:38: "Give, and it will be given to you. A good measure, pressed down, shaken together and running over, will be poured into your lap. For with the measure you use, it will be measured to you."
        • Titan - John D. Rockefeller
        • He believed his task to be that of the Lord's fiduciary, responsible for seeing the wealth that was measured to him well invested and he believed that nonprofit institutions should be even more risk averse with money than business organisations... "but a university invests the funds of those who are seeking to make an investment of money for the good of humanity, which shall last, if possible, as long as the world stands." (P176 Titan)
        • His legacy includes: The University of Chicago, Rockefeller University, Central Philippine University, ExxonMobil, Chevron, BP
        • Shady business and anticompetitive practices and a lack of process around transparency
        • The values Rockefeller lived by were in direct opposition to the values embodied by the industry he helped create, why?
        • The leaders who were a product of the Standard Oil culture, and the Standard Oil culture which was a product of Rockefeller - there were people buying into the culture
        • "Free markets, if left completely to their own devices, can wind up terribly unfree"
        • The lack of accountability on the part of employees of Standard Oil, what was grey became black as the crude oil they refined
    • The view is not that great
      • Patagonia (todo)
        • Yvon Chouinard as a person
        • Patagonia embodies the values of its founder Yvon Chouinard, why??
        • The leaders who are a product of Patagonia culture and the Patagonia culture which is a product of Yvon Chouinard (do they?)
          • what was done differently? people at Patagonia were taught the importance of their mission, not just directives on what to achieve but contextual information that highlighted the pressing why (foreword)
        • Controversies and how they were handled
    • another section on the connection to being a good developer
      • a developer who is the Standard Oil culture, what does that look like
      • a developer who is the Patagonia culture, what does that look like
      • why is one 'better' than the other?
        • slow moving currents puddle up into a lake, so with knowledge, a slow moving industry results in knowledge puddling up, software engineering is not a slow moving industry
    • Necessary skills
    • Demonstrable ability: but when does it apply?
    • Force multipliers
    • A never-ending marathon
    • Stewards to the future

chore: `serve`

epic

"You can put the same function on a bunch of different servers. And it's all running just as fine as opposed to trying to take this big, massive, clunky system that depends on a bunch of other services running, and try and get it to scale up."

Anjana Vakil

go through serve (w motivation for the project in the first place):

  • src/serve/serve.ts

  • route stuff

  • untested functionality for federated schema

  • dependency injection mechanism

  • authorization: is the interface enough for every case?

  • bundling

    • bundling enables portability across environments (Node, Deno, Cloudflare Workers, Bun) with the use of unenv
    • with the use of pkg we allow for compiled executables (with bytecode generation)
    • we did not have to include unenv or pkg as part of core; we can inject them in as necessary since rollup build options are exposed, keeping serve light
  • if we adhere to Web API standards in the application layer as much as possible, we can switch environments as they come without much hassle; keeping most of the environment-specific code in the core (serve) allows all dependents to be updated for environment agnosticism easily, and keeps platform-specific issues in one place
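A minimal sketch of how that wiring can look, assuming unenv's `env` preset API and @rollup/plugin-alias; the file names and options below are illustrative, not serve's actual build config:

```typescript
// rollup.config.ts -- illustrative only, not serve's actual config
import alias from "@rollup/plugin-alias";
import { env, nodeless } from "unenv";

// `env` merges presets into alias/inject/polyfill tables that map
// node built-ins onto environment-agnostic shims
const { alias: entries } = env(nodeless);

export default {
  input: "src/serve/serve.ts",
  // aliasing node built-ins keeps the bundle runnable on workerd, deno, etc.
  plugins: [alias({ entries })],
  output: { file: "dist/serve.mjs", format: "es" },
};
```

Because the rollup options are exposed rather than baked in, a consumer can layer pkg on top of the emitted bundle to produce a compiled executable without the core depending on either tool.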

  • plugin system

  • switching between GraphQL and rest style routes with a single config flag
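As a hedged sketch of what a single-flag switch can look like (the `useGraphQL` flag and route shapes below are hypothetical, not serve's actual API):

```typescript
// Hypothetical config: one flag decides how operations are exposed
interface RouteConfig {
  useGraphQL: boolean;
}

// With GraphQL, every operation is served from one endpoint;
// with REST, each operation gets its own path
const mountPath = (config: RouteConfig, operation: string) =>
  config.useGraphQL ? "/api" : `/api/${operation}`;

mountPath({ useGraphQL: true }, "articles"); // single GraphQL endpoint
mountPath({ useGraphQL: false }, "articles"); // per-operation REST route
```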

  • support for subscriptions with SSE
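The SSE transport boils down to a plain-text wire format; a minimal sketch (the helper name is illustrative):

```typescript
// Each SSE message is a block of `field: value` lines ending in a blank line;
// a subscription handler writes one block per published result
const formatSSE = (event: string, data: unknown) =>
  `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;

// A handler would set `Content-Type: text/event-stream`, keep the
// response open, and write one formatted block as each result arrives
formatSSE("next", { likes: 42 });
```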

  • issues:

    • most of the server-side ecosystem is CJS; there was an issue with bundling the stripe package which breaks apps using it, as CJS is transformed to ESM
      • never got around to determining the core issue, but the preliminary investigation pointed to the dependency on Node's EventEmitter
    • strict separation of concerns: separation of concerns is only useful when concretions cannot be depended upon (unit testing, swapping out implementations, etc); defensively putting an interface between everything was a bad idea. better to have a mechanism in place (DI) but depend on concretions (direct imports or inferred types) initially, until the dust settles
      • a bad idea because until the dust settles any change is going to propagate upwards, so the interface is effectively useless: it does not help stabilise higher level modules, unit tests will still be affected, and there will probably be only one implementation
      • https://preslav.me/2023/12/15/golang-interfaces-are-not-meant-for-that/
      • interfaces are levers but there is no point in having a lever at every point
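The "depend on concretions" point can be sketched in TypeScript; the names below are hypothetical, but the pattern is to derive any seam from the concretion rather than hand-write an interface up front:

```typescript
// Hypothetical concretion: just export it and depend on it directly
const createUser = (name: string) => ({ id: 1, name });

// When a seam is actually needed (tests, a second implementation),
// derive it from the concretion instead of designing it up front
type CreateUser = typeof createUser;

// Consumers accept the inferred type, so swapping implementations later
// is just passing a different function; no interface redesign required
const registerUser = (create: CreateUser, name: string) => create(name);

registerUser(createUser, "Ada");
```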
    • being able to swap out databases is not possible (199-event-sourcing), at least not for big migrations like non-relational -> relational: APIs are tightly coupled to databases. switching between databases in the same family should be accounted for, but using an ORM for 'ease of use' (in a sense what I was trying to do) solves nothing in the end
      • https://codedamn.com/news/product/dont-use-prisma
      • take Prisma: there are so many layers that the performance cost is not worth it, and any devex benefits are nullified when you run into issues
        • node objects are FFI'd to Rust, then sent over HTTP to the database
      • query builders are good, ORMs not so much (LINQ is an exception, I think it's pretty good)
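A rough illustration of why query builders age better than heavy ORMs: the builder below is hypothetical, but its output is plain parameterised SQL, with no hidden layers between the call and the query:

```typescript
// Hypothetical minimal builder: the result is inspectable SQL plus parameters
const select = (table: string, where: Record<string, unknown>) => {
  const keys = Object.keys(where);
  const clause = keys.map((key, i) => `${key} = $${i + 1}`).join(" AND ");
  return {
    text: `SELECT * FROM ${table} WHERE ${clause}`,
    values: keys.map(key => where[key]),
  };
};

select("users", { id: 1 }); // text: "SELECT * FROM users WHERE id = $1", values: [1]
```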

possible roadmap:

  • by bundling things up, we can also leverage developments in the react native space, see hermes

https://developers.google.com/closure/compiler

bets:

  • keep betting on javascript; in web development it will remain the lingua franca
    • js is only going to keep getting faster, see Maglev
  • the language as a whole is moving towards a more functional style, see recent developments: toSpliced
    • with the move towards a functional style, we allow for being declarative; being declarative means js can start serving as an intermediary language before being optimised
    • important for the language as a whole to move towards enabling more functional styles, it dictates the culture and frame of reference by which everyone attacks the problems they face
  • language capabilities are increasing without breaking backward compat (and backward compat cannot be broken)
    • Records & Tuples
    • Pattern Matching
    • Sets
    • Maps
    • Arrays: fromAsync, with
    • Objects: groupBy
    • when it comes to backward compat js is in a unique position: it is the only language which cannot break backward compat. python, for example, is allowed to break it, with older python code eventually not benefiting from optimisations; old js code will always be able to benefit from optimisations from future developments
    • Atwood's Law: "Any application that can be written in JavaScript, will eventually be written in JavaScript"
  • js does not have parallelisation built into the language, but it does have workers, which enable parallelism in a style reminiscent of goroutines, see: Piscina
  • a lot of languages support transpiling to javascript: nim, clojurescript, scala, typescript
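The functional direction above can be sketched by writing the semantics of two of the newer additions (`Array.prototype.toSpliced`, `Object.groupBy`) as plain functions; on recent runtimes (Node 20+/21+) the built-ins behave the same way:

```typescript
// toSpliced: like splice, but returns a new array and leaves the original intact
const toSpliced = <T>(arr: T[], start: number, deleteCount: number, ...items: T[]) => [
  ...arr.slice(0, start),
  ...items,
  ...arr.slice(start + deleteCount),
];

// groupBy: partition items into keyed buckets without mutating the input
const groupBy = <T>(items: T[], key: (item: T) => string) =>
  items.reduce<Record<string, T[]>>((groups, item) => {
    const k = key(item);
    (groups[k] ??= []).push(item);
    return groups;
  }, {});

toSpliced([1, 2, 3, 4], 1, 2, 9); // → [1, 9, 4]
groupBy([1, 2, 3, 4], n => (n % 2 ? "odd" : "even")); // → { odd: [1, 3], even: [2, 4] }
```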

escape hatch from the imposed structure

chore: maintainability

epic

  • why consistency is king
  • when & how to break consistency (examples)
  • keeping cyclomatic complexity low (examples)
  • reduce cognitive load (examples)
  • how to group things (examples)
    • rate of change (how to determine)
  • what is stable, what is simple (vids)
  • graphql deprecating fields and types when client does not auto-update
  • bringing business rules front & centre (but don't make baklava)
  • web applications & keeping things mouldable
  • the importance of constraints (in dependencies, in what is allowed): clarity. by constraining we aid clarification of requirements and what is absolutely necessary
    • constraints allow for seams to show themselves
  • i think a tool is best suited to a task based on how much friction is felt; if you feel the same amount of friction, then the objective benefits of one over the other are negligible
    • how many lines does it take to concisely express the use case? (more lines = more friction)
    • variables that go into friction
      • scale
      • team size
      • need for modularity (for example: if we are building a monolith then a language like clojure, c# or go might be more suitable over the long term, if we are building microservices then node is perfectly fine because as the need arises we can migrate easily)
      • ecosystem and community size (when it comes to languages especially)
      • deployments
        • is there a need for agility in where/how the application is deployed (yes -> go with one language, no -> you have options)
        • take https://github.com/nmathew98/realworld/tree/the-common
          • multi cloud, multi language, multi database, multi pita
          • impossible to maintain something like this in one repo
      • reaction time in industry/domain
    • for languages in the context of web applications:
      • heavy data processing: c#, clojure
      • prototypes, experiments & form processing: node
      • form processing at scale: clojure, c#, go
    • frontend frameworks:
      • react vs vue vs svelte: not much of a difference
      • react is least opinionated so there is work to be done to setup and evolve a structure
        • there are already good ways to setup a react app
        • can sprinkle a bit of react on mostly static sites
      • vue: ability to lay a structure much like react or there is a recommended structure to things much like svelte
      • svelte: comes with a predefined structure
    • databases:
      • most of the time relational is best
      • nosql for one off stuff
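As one example of the "keeping cyclomatic complexity low" point above, a hypothetical shipping-cost function: replacing a chain of branches with a lookup table removes decision points without changing behaviour:

```typescript
// High cyclomatic complexity: one decision point per case
const shippingCostBranchy = (region: string) => {
  if (region === "domestic") return 5;
  else if (region === "regional") return 10;
  else if (region === "international") return 25;
  else return 0;
};

// Lower complexity: the cases become data, leaving a single lookup
const SHIPPING_COSTS: Record<string, number> = {
  domestic: 5,
  regional: 10,
  international: 25,
};
const shippingCost = (region: string) => SHIPPING_COSTS[region] ?? 0;

shippingCostBranchy("domestic"); // → 5
shippingCost("domestic"); // → 5
```

Adding a new region to the branchy version adds another branch to test; adding one to the table version is a data change only.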
