Topcoder TaaS API

Tech Stack

Local Setup

Requirements

Steps to run locally

  1. Make sure you are using Node v12+ (check with node -v). We recommend using NVM to quickly switch to the right version:

    nvm use
  2. 📦 Install npm dependencies

    npm install
  3. ⚙ Local config

    1. In the taas-apis root directory, create a .env file with the following environment variables. Values for the Auth0 config should be shared with you on the forum.

      # Auth0 config
      AUTH0_URL=
      AUTH0_AUDIENCE=
      AUTH0_AUDIENCE_UBAHN=
      AUTH0_CLIENT_ID=
      AUTH0_CLIENT_SECRET=
      # If you would like to test the Interview Workflow, then configure Nylas as per ./docs/guides/Setup-Interview-Workflow-Locally.md
      NYLAS_CLIENT_ID=
      NYLAS_CLIENT_SECRET=
      NYLAS_SCHEDULER_WEBHOOK_BASE_URL=
      # Locally deployed services (via docker-compose)
      ES_HOST=http://dockerhost:9200
      DATABASE_URL=postgres://postgres:postgres@dockerhost:5432/postgres
      BUSAPI_URL=http://dockerhost:8002/v5
      TAAS_API_BASE_URL=http://localhost:3000/api/v5
      # stripe
      STRIPE_SECRET_KEY=
      CURRENCY=usd
      • Values from this file are automatically used by many npm commands.
      • ⚠️ Never commit this file or its copy to the repository!
    2. Set dockerhost to point to the IP address of Docker. The Docker IP address depends on your system. For example, if Docker runs on IP 127.0.0.1, add the following line to your /etc/hosts file:

      127.0.0.1       dockerhost
      

      Alternatively, you may update the .env file and replace dockerhost with your Docker IP address.
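
      Once the services from step 4 are up, you can quickly verify that dockerhost resolves (a small check script, not part of this repo):

        // save as check-dockerhost.js and run with: node check-dockerhost.js
        const http = require('http')
        http.get('http://dockerhost:9200', (res) => {
          console.log('Elasticsearch responded with status', res.statusCode)
        }).on('error', (err) => {
          console.error('dockerhost is not reachable:', err.message)
        })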

  4. 🚢 Start docker-compose with the services required to run Topcoder TaaS API locally

    npm run services:up

    Wait until all containers are fully started. As a good indicator, wait until taas-es-processor has successfully started by viewing its logs:

    npm run services:logs -- -f taas-es-processor
    Example of good logs:
    • first, it waits for kafka-client to create all the required topics and exit; you will see:
    tc-taas-es-processor  | Waiting for kafka-client to exit....
    
    • after that, taas-es-processor starts itself. Make sure it successfully connected to Kafka: you should see a line with the text Subscribed to taas. for each topic:
    tc-taas-es-processor | [2021-04-09T21:20:19.035Z] app INFO : Starting kafka consumer
    tc-taas-es-processor | 2021-04-09T21:20:21.292Z INFO no-kafka-client Joined group taas-es-processor generationId 1 as no-kafka-client-076538fc-60dd-4ca4-a2b9-520bdf73bc9e
    tc-taas-es-processor | 2021-04-09T21:20:21.293Z INFO no-kafka-client Elected as group leader
    tc-taas-es-processor | 2021-04-09T21:20:21.449Z DEBUG no-kafka-client Subscribed to taas.role.update:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.450Z DEBUG no-kafka-client Subscribed to taas.role.delete:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.451Z DEBUG no-kafka-client Subscribed to taas.role.requested:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.452Z DEBUG no-kafka-client Subscribed to taas.jobcandidate.create:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.455Z DEBUG no-kafka-client Subscribed to taas.job.create:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.456Z DEBUG no-kafka-client Subscribed to taas.resourcebooking.delete:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.457Z DEBUG no-kafka-client Subscribed to taas.jobcandidate.delete:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.458Z DEBUG no-kafka-client Subscribed to taas.jobcandidate.update:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.459Z DEBUG no-kafka-client Subscribed to taas.resourcebooking.create:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.461Z DEBUG no-kafka-client Subscribed to taas.job.delete:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.463Z DEBUG no-kafka-client Subscribed to taas.workperiod.update:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.466Z DEBUG no-kafka-client Subscribed to taas.workperiod.delete:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.468Z DEBUG no-kafka-client Subscribed to taas.workperiod.create:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.469Z DEBUG no-kafka-client Subscribed to taas.workperiodpayment.update:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.470Z DEBUG no-kafka-client Subscribed to taas.workperiodpayment.delete:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.471Z DEBUG no-kafka-client Subscribed to taas.workperiodpayment.create:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.472Z DEBUG no-kafka-client Subscribed to taas.action.retry:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.473Z DEBUG no-kafka-client Subscribed to taas.job.update:0 offset 0 leader kafka:9093
    tc-taas-es-processor | 2021-04-09T21:20:21.474Z DEBUG no-kafka-client Subscribed to taas.resourcebooking.update:0 offset 0 leader kafka:9093
    tc-taas-es-processor | [2021-04-09T21:20:21.475Z] app INFO : Initialized.......
    tc-taas-es-processor | [2021-04-09T21:20:21.479Z] app INFO : common.error.reporting,taas.job.create,taas.job.update,taas.job.delete,taas.jobcandidate.create,taas.jobcandidate.update,taas.jobcandidate.delete,taas.resourcebooking.create,taas.resourcebooking.update,taas.resourcebooking.delete,taas.workperiod.create,taas.workperiod.update,taas.workperiod.delete,taas.workperiodpayment.create,taas.workperiodpayment.update,taas.interview.requested,taas.interview.update,taas.interview.bulkUpdate,taas.role.requested,taas.role.update,taas.role.delete,taas.action.retry
    tc-taas-es-processor | [2021-04-09T21:20:21.480Z] app INFO : Kick Start.......
    tc-taas-es-processor | ********** Topcoder Health Check DropIn listening on port 3001
    tc-taas-es-processor | Topcoder Health Check DropIn started and ready to roll
    

    If you want to learn more about the docker-compose configuration, see local/docker-compose.yml.

    This docker-compose file starts the following services:

    Service            Name               Port
    PostgreSQL         postgres           5432
    Elasticsearch      elasticsearch      9200
    Zookeeper          zookeeper          2181
    Kafka              kafka              9092
    tc-bus-api         tc-bus-api         8002
    taas-es-processor  taas-es-processor  5000
    • as many of the Topcoder services in this docker-compose require Auth0 configuration for M2M calls, our docker-compose file passes the environment variables AUTH0_CLIENT_ID, AUTH0_CLIENT_SECRET, AUTH0_URL, AUTH0_AUDIENCE, AUTH0_PROXY_SERVER_URL to its containers. docker-compose takes them from the .env file if provided.

    • docker-compose automatically creates the Kafka topics used by taas-es-processor, as listed in local/kafka-client/topics.txt.

    • To view the logs from any container inside docker-compose use the following command, replacing SERVICE_NAME with the corresponding value under the Name column in the above table:

      npm run services:logs -- -f SERVICE_NAME
    • If you want to modify the code of any of the services run inside this docker-compose file, you can stop that service with the command docker-compose -f local/docker-compose.yml stop <SERVICE_NAME> and run it separately, following its README file.

    NOTE: In production these dependencies / services are hosted & managed outside Topcoder TaaS API.

  5. ♻ Init DB, ES

    npm run local:init

    This command will do 3 things:

    • create Database tables (dropping them first if they exist)
    • create Elasticsearch indexes (dropping them first if they exist)
    • import demo data into the Database and index it to Elasticsearch (clears any existing data first)
  6. 🚀 Start Topcoder TaaS API

    npm run dev

    Runs the Topcoder TaaS API using nodemon, so it restarts automatically whenever a file is updated. The Topcoder TaaS API will be served on http://localhost:3000.

Working on taas-es-processor locally

When you run taas-apis locally as per "Steps to run locally", taas-es-processor is run for you automatically together with the other services inside Docker via npm run services:up.

To change and test taas-es-processor locally, follow these steps:

  1. Stop taas-es-processor inside docker by running docker-compose -f local/docker-compose.yml stop taas-es-processor
  2. Run taas-es-processor separately from the source code. npm run services:up has already started all the dependencies for both taas-apis and taas-es-processor, so the only thing you need to do to run taas-es-processor locally is clone the taas-es-processor repository and, inside the taas-es-processor folder, run:
    • nvm use - to use the correct Node version

    • npm install

    • Create a .env file with the following environment variables. Values for the Auth0 config should be shared with you on the forum.

      # Auth0 config
      AUTH0_URL=
      AUTH0_AUDIENCE=
      AUTH0_CLIENT_ID=
      AUTH0_CLIENT_SECRET=
      • Values from this file are automatically used by many npm commands.
      • ⚠️ Never commit this file or its copy to the repository!
    • npm run start

NPM Commands

Command                    Description
npm run lint Check for lint errors.
npm run lint:fix Check for lint errors and fix them automatically when possible.
npm run build Build source code for production run into dist folder.
npm run start Start app in the production mode from prebuilt dist folder.
npm run dev Start app in the development mode using nodemon.
npm run test Run tests.
npm run init-db Initializes Database.
npm run create-index Create Elasticsearch indexes. Use -- --force flag to skip confirmation
npm run delete-index Delete Elasticsearch indexes. Use -- --force flag to skip confirmation
npm run data:import <filePath> Imports data into ES and db from filePath (./data/demo-data.json is used as default). Use -- --force flag to skip confirmation
npm run data:export <filePath> Exports data from ES and db into filePath (./data/demo-data.json is used as default). Use -- --force flag to skip confirmation
npm run index:all Indexes all data from db into ES. Use -- --force flag to skip confirmation
npm run index:jobs <jobId> Indexes job data from db into ES, if jobId is not given all data is indexed. Use -- --force flag to skip confirmation
npm run index:job-candidates <jobCandidateId> Indexes job candidate data from db into ES, if jobCandidateId is not given all data is indexed. Use -- --force flag to skip confirmation
npm run index:resource-bookings <resourceBookingsId> Indexes resource bookings data from db into ES, if resourceBookingsId is not given all data is indexed. Use -- --force flag to skip confirmation
npm run index:roles <roleId> Indexes roles data from db into ES, if roleId is not given all data is indexed. Use -- --force flag to skip confirmation
npm run services:up Start services via docker-compose for local development.
npm run services:down Stop services via docker-compose for local development.
npm run services:logs -- -f <service_name> View logs of some service inside docker-compose.
npm run services:rebuild -- -f <service_name> Rebuild service container ignoring cache (useful when something was pushed to the Git repository of a service).
npm run local:init Recreate Database and Elasticsearch indexes and populate demo data for local development (removes any existent data).
npm run local:reset Recreate Database and Elasticsearch indexes (removes any existent data).
npm run cov Code Coverage Report.
npm run migrate Run any migration files which haven't run yet.
npm run migrate:undo Revert most recent migration.
npm run demo-email-notifications Listen to email notification Kafka events and render all the emails into the ./out folder. See its readme for details.
npm run emsi-mapping Map EMSI tags to Topcoder skills.

Import and Export data

📤 Export data

To export data to the default file data/demo-data.json, run:

npm run data:export

If you want to export data to another file, run:

npm run data:export -- --file path/to-file.json
  • The list of models that will be exported is defined in scripts/data/exportData.js.

📥 Import data

⚠️ This command clears any existing data in the DB and ES before importing.

During import, data is first imported into the database and then indexed from the database into the Elasticsearch index.

To import data from the default file data/demo-data.json, run:

npm run data:import

If you want to import data from another file, run:

npm run data:import -- --file path/to-file.json
  • The list of models that will be imported is defined in scripts/data/importData.js.

Kafka commands

If you've used docker-compose with the file local/docker-compose.yml during local setup to spawn Kafka & Zookeeper, you can use the following commands to manipulate Kafka topics and messages (replace TOPIC_NAME with the name of the desired topic):

Create Topic

docker exec tc-taas-kafka /opt/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --topic TOPIC_NAME

List Topics

docker exec tc-taas-kafka /opt/kafka/bin/kafka-topics.sh --list --zookeeper zookeeper:2181

Watch Topic

docker exec tc-taas-kafka /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic TOPIC_NAME

Post Message to Topic (from stdin)

docker exec -it tc-taas-kafka /opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic TOPIC_NAME
  • Enter or copy/paste the message into the console after starting this command.

Email Notifications

We have various email notifications. For example, many emails are sent to support the Interview Scheduling Workflow. All email templates are placed inside the data/notification-email-templates folder.

Add a new Email Notification

To add a new email notification:

  1. Each email notification needs to have a unique topic identifier of the shape taas.notification.{notification-type}, where {notification-type} is unique for each email notification type.
  2. Create a new HTML template inside the folder data/notification-email-templates (you may duplicate any existing template to reuse existing styles). Name it the same as the topic: taas.notification.{notification-type}.html.
  3. Create a corresponding config in the file ./config/email_template.config.js, section notificationEmailTemplates.
  4. Name the environment variable the same as the topic, but uppercased, with all special symbols replaced by _ and the suffix _SENDGRID_TEMPLATE_ID added (see the snippet after this list):
    • For example, topic taas.notification.job-candidate-resume-viewed would have the corresponding environment variable TAAS_NOTIFICATION_JOB_CANDIDATE_RESUME_VIEWED_SENDGRID_TEMPLATE_ID.
  5. When deploying to DEV/PROD, someone has to create a new Sendgrid template, fill in the subject and email HTML template inside the Sendgrid UI by copy/pasting the HTML file from the repo, and then set the environment variable to the template ID provided by Sendgrid.
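
The naming rule from step 4 can be expressed as a one-liner (an illustration only, not code from this repo):

    const topic = 'taas.notification.job-candidate-resume-viewed'
    // uppercase the topic, replace every non-alphanumeric symbol with "_", add the suffix
    const envVarName = topic.toUpperCase().replace(/[^A-Z0-9]/g, '_') + '_SENDGRID_TEMPLATE_ID'
    console.log(envVarName) // => TAAS_NOTIFICATION_JOB_CANDIDATE_RESUME_VIEWED_SENDGRID_TEMPLATE_ID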

Test/Render Email Notifications Locally

To test and render email notifications locally, run the special script npm run demo-email-notifications. Before running it, follow its README, as you have to set some additional environment variables first (add them into the .env file).

  • This script first updates demo data to create situations that trigger notifications.
  • It then listens to Kafka events and renders the email notifications into the ./out folder.

DB Migration

  • npm run migrate: run any migration files which haven't run yet.
  • npm run migrate:undo: revert most recent migration.

Configuration for migration is at ./config/config.json.

The following parameters can be set in the config file or via env variables:

  • url: set via env DATABASE_URL; database url
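
For reference, migration files inside the migrations folder are typically shaped like this (a minimal sketch assuming Sequelize-style migrations; the table and column names below are hypothetical):

    // migrations/<timestamp>-add-example-column.js (hypothetical example)
    module.exports = {
      up: async (queryInterface, Sequelize) => {
        // apply the schema change
        await queryInterface.addColumn('jobs', 'exampleColumn', {
          type: Sequelize.STRING,
          allowNull: true
        })
      },
      down: async (queryInterface) => {
        // revert the schema change (used by npm run migrate:undo)
        await queryInterface.removeColumn('jobs', 'exampleColumn')
      }
    }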

Testing

  • Run npm run test to execute unit tests.
  • Run npm run cov to execute unit tests and generate a coverage report.

📋 Code Guidelines

General Requirements

  • Split code into reusable methods where applicable.
  • Lint should pass.
  • Unit tests should pass.

Documentation and Utils

When we add, update, or delete models and/or endpoints, we have to make sure that we keep the documentation and utility scripts up to date.

  • Swagger
  • Postman
  • ES Mapping
  • Reindex
    • NPM command index:all should re-index data in all ES indexes.
    • There should be an individual NPM command index:* which would re-index data only in one ES index.
  • Import/Export
    • NPM commands data:import and data:export should support importing/exporting data from/to all the models.
  • Create/Delete Index
    • NPM commands create-index and delete-index should support creating/deleting all the indexes.
  • DB Migration
    • If there are any updates in DB schemas, create a DB migration script inside the migrations folder which makes any necessary updates to the DB schema.
    • Test that when we migrate the DB from the previous state using npm run migrate, we get exactly the same DB schema as if we created the DB from scratch using the command npm run init-db force.

EMSI mapping

Run npm run emsi-mapping to map EMSI tags to Topcoder skills. It will take about 15 minutes to create the mapping file script/emsi-mapping/emsi-skils-mapping.js.

taas-apis's Issues

Avoid conflicts for Topcoder/ubahn users

Follow up from #23 (comment)

Sometimes we get a conflict error while ensuring that the user exists in ubahn (when we create it):

(screenshot)

It is possible the handle stored in your token was already used to create an ubahn user and was associated with a userId other than the userId stored in your token.
So when the getUserId function tried to use the userId stored in your token to find an existing ubahn user, it got no record. It then attempted to create a new ubahn user with the handle stored in your token, unsurprisingly ending up with a conflict error.

Currently the data on Topcoder Dev is not reliable and stable, IMO. To avoid the conflict error, I would suggest creating a new tc member account with a different username so you won't clash with any existing data.

Alternatively, you could find the conflicting ubahn user and correct its associated v3 userId (the externalId property) with the one in your user token.

Unit tests

Currently, 24 unit tests are broken:

(screenshot)

  1. [done] We have to fix them.
  2. We have to also decide what level of unit test coverage we would like to target for this repository.
  3. After deciding on the target coverage we have to create unit tests for existent code.
  4. All new tasks/fixes should be covered with unit tests.

@nkumar-topcoder do you have any preferences here?

Cannot get team "9050" by id

Using our DEV API I can get a list of teams:

(screenshot)

But when I get this team by id 9050, I get an error "Not Found", though I can get all other teams by id.

(screenshot)

I tried with the admin user TonyJ and with pshah_manager (with the bookingmanager role).

@nkumar-topcoder do you have any idea why it could happen? This happens only for this particular team with id 9050.

[$75] 500 Internal Server error returned for the GET/taas-teams endpoints for some users

A 500 Internal Server error is displayed for the following users / endpoints.
Users: dan_developer, tester1234 (role: topcoder member)
1. GET /taas-teams
2. GET /taas-teams/:id
3. GET /taas-teams/:id/jobs/:jobId
(screenshot)

User: topcoderconnect (role: booking manager)
2. GET /taas-teams/:id
3. GET /taas-teams/:id/jobs/:jobId

(screenshots)

Note: for user tonyj (role: admin) all three endpoints work correctly.
(screenshots)

[$100] Add new endpoint for getting Topcoder Skills

Motivation

We need an endpoint to allow any Topcoder user to retrieve Topcoder Skills. We already have an endpoint in the U-Bahn API, but it has restrictions which don't allow any user to retrieve Topcoder Skills. To bypass this restriction we will create a proxy endpoint in the TaaS API which would be allowed for any Topcoder user, while internally it would call U-Bahn using an M2M token.

Requirements

Add a new endpoint GET /v5/taas-teams/skills which would return a list of Topcoder Skills. (We use such a URL with taas-teams so we don't have to add a new config to Topcoder Gateway.)

  • Any logged-in Topcoder User should be able to call it
  • Internally, this endpoint should call endpoint GET /v5/skills?skillProviderId=${TOPCODER_SKILL_PROVIDER_ID} using an M2M token (see the sketch after this list)
  • It should support query params: perPage, page, orderBy (pass them from the request)
  • TOPCODER_SKILL_PROVIDER_ID should be configured. We would set different values for DEV and PROD envs.
  • Update Postman and Swagger
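
A rough sketch of how the proxy could work (hypothetical code, not from this repo; getM2MToken and the config keys are assumed helpers):

    const axios = require('axios')

    // assumptions: getM2MToken() returns an Auth0 M2M token,
    // config holds UBAHN_API_URL and TOPCODER_SKILL_PROVIDER_ID
    async function searchTopcoderSkills (query) {
      const token = await getM2MToken()
      const res = await axios.get(`${config.UBAHN_API_URL}/skills`, {
        headers: { Authorization: `Bearer ${token}` },
        params: {
          skillProviderId: config.TOPCODER_SKILL_PROVIDER_ID,
          // pass through only the supported query params
          perPage: query.perPage,
          page: query.page,
          orderBy: query.orderBy
        }
      })
      return res.data
    }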

Permissions

Let's sum up permission rules here.

Endpoint Topcoder User Booking Manager Connect Manager
GET /taas-teams ☑️ Only when member of the project ✅ All ✅ All
GET /taas-teams/:teamId ☑️ Only when member of the project
GET /taas-teams/:teamId/jobs/:jobId ☑️ Only when member of the project
Jobs Topcoder User Booking Manager Connect Manager
GET /jobs ☑️❗ Only if filter by "projectId" and is member of that project
GET /jobs/:id ☑️ Only when member of the project
POST /jobs/ ☑️ Only when member of the project
PUT/PATCH /jobs/:id ☑️ Only when member of the project AND if they created particular job
DELETE /jobs/:id
JobsCandidates Topcoder User Booking Manager Connect Manager
GET /jobsCandidates ☑️❗ Only if filter by "jobId" and member of the project of that Job
GET /jobsCandidates/:id ☑️ Only when member of the project
POST /jobsCandidates/
PUT/PATCH /jobsCandidates/:id ☑️ Only when member of the project
DELETE /jobsCandidates/:id
ResourceBookings Topcoder User Booking Manager Connect Manager
GET /resourceBookings ☑️❗ Only if filter by "projectId" and member of that project
GET /resourceBookings/:id ☑️ Only when member of the project
POST /resourceBookings/
PUT/PATCH /resourceBookings/:id
DELETE /resourceBookings/:id
WorkPeriods Topcoder User Booking Manager Connect Manager
GET /workPeriods ☑️❗ Only if filter by "projectId" and member of that project
GET /workPeriods/:id ☑️ Only when member of the project
POST /workPeriods/
PUT/PATCH /workPeriods/:id
DELETE /workPeriods/:id
WorkPeriodPayments Topcoder User Booking Manager Connect Manager
GET /workPeriodPayments
GET /workPeriodPayments/:id
POST /workPeriodPayments/
PUT/PATCH /workPeriodPayments/:id
DELETE /workPeriodPayments/:id not supported not supported not supported

NOTES

  • We can also perform these operations using M2M token with corresponding scopes, as per #40
  • administrator users should have all the permissions like Booking Manager users.

[$50] Add new field "title" to the "Job" model

We need to add a new field to the "Job" model to keep the job title:

  • type: string, required, max length 64
  • update swagger/postman
  • create a migration script
    • for existing records, populate it with the value from description
  • update all related endpoints to support it
  • update ES processor / es-mapping if needed

Github ticket rules

How to work with git tickets

The basic flow for handling a ticket is as follows:

  1. Assign the ticket to yourself, change the label to "tcx_Assigned", remove the "tcx_OpenForPickup" label. Please only assign tickets to yourself when you are ready to work on it. I don't want tickets assigned to someone and then not have them work on a ticket for 24 hours. The goal here is a quick turnaround for the client. If you can't work on a ticket immediately, leave it for someone else.

  2. Complete the ticket and create a merge request within 24 hours. Please ensure your merge request can be merged automatically and that it's against the latest commit in Git when you create it.

  3. Change the label on the ticket to "tcx_ReadyForReview"

After seeing a ticket marked as "tcx_ReadyForReview", the copilot will review that ticket, usually within 24 hours.

Note that you are expected to keep your changes in-sync with Git - make sure to do a pull before you push changes to make sure there aren't any merge issues.

Accepted fix

If a fix is accepted, a payment ticket will be created on the Topcoder platform within 5-10 minutes of the issue being closed. You should see the payment in your PACTs within 24 hours.

Rejected fix

If a fix is rejected, a comment, and possibly a screenshot, will be added to the ticket explaining why the fix was rejected. The status will be changed to "tcx_Feedback".

If a fix is rejected, that ticket is your priority. You should not assign yourself any more tickets until you complete the required additional fixes!

Payment amounts

Each ticket in GitLab has a dollar value. That is the amount you will be paid when the ticket is completed, merged, and verified by the copilot. Note that there is still a 30 day waiting period, as the payment will be treated as a regular Topcoder challenge payment.

Important Rules:

  • You can assign any unassigned issue to yourself with an "Open for pick up" label (first come first serve)

  • You can only assign ONE AT A TIME. The nature of it being assigned will indicate it is not available to anyone else.

  • You will fix the ticket by committing changes to the master branch.

  • After marking a ticket "tcx_ReadyForReview" you are eligible to accept another. You do NOT need to wait for the copilot to validate your fix.

  • You can do as many tickets as you want, as long as you follow the rules above.

  • If an assigned task is not done in 24 hours, you will need to explain why it is not completed as a comment on the ticket.

  • You can ask questions directly on the GitLab ticket.

ANYONE NOT FOLLOWING THE RULES ABOVE WILL BE WARNED AND POTENTIALLY LOSE THEIR GITLAB ACCESS!

Populate "resourceBooking" response with "job.description"

When we are showing the list of "resourceBooking" in UI, we are also showing the "job.description" and need "job.skills" to calculate the percentage of matching skills:

(screenshot)

As resources might belong to different jobs, getting multiple jobs client-side might cause too many requests. Instead, can we populate "resourceBooking" objects with "job.description" and "job.skills" field values?

There are 2 ways to go:

  1. Just add 2 fields to the response: jobDescription = job.description and jobSkills = job.skills.
  2. Or a more general way: add a query param fields which defines the list of fields to return.
    • by default, all "resourceBooking" fields are returned
    • if the fields query param is defined, return only the listed fields
    • we can request "resourceBooking" fields like fields=id,jobId,status, and we can also request fields of the corresponding job like this: fields=id,jobId,status,job.description,job.skills, which should return a "resourceBooking" object with populated fields:
    {
       id: "id",
       jobId: "jobId",
       status:  "...",
       job: {
         description: "Developer",
         skills: [...]
       }
    }

@nkumar-topcoder what do you think?

[$150] Automatic status updates

  1. When we add/update any ResourceBooking we have to check if the corresponding Job has numPositions === count(ResourceBookings with status === "assigned"). If so, update the status of the Job to assigned (see the sketch after this list).

  2. If we change the Job status to cancelled, then we should change the status of all JobCandidate records related to this job to cancelled (updated: don't change the status of ResourceBookings).
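
A sketch of the first rule (the model names follow the issue; the Sequelize-style calls are assumptions, not code from this repo):

    // run after a ResourceBooking is created or updated
    async function maybeMarkJobAssigned (jobId) {
      const job = await Job.findByPk(jobId) // assumed Sequelize model
      const assignedCount = await ResourceBooking.count({
        where: { jobId, status: 'assigned' }
      })
      if (assignedCount === job.numPositions) {
        await job.update({ status: 'assigned' })
      }
    }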

[$60] TaaS Teams does not return all resources.

  1. Create demo data using the Postman Create Demo Data For Team folder:

    (screenshot)

  2. It creates 6 assigned resources.

  3. But if we load the team, it shows some arbitrary number of resources from 0 to 6.

Example team: https://mfe.topcoder-dev.com/taas/myteams/16771

Only 3 resources are returned:

(screenshot)

Though when calling the resourceBooking endpoint we can see all 6 resources:

(screenshot)

Also, we have to fix Swagger: the GET resourceBookings endpoint supports filtering by projectId, which is not shown in Swagger.
(screenshot)

I think the issue is here:

(screenshot)

[$50] Validate for Valid Skills

Before a Job POST, check the skills from the Job's post body against v5/skills. If any skills don't exist, respond with "not valid skills"; otherwise continue creating the job. Verify for PUT/PATCH also. A sketch follows below.
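
A sketch of the intended check (hypothetical helper names; getSkillById is assumed to wrap GET v5/skills/:id):

    // before creating/updating a Job, verify every skill id exists in v5/skills
    async function validateSkills (skillIds) {
      const results = await Promise.all(
        skillIds.map((id) => getSkillById(id).then(() => true).catch(() => false))
      )
      if (results.includes(false)) {
        // the real implementation would use the API's error helper
        throw new Error('not valid skills')
      }
    }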

[$30] Remove "skillMatched" property from all the responses

Currently, we calculate the skillMatched property server-side and return it in the endpoints:

  • GET /taas-teams/:teamId
  • GET /taas-teams/:teamId/jobs/:jobId

Though I think this property is unnecessary.

Both these endpoints return the list of job skills and the list of user skills, so we can calculate the number of matched skills client-side.
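
For example (illustrative only):

    const jobSkills = ['skill-id-1', 'skill-id-2', 'skill-id-3']  // from the job
    const userSkills = ['skill-id-2', 'skill-id-3', 'skill-id-4'] // from the user
    const skillsMatched = jobSkills.filter((s) => userSkills.includes(s)).length
    console.log(skillsMatched) // => 2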

I suggest removing this from the API (including swagger and postman).

@nkumar-topcoder what do you think?

[$50] Add "Workload" field to v5/jobs

The following field to be added to v5/jobs model and all relevant API definitions:

Field Name: Workload
Field Type: string

This field will function as a picklist in the UI. Available values: Full-time, Fractional

Updates required to API & supporting documentation.

[$100] Support M2M tokens

We need to allow calling TaaS API using M2M tokens.

[$60] [config] M2M token should be allowed to create v5 users

When I used an M2M token for the DEV env locally, in situations when the user doesn't exist in V5, we create it by calling:

  • POST /v5/users
  • POST /v5/users/${userId}/externalProfiles

At the moment the first request returns Forbidden.

We have to make sure that the M2M config we use on DEV and PROD allows creating users in V5.

[$75] Validate for Valid Users

Currently, JobCandidate POST fails if the user is missing on v5/users, with the below message:

{
    "message": "userId: 887xxxxx \"user\" not found"
}

This means the userId (of the person posting the job) was created and is part of the v3 API, but is not available in v5/users.
Hence, follow the below steps to create the v5 user and then create the job candidate. Verify for PUT/PATCH too.

Steps needed:

  1. Create the user on Ubahn
  2. Map the user to the org

See https://github.com/topcoder-platform/u-bahn-bulk-processor/blob/develop/src/services/ProcessorService.js#L86-L92

[$30] Change status when user is booked

(screenshot)

When we update the status of a ResourceBooking to assigned, the corresponding JobCandidate record (with the same userId and jobId) should be updated with the status selected.
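
A sketch of the intended update (a hypothetical hook; the Sequelize-style call is an assumption, not code from this repo):

    // run when a ResourceBooking status changes to "assigned"
    async function syncJobCandidateStatus (booking) {
      if (booking.status === 'assigned') {
        await JobCandidate.update(
          { status: 'selected' },
          { where: { userId: booking.userId, jobId: booking.jobId } }
        )
      }
    }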

[$60] allow normal topcoder members to create jobs

a) Currently, v5/jobs validates for the bookingmanager role for job creation. This should be changed to allow all Topcoder members (the "Topcoder User" role).
b) update POST /v5/jobs to allow all auth user access (including bookingmanager and connectmember as well as normal auth users)
c) update PATCH/PUT/DELETE /v5/jobs/:jobId to allow a normal auth user to access the jobs he/she created. bookingmanager and connectmember can access all jobs as before.
d) update swagger doc

Improve Local Setup Process

Requirements

  1. Create a docker-compose file which runs all dependencies for taas-apis using one command. It should include:

    Notes:

  2. Create a script for populating demo data in the DB and ES.

    • This should also include creating a script to index data from the DB to ES, which would also be very helpful during updates to the data model: we could change the DB schema and just reindex all the data, rather than trying to run update queries in ES (which are not well supported by AWS ES).
  3. Update the README with a clear guide for local deployment. It shouldn't contain unnecessary steps/info, and it should cover all possible issues with local setup. See the tc-project-service setup guide as an example.

References

tc-project-service is a good example where we improved the local setup process.

For reference, here are the tasks we did in Projects API for this:

[UI] Jobcandidates status updates

enhancement:

  1. In the TaaS app, update the jobCandidate status based on the question
    "Interested in this candidate?"
    Yes means "Shortlisted", No means "rejected" status.

(screenshot)

  2. Hide the schedule interview button.

(screenshot)
