
KDD-Cup-2022-Amazon

This repository contains team ETS-Lab's solution for the Amazon KDD Cup 2022. You can find our code submission here or check the solution paper here.

General solution

  • We trained six cross-encoder models for each language, which differ in the pretrained models, training method (e.g., knowledge distillation), and data splitting. In total, six models (2 folds x 3 model variants) are used to produce the initial prediction (4-class probability) for each query-product pair. Using those models alone, the public set score for Task 2 is around 0.816.

  • For Task 1, we used the output 4-class probabilities together with some simple features to train a LightGBM model, calculated the expected gain ($P_e \cdot 1.0 + P_s \cdot 0.1 + P_c \cdot 0.01$), and sorted the query-product list by this gain.
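    As a concrete illustration, the gain computation and sorting can be sketched as follows (a minimal sketch; the ESCI class order in the probability array is an assumption):

    ```python
    import numpy as np

    # 4-class probabilities per query-product pair; column order
    # (exact, substitute, complement, irrelevant) is an assumption
    probs = np.array([
        [0.7, 0.2, 0.05, 0.05],
        [0.1, 0.6, 0.20, 0.10],
        [0.3, 0.3, 0.30, 0.10],
    ])

    # Expected gain: P_e * 1.0 + P_s * 0.1 + P_c * 0.01
    gains = probs[:, 0] * 1.0 + probs[:, 1] * 0.1 + probs[:, 2] * 0.01

    # Rank the products for each query by descending gain
    order = np.argsort(-gains)
    ```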

  • For Task 2 and Task 3, we used LightGBM to fuse those predictions with some important features. The most important features are designed based on the potential data leakage from Task 1 and the behavior of the query-product group:

    • The stats (min, median, and max) of the cross-encoder output probabilities grouped by query_id (0.007+ in Task 2 Public Leaderboard)
    • The percentage of product_id in Task 1 product list grouped by query_id (0.006+ in Task 2 Public Leaderboard)
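The two grouped features above can be sketched with pandas (a minimal sketch; the column names and the Task 1 product set here are hypothetical):

```python
import pandas as pd

# Toy cross-encoder outputs: probability of "exact" per query-product pair
df = pd.DataFrame({
    "query_id": [1, 1, 1, 2, 2],
    "product_id": ["a", "b", "c", "d", "e"],
    "p_exact": [0.9, 0.5, 0.1, 0.8, 0.6],
})

# Feature 1: stats (min, median, max) of the probability grouped by query_id
stats = df.groupby("query_id")["p_exact"].agg(["min", "median", "max"]).reset_index()
df = df.merge(stats, on="query_id", how="left")

# Feature 2: percentage of each query's products that appear in the Task 1 list
task1_products = {"a", "b", "d"}  # hypothetical Task 1 product set
df["in_task1"] = df["product_id"].isin(task1_products)
df["task1_pct"] = df.groupby("query_id")["in_task1"].transform("mean")
```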

Small modifications to the Cross Encoder architecture

  • As the product context has multiple fields (title, brand, and so on), we use neither the [CLS] token nor mean (max) pooling to get the latent vector of the query-product pair. Instead, we concatenate the hidden states of predefined field tokens ([QUERY], [TITLE], [BRAND], etc.). The format is:
    [CLS] [QUERY] <query_content> [SEP] [TITLE] <title_content> [SEP] [BRAND] <brand_content> [SEP] ...
    
    where each [TEXT] is a special token added to the vocabulary and <text_content> is the corresponding text content.
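    A minimal sketch of assembling this input text (the helper function and the fixed field set are illustrative; in practice the field tokens would be registered as special tokens in the tokenizer):

    ```python
    def build_input(query: str, title: str, brand: str, color: str) -> str:
        """Concatenate query and product fields with field-marker tokens."""
        return (
            "[CLS] [QUERY] " + query
            + " [SEP] [TITLE] " + title
            + " [SEP] [BRAND] " + brand
            + " [SEP] [COLOR] " + color
            + " [SEP]"
        )

    text = build_input("wireless mouse", "Ergo Mouse 2", "Acme", "black")
    ```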

Code submission speed-up

  1. Pre-process the product tokens and save them as an HDF5 file.
  2. Convert all models to ONNX with FP16 precision.
  3. Pre-sort the product IDs to reduce the side impact of zero padding in batches.
  4. Use a relatively small mini-batch size during inference.
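Step 3 can be sketched as follows, assuming the sort key is token-sequence length (an assumption): sorting puts similar-length sequences in the same mini-batch, so each batch only pads to its own maximum length instead of the global one.

```python
# Toy pre-tokenized products (lists of token ids)
token_lists = [
    [1, 2, 3, 4, 5, 6],
    [1, 2],
    [1, 2, 3],
    [1],
]

# Sort indices by sequence length, then batch and pad per batch
order = sorted(range(len(token_lists)), key=lambda i: len(token_lists[i]))
batch_size = 2
batches = []
for start in range(0, len(order), batch_size):
    idx = order[start:start + batch_size]
    max_len = max(len(token_lists[i]) for i in idx)  # per-batch max, not global
    batch = [token_lists[i] + [0] * (max_len - len(token_lists[i])) for i in idx]
    batches.append(batch)
```

Without the pre-sort, every batch would pad to the longest sequence overall; here the short sequences share a batch padded only to length 2.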

How to run the code

  • You need to write a config .yaml file (e.g., config/us-bigbird-kd-0.yaml)

  • For training:

    python train.py -c config/us-bigbird-kd-0.yaml
  • For inference:

    python inference.py -c config/us-bigbird-kd-0.yaml -w last -ds test
