mgeo's Introduction

MGeo

MGeo: Multi-Modal Geographic Language Model Pre-Training

Release Note

  • 2023.01.16 Pretrained model and datasets are released on ModelScope
  • 2023.04.05 Paper is accepted by SIGIR 2023. Paper link: https://arxiv.org/abs/2301.04283
  • 2023.05.17 Pretraining and fine-tuning code used in the paper is released
  • 2023.07.06 Pretrained model used to reproduce the paper is released
  • 2023.07.25 dataset.zip download URL is provided
  • 2023.09.14 Fine-tuned model download is provided

Download

Reproduce the results in the paper

Prepare environment

conda create -n mgeo python=3.7
conda activate mgeo
pip install -r requirements.txt

Download resources

cd data
unzip datasets.zip
cd ../prepare_data
sh download_pretrain_models.sh

Generate pretrain data

We only provide samples of the pretrain data. To produce your own pretrain data, you simply need text-geolocation pairs, which can be generated in various ways (e.g., user clicks, POI data, positions of delivery clerks). The geolocation and text only need to be related; they do not have to be exactly precise.

Once you have text-geolocation pairs, follow the steps below to generate the pretrain data. Demo pairs are saved in resources/text_location_pair.demo for testing.
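
For illustration only, here is a minimal Python sketch of loading such pairs. The tab-separated text/longitude/latitude layout is an assumption about text_location_pair.demo rather than a documented format, so adapt the parsing to your own data.

# Minimal sketch of loading text-geolocation pairs.
# ASSUMPTION: one pair per line as "text<TAB>longitude<TAB>latitude".
def load_pairs(path):
    pairs = []
    with open(path, encoding="utf-8") as f:
        for row_id, line in enumerate(f):
            text, lon, lat = line.rstrip("\n").split("\t")
            # The row number doubles as the text ID, as in the demo setup.
            pairs.append({"id": row_id, "text": text,
                          "lon": float(lon), "lat": float(lat)})
    return pairs

if __name__ == "__main__":
    for pair in load_pairs("resources/text_location_pair.demo")[:3]:
        print(pair)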

  • Download a suitable map from OpenStreetMap. For example, the geolocations used in the paper are in Hangzhou, so we download the China map.
  • Import the map and your text-geolocation data into a GIS database such as PostGIS. Each text is assigned an ID; in the demo, the ID is the row number.
  • Use the GIS database to find COVERED and NEARBY relations between map elements and geolocations (a sketch of this step follows the list), and export them to resources/location_in_aoi.demo, resources/location_near_aoi.demo, and resources/location_near_road.demo.
  • Export the needed AOIs and roads to resources/hz_aoi.txt and resources/hz_roads.txt. In the paper's setting, only elements in Hangzhou are included.
  • Generate the pretrain data with: cd prepare_data && python calculate_geographic_context.py. Then replace vocab_size in resources/gis_config.json with the length of geom_ids in calculate_geographic_context.py.
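
The following is a minimal sketch of the GIS step above, assuming the OSM extract and your points have been loaded into PostGIS as tables aoi, road, and location, each with an id and a geom column. The table layout, the 100 m NEARBY threshold, and the tab-separated output format are assumptions rather than the paper's exact setup.

# Sketch: derive COVERED / NEARBY relations with PostGIS (assumed schema).
import psycopg2  # assumes a PostGIS-enabled PostgreSQL database

conn = psycopg2.connect("dbname=mgeo")  # connection string is a placeholder
cur = conn.cursor()

queries = {
    # geolocation COVERED by an AOI polygon
    "resources/location_in_aoi.demo":
        "SELECT l.id, a.id FROM location l JOIN aoi a "
        "ON ST_Contains(a.geom, l.geom)",
    # geolocation NEARBY an AOI (within ~100 m; threshold is an assumption)
    "resources/location_near_aoi.demo":
        "SELECT l.id, a.id FROM location l JOIN aoi a "
        "ON ST_DWithin(l.geom::geography, a.geom::geography, 100)",
    # geolocation NEARBY a road
    "resources/location_near_road.demo":
        "SELECT l.id, r.id FROM location l JOIN road r "
        "ON ST_DWithin(l.geom::geography, r.geom::geography, 100)",
}

for out_path, sql in queries.items():
    cur.execute(sql)
    with open(out_path, "w", encoding="utf-8") as f:
        for loc_id, geom_id in cur.fetchall():
            f.write(f"{loc_id}\t{geom_id}\n")

cur.close()
conn.close()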

Pretrain geographic encoder

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 sh run_gis_encoder_pretrain.sh

Pretrain multimodal interaction module

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 sh run_mm_pretrain.sh

Finetune on rerank task

CUDA_VISIBLE_DEVICES=0,1,2,3 sh run_rerank.sh

Finetune on retrieval task

CUDA_VISIBLE_DEVICES=0,1,2,3 sh run_retrieval.sh

Contact

Please contact [email protected] to obtain the pretrained model or other resources.

Reference

@inproceedings{ding2023multimodal,
  title={MGeo: A Multi-Modal Geographic Pre-Training Method},
  author={Ruixue Ding and Boli Chen and Pengjun Xie and Fei Huang and Xin Li and Qiang Zhang and Yao Xu},
  booktitle={Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  year={2023}
}

mgeo's Issues

Error when downloading the dataset.zip dataset

batch response: This repository is over its data quota. Account responsible for LFS bandwidth should purchase more data packs to restore access.

Hello, I would like to ask the following questions

1. The MGeo paper seems to evaluate two GeoGLUE tasks: ranking corresponds to the query-POI relevance task and retrieval corresponds to the query-POI recall task, but the ModelScope site has not released a model for the query-POI recall task. How can this model be obtained? Can I reproduce the paper's results directly by using the 'damo/mgeo_backbone_chinese_base' backbone together with Retrieval.py?
2. Is the 'damo/mgeo_backbone_chinese_base' backbone trained on nationwide data or on Hangzhou data?

How to improve the inference performance of the MGeo model?

A classification model has been developed using MGeo:

  • Foundation model: mgeo_backbone_chinese_base
  • Number of classes: 270,000
  • Trained model size: about 1.2 GB
  • GPU employed: A800
  • Inference performance: the response time for a single request is roughly 238 ms
  • Request: seeking support for converting the MGeo model to ONNX format, or exploring alternative methods to enhance the model's inference speed (a rough sketch follows below)
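
As a rough, unofficial starting point for the ONNX request above, a BERT-style classification head can usually be exported with torch.onnx.export. The BertForSequenceClassification loader, the placeholder model path, and the input names below are assumptions about how the fine-tuned MGeo classifier is packaged, not a confirmed recipe.

# Sketch: export a BERT-style classifier to ONNX (packaging is assumed).
import torch
from transformers import BertForSequenceClassification, BertTokenizer

model_dir = "path/to/finetuned_mgeo_classifier"  # placeholder path
model = BertForSequenceClassification.from_pretrained(model_dir)
tokenizer = BertTokenizer.from_pretrained(model_dir)
model.config.return_dict = False  # export plain tuples instead of ModelOutput objects
model.eval()

sample = tokenizer("浙江省杭州市余杭区文一西路", return_tensors="pt")

torch.onnx.export(
    model,
    (sample["input_ids"], sample["attention_mask"], sample["token_type_ids"]),
    "mgeo_classifier.onnx",
    input_names=["input_ids", "attention_mask", "token_type_ids"],
    output_names=["logits"],
    dynamic_axes={name: {0: "batch", 1: "seq"}
                  for name in ("input_ids", "attention_mask", "token_type_ids")},
    opset_version=14,
)

The exported graph can then be served with onnxruntime; the actual speedup depends on the hardware and on the very large (270,000-way) classification head, so profiling is still needed.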
