Yahoo India Web Search

Search results

  1. We write: differently, boldly, with attitude. Independent daily newspaper INFORMER. Bulevar Peka Dapčevića 17, Voždovac, 11000 Belgrade, Serbia.

    • NOVO (New)

      Independent Daily Newspaper - Others keep loudly silent. We write:...

    • Vesti (News)

      News - Informer - Independent Daily Newspaper

    • Hronika (Crime)

      Details are being established. WE HAVE LEARNED! Here is who the murdered elderly man (85) was...

    • Sport

      The latest news on sporting events around the world....

    • Mundobasket

      Mundobasket - Informer - Independent Daily Newspaper

    • Planeta (World)

      The latest news on world political and economic...

    • Zabava (Entertainment)

      Snežana Đurišić tells Informer about the guests at her concerts in...

    • Magazin (Magazine)

      Magazin - Informer - Independent Daily Newspaper

    • Overview
    • ProbSparse Attention
    • Requirements
    • Data
    • Reproducibility
    • Usage
    • Results
    • FAQ
    • Citation
    • Contact

    This is the original PyTorch implementation of Informer from the following paper: Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. Special thanks to Jieqi Peng@cookieminions for building this repo.

    🚩News(Mar 27, 2023): We will release Informer V2 soon.

    🚩News(Feb 28, 2023): The Informer extension paper is now online at AIJ.

    🚩News(Mar 25, 2021): We have updated all experiment results along with their hyperparameter settings.

    🚩News(Feb 22, 2021): We provide Colab examples for easy use.

    🚩News(Feb 8, 2021): Our Informer paper has been awarded the AAAI'21 Best Paper [Official][Beihang][Rutgers]! We will continue this line of research and keep updating this repo. Please star this repo and cite our paper if you find our work helpful.

    The self-attention scores form a long-tail distribution, where the "active" queries lie in the "head" of the scores and the "lazy" queries lie in the "tail". We designed ProbSparse Attention to select the "active" queries rather than the "lazy" queries. ProbSparse Attention with the Top-u queries forms a sparse Transformer based on this probability distribution. Why not use the Top-u keys? The self-attention layer's output is a re-representation of its input, formulated as a weighted combination of the values with respect to the scores of the dot-product pairs. The top queries with full keys encourage a complete re-representation of the leading components of the input, which is equivalent to selecting the "head" scores among all dot-product pairs. If we chose the Top-u keys instead, we would only preserve the trivial sum of values associated with the "long tail" scores and wreck the re-representation of the leading components.

    Figure 2. The illustration of ProbSparse Attention.
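
    To make the Top-u selection concrete, here is a minimal, self-contained PyTorch sketch of the idea. It is an illustration only, not the repo's optimized implementation: it computes the full score matrix for clarity, whereas the paper samples keys to estimate the sparsity measurement and reach O(L log L) cost; the choice of u and the mean-of-V fallback for lazy queries follow the description above.

    ```python
    import math
    import torch

    def probsparse_attention_sketch(Q, K, V, u):
        """Q, K, V: (batch, L, d). u: number of 'active' queries to keep."""
        B, L, D = Q.shape
        scores = Q @ K.transpose(-2, -1) / math.sqrt(D)            # (B, L, L)

        # Sparsity measurement per query: max score minus mean score.
        # "Active" queries have peaked (long-tail "head") score distributions.
        M = scores.max(dim=-1).values - scores.mean(dim=-1)        # (B, L)
        top_idx = M.topk(u, dim=-1).indices                        # (B, u)

        # "Lazy" queries fall back to the trivial re-representation: mean of V.
        out = V.mean(dim=1, keepdim=True).expand(B, L, D).clone()  # (B, L, D)

        # "Active" queries get full softmax attention over all keys.
        active_scores = scores.gather(1, top_idx.unsqueeze(-1).expand(B, u, L))
        active_out = torch.softmax(active_scores, dim=-1) @ V      # (B, u, D)
        out.scatter_(1, top_idx.unsqueeze(-1).expand(B, u, D), active_out)
        return out

    # Example: 96-step sequence; keep roughly c * ln(L) active queries.
    Q = torch.randn(2, 96, 64); K = torch.randn(2, 96, 64); V = torch.randn(2, 96, 64)
    print(probsparse_attention_sketch(Q, K, V, u=25).shape)        # torch.Size([2, 96, 64])
    ```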

    • Python 3.6

    • matplotlib == 3.1.1

    • numpy == 1.19.4

    • pandas == 0.25.1

    • scikit_learn == 0.21.3

    • torch == 1.8.0
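
    As a quick, optional sanity check (not part of the repo), you can compare your environment against the pinned versions above; note that scikit_learn is distributed under the pip name scikit-learn.

    ```python
    # Optional environment check against the pinned versions above.
    # importlib.metadata needs Python 3.8+; on 3.6 use the importlib_metadata backport.
    from importlib.metadata import version

    pinned = {"matplotlib": "3.1.1", "numpy": "1.19.4", "pandas": "0.25.1",
              "scikit-learn": "0.21.3", "torch": "1.8.0"}
    for pkg, want in pinned.items():
        have = version(pkg)  # raises PackageNotFoundError if the package is missing
        print(f"{pkg}: installed {have}, pinned {want}" + ("" if have == want else "  <-- mismatch"))
    ```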

    The ETT dataset used in the paper can be downloaded from the ETDataset repo. The required data files should be placed in the data/ETT/ folder. A demo slice of the ETT data is illustrated in the following figure. Note that the input of each dataset is zero-mean normalized in this implementation.

    Figure 3. An example of the ETT data.

    The ECL data and Weather data can be downloaded here.

    •Google Drive
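
    For reference, the zero-mean normalization mentioned above is standard scaling of each variable using statistics from the training split. The following is a minimal sketch of that preprocessing, assuming an ETT-style CSV with a date column plus numeric columns; it is not the repo's exact data loader, and the train-split length shown is only illustrative.

    ```python
    # Minimal sketch of the zero-mean (standard) scaling applied to each dataset.
    # Assumes an ETT-style CSV: a 'date' column followed by numeric columns (..., OT).
    import pandas as pd
    from sklearn.preprocessing import StandardScaler

    df = pd.read_csv("data/ETT/ETTh1.csv")
    values = df.drop(columns=["date"]).values       # (num_steps, num_variables)

    train = values[: 12 * 30 * 24]                  # illustrative train split (12 months, hourly)
    scaler = StandardScaler().fit(train)            # mean/std from the training split only
    normalized = scaler.transform(values)           # zero-mean, unit-variance inputs
    ```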

    To easily reproduce the results, you can follow these steps:

    1. Initialize the docker image using: make init.

    2. Download the datasets using: make dataset.

    3. Run each script in scripts/ using make run_module module="bash ETTh1.sh" (one invocation per script).

    Colab Examples: We provide Google Colab notebooks to help reproduce and customize our repo, which include experiments (train and test), prediction, visualization, and custom data.

    Commands for training and testing the model with ProbSparse self-attention on the ETTh1, ETTh2, and ETTm1 datasets, respectively:
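
    The repo's commands are along the following lines; the flags are parsed by main_informer.py, so treat the exact values as the repo's defaults and check that script if a flag differs in your version.

    ```bash
    # ETTh1 (hourly)
    python -u main_informer.py --model informer --data ETTh1 --attn prob --freq h

    # ETTh2 (hourly)
    python -u main_informer.py --model informer --data ETTh2 --attn prob --freq h

    # ETTm1 (15-minute)
    python -u main_informer.py --model informer --data ETTm1 --attn prob --freq t
    ```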

    For more parameter information, please refer to main_informer.py.

    A more detailed and complete command description for training and testing the model is provided in the repository.

    We have updated the experiment results of all methods due to a change in data scaling. Fortunately, Informer's performance improved. Thanks to @lk1983823 for pointing out the data scaling change in issue 41.

    Besides, the experiment parameters for each dataset are provided in the .sh files in the ./scripts/ directory. You can refer to these parameters for your experiments, and you can also adjust them to obtain better MSE and MAE results or to draw better prediction figures.

    Figure 4. Univariate forecasting results.

    Figure 5. Multivariate forecasting results.

    If you run into a problem like RuntimeError: The size of tensor a (98) must match the size of tensor b (96) at non-singleton dimension 1, check your torch version or modify the Conv1d code of TokenEmbedding in models/embed.py, since the behavior of the circular padding mode in Conv1d changed across torch versions.
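
    As a rough illustration of the fix (the shapes are assumptions: ETTh1 has 7 input channels and d_model defaults to 512; the repo's models/embed.py applies the same idea with a version-dependent padding amount):

    ```python
    # Illustration of the length-preserving TokenEmbedding convolution.
    import torch
    import torch.nn as nn

    # On torch >= 1.5 (including the pinned 1.8.0), circular padding pads both
    # sides by `padding`, so padding=1 keeps kernel_size=3 length-preserving.
    # Older torch versions behaved differently, which is why the repo picks the
    # padding amount based on torch.__version__ in models/embed.py.
    token_conv = nn.Conv1d(in_channels=7, out_channels=512,
                           kernel_size=3, padding=1, padding_mode='circular')

    x = torch.randn(32, 96, 7)                           # (batch, seq_len, c_in)
    y = token_conv(x.permute(0, 2, 1)).transpose(1, 2)   # (batch, seq_len, d_model)
    print(y.shape)  # torch.Size([32, 96, 512]); a 98 here is the mismatch from the error above
    ```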

    If you find this repository useful in your research, please consider citing the following papers:
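
    For the AAAI'21 paper above, a BibTeX entry along these lines should work; the fields are taken from the information on this page, and the citation key is only a placeholder. Check the repository for the canonical entry, which also covers the AIJ extension paper.

    ```bibtex
    @inproceedings{zhou2021informer,
      title     = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting},
      author    = {Zhou, Haoyi and Zhang, Shanghang and Peng, Jieqi and Zhang, Shuai and Li, Jianxin and Xiong, Hui and Zhang, Wancai},
      booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
      year      = {2021}
    }
    ```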

    If you have any questions, feel free to contact Haoyi Zhou through Email (zhouhaoyi1991@gmail.com) or Github issues. Pull requests are highly welcomed!

    Informer is a paper accepted at AAAI 2021 that proposes a novel architecture for efficient time-series forecasting. It uses ProbSparse Attention to select the active queries and reduce the computation cost.

  2. Dec 14, 2020 · Informer is a new method for predicting long sequence time-series, such as electricity consumption planning, using a transformer-based architecture. It improves the efficiency and accuracy of the model by using a sparse self-attention mechanism, a self-attention distilling technique, and a generative-style decoder (a rough sketch of the distilling step follows this entry).

    • Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, Wancai Zhang
    • arXiv:2012.07436 [cs.LG]
    • 2020
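
    The self-attention distilling mentioned in this entry can be pictured as a convolution plus max-pooling step that halves the sequence length between encoder layers. This is a simplified sketch with assumed layer sizes, not the repo's exact ConvLayer:

    ```python
    # Simplified sketch of the self-attention distilling step between encoder
    # layers: Conv1d + ELU + max-pool that halves the sequence length, which is
    # what keeps stacking encoder layers cheap on long inputs.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DistillLayer(nn.Module):
        def __init__(self, d_model):
            super().__init__()
            self.conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
            self.norm = nn.BatchNorm1d(d_model)
            self.pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)

        def forward(self, x):                 # x: (batch, seq_len, d_model)
            x = self.conv(x.transpose(1, 2))  # (batch, d_model, seq_len)
            x = self.pool(F.elu(self.norm(x)))
            return x.transpose(1, 2)          # (batch, seq_len // 2, d_model)

    x = torch.randn(8, 96, 512)
    print(DistillLayer(512)(x).shape)         # torch.Size([8, 48, 512])
    ```
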
  3. Informer - Hugging Face (huggingface.co › docs › transformers)

    To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a ProbSparse self-attention mechanism, which achieves O (L logL) in time complexity and memory usage, and has comparable performance on sequences’ dependency alignment.

  4. Website Informer (website-cloudfront.informer.com)

    Website.informer: complete data lookup & free aggregated report on any domain including whois, visitors, IP & DNS details, competitors, owners, etc.

  5. Mar 10, 2023 · Informer is a variant of the Transformer that improves its efficiency and scalability for long sequence forecasting tasks. Learn how Informer works, its advantages, and how to use it with 🤗 Transformers.

  6. Learn the meaning of informer, a person who gives information in secret, especially to the police, and see synonyms and examples. Find out how to pronounce informer and translate it in different languages.
