Title: |
Learning Better Representation for Tables by Self-Supervised Tasks |
Authors: |
Li, Liang; Ma, Can; Yue, Yinliang; Shou, Linjun; Hu, Dayong
Publication Year: |
2020 |
Collection: |
Computer Science |
Subject Terms: |
Computer Science - Computation and Language |
More Details: |
Table-to-text generation aims to automatically generate natural language text that helps people conveniently obtain the important information in tables. Although neural models for table-to-text generation have achieved remarkable progress, some problems are still overlooked. The first is that the values recorded in many tables are mostly numbers in practice. Existing approaches give these no special treatment and still regard them as ordinary words in natural language text. Secondly, the target texts in the training dataset may contain redundant information or facts that do not exist in the input tables. These may give wrong supervision signals to methods based on content selection and planning or on auxiliary supervision. To solve these problems, we propose two self-supervised tasks, Number Ordering and Significance Ordering, to help learn better table representations. The former works on the column dimension and helps incorporate the size property of numbers into the table representation. The latter acts on the row dimension and helps learn a significance-aware table representation. We test our methods on the widely used ROTOWIRE dataset, which consists of NBA game statistics and related news. The experimental results demonstrate that a model trained together with these two self-supervised tasks can generate text that contains more salient and well-organized facts, even without explicit modeling of content selection and planning. We also achieve state-of-the-art performance on automatic metrics. Comment: This article is written in a messy manner, and some of the experiments are inadequate, which may mislead readers about our work.
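Note: the abstract describes Number Ordering only as a column-wise auxiliary task that injects the size ordering of numeric cell values into the table representation; the exact formulation is not given in this record. The sketch below shows one plausible realization as a pairwise ranking loss over cell representations, assuming a PyTorch training setup. All identifiers (NumberOrderingLoss, score_head, cell_repr) are hypothetical and not taken from the paper.

# Hypothetical sketch of a column-wise "Number Ordering" auxiliary loss.
# Assumption: cells holding larger numbers in the same column should be
# scored higher when their representations are projected to a scalar.
import torch
import torch.nn as nn

class NumberOrderingLoss(nn.Module):
    def __init__(self, hidden_dim: int, margin: float = 0.1):
        super().__init__()
        self.score_head = nn.Linear(hidden_dim, 1)        # cell repr -> scalar score
        self.rank_loss = nn.MarginRankingLoss(margin=margin)

    def forward(self, cell_repr: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
        # cell_repr: (num_cells, hidden_dim) representations of one table column
        # values:    (num_cells,) numeric values recorded in those cells
        scores = self.score_head(cell_repr).squeeze(-1)    # (num_cells,)
        i, j = torch.triu_indices(len(values), len(values), offset=1)
        target = torch.sign(values[i] - values[j])         # +1 / -1 / 0 per pair
        mask = target != 0                                  # skip tied values
        return self.rank_loss(scores[i][mask], scores[j][mask], target[mask])

# Usage: add the auxiliary term to the main generation loss during training.
loss_fn = NumberOrderingLoss(hidden_dim=256)
reprs = torch.randn(8, 256, requires_grad=True)            # 8 cells of one column
vals = torch.tensor([23., 11., 30., 7., 18., 25., 2., 14.])
aux_loss = loss_fn(reprs, vals)

Under the same assumptions, Significance Ordering could be realized analogously on the row dimension, ranking row representations by a significance signal instead of the raw numeric value; the paper's actual objectives may differ.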
Document Type: |
Working Paper |
Access URL: |
http://arxiv.org/abs/2010.07606 |
Accession Number: |
edsarx.2010.07606 |
Database: |
arXiv |