
Commit

Camera-ready version of SLING paper (#76)
ringgaard committed Oct 19, 2017
1 parent f1cc447 commit d945635
Showing 1 changed file with 29 additions and 3 deletions.
32 changes: 29 additions & 3 deletions doc/report/sling.tex
@@ -18,7 +18,7 @@

\restylefloat{figure}

%\aclfinalcopy
\aclfinalcopy
\def\confidential{DRAFT COPY. DO NOT DISTRIBUTE.}

\title{SLING: A framework for frame semantic parsing}
@@ -472,7 +472,9 @@ \section{Experiments}
grid search with a dev corpus was: $\mbox{learning\_rate} = 0.0005$,
$\mbox{optimizer} = \mbox{Adam}$~\cite{kingma2014} with $\beta_1 = 0.01$, $\beta_2 = 0.999$,
$\epsilon = 10^{-5}$, no dropout, gradient clipping at $1.0$, exponential moving
average, no layer normalization, and a training batch size of $8$.
average, no layer normalization, and a training batch size of $8$. We use
$32$-dimensional word embeddings, single-layer LSTMs with $256$ dimensions,
and a $128$-dimensional hidden layer in the feed-forward unit.
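
As an illustration, the following sketch shows how these settings could be
wired up in TensorFlow~1.x; the variable names and the dummy loss are ours
and do not come from the SLING code.

\begin{verbatim}
import tensorflow as tf  # assumes TensorFlow 1.x

# Dummy parameters and loss, only to make the snippet self-contained.
w = tf.Variable(tf.zeros([32]))
loss = tf.reduce_sum(tf.square(w - 1.0))

# Adam with the hyperparameters reported above.
optimizer = tf.train.AdamOptimizer(
    learning_rate=0.0005, beta1=0.01, beta2=0.999, epsilon=1e-5)

# Gradient clipping at a global norm of 1.0 before applying updates.
grads_and_vars = optimizer.compute_gradients(loss)
grads, tvars = zip(*grads_and_vars)
grads, _ = tf.clip_by_global_norm(grads, 1.0)
train_op = optimizer.apply_gradients(zip(grads, tvars))
\end{verbatim}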

We stopped training after $120,000$ steps, where each step corresponds to
processing one training batch, and evaluated on the dev corpus
@@ -529,7 +531,7 @@ \section{Evaluation}
Augmenting this with more features should help improve ROLE quality, as we will
investigate in future work.

Finally, we took the best checkpoint, with SLOT F1 $= 79.95\%$ at $118,000$ steps),
Finally, we took the best checkpoint, with SLOT F1 $= 79.95\%$ at $118,000$ steps,
and evaluated it on the test corpus.
Table~\ref{tab:eval} lists the quality of this model on the test and dev
corpora.
@@ -601,6 +603,8 @@ \section{Evaluation}
\label{tab:eval}
\end{table}

We have tried increasing the LSTM dimensions, hidden layer sizes,
and embedding dimensions, but this did not improve the results significantly.

\section{Parser runtime}
\label{sec:runtime}
@@ -670,6 +674,28 @@ \section{Parser runtime}
{\bf EVOKE(/pb/predicate, 1)} action, it would use a secondary classifier to
predict the predicate type. This could almost double the speed of the parser.
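
To make the idea concrete, here is a purely illustrative sketch of such a
cascade; the names, sizes, and weights are hypothetical and not taken from
the SLING implementation.

\begin{verbatim}
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def predict(hidden, W_action, W_type, action_names):
    # Primary classifier over a handful of delexicalized action kinds.
    action = action_names[int(np.argmax(softmax(hidden @ W_action)))]
    if action == "EVOKE":
        # Secondary classifier over predicate types, evaluated only
        # when an EVOKE action was chosen.
        return "EVOKE", int(np.argmax(softmax(hidden @ W_type)))
    return action, None

# Example: 128-dim hidden activation, 5 action kinds, 4000 predicate types.
rng = np.random.default_rng(0)
print(predict(rng.standard_normal(128),
              rng.standard_normal((128, 5)),
              rng.standard_normal((128, 4000)),
              ["SHIFT", "STOP", "EVOKE", "CONNECT", "ASSIGN"]))
\end{verbatim}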

\section{Conclusion}
\label{sec:conclusion}

We have described SLING, a framework for parsing natural language into
semantic frames. Our experiments show that it is feasible to build a
semantic parser that outputs frame graphs directly without any intervening
symbolic representation, using only the tokens as input.
We illustrated this on the joint task of predicting entity mentions, entity types,
measures, and semantic role labeling.
While the LSTMs and TBRUs are expensive to compute, we can achieve acceptable
parsing speed using the Myelin JIT compiler.
We hope to use SLING for further exploration of semantic parsing in the
future.

\section*{Acknowledgements}
\label{sec:ack}

We would like to thank Google for supporting us in this project and allowing us
to make SLING available to the public. We would also like to thank the
TensorFlow and DRAGNN teams for making their systems publicly available.
Without them, we could not have made SLING open source.

\bibliography{sling}
\bibliographystyle{acl_natbib}

