
Math is not being rendered correctly. #461

Open

guihcs opened this issue Jun 16, 2023 · 1 comment

guihcs commented Jun 16, 2023

I have tried to display the following Markdown text containing LaTeX equations, and some of them are not being rendered:

Efficient training is therefore possible by optimizing random terms of $L$ with stochastic gradient descent. Further improvements come from variance reduction by rewriting $L$ (3) as:

$$\mathbb{E}_q \left [ D_{KL}(q(x_T|x_0) \| p(x_T)) + \sum_{t>1} D_{KL}(q(x_{t-1}|x_t, x_0) \| p_\theta (x_{t-1}|x_t)) - \log p_\theta (x_0|x_1) \right ]$$

(See Appendix A for details. The labels on the terms are used in Section 3.) Equation (5) uses KL divergence to directly compare $p_\theta(x_{t-1}|x_t)$ against forward process posteriors, which are tractable when conditioned on $x_0$:

$$q(x_{t-1}|x_t, x_0) = \mathcal{N}(x_{t-1};  \bar{\mu}_t(x_t, x_0), \bar{\beta}_t\textbf{I}),$$ 

$$\text{where} \hspace{5mm} \bar{\mu}_t(x_t, x_0) := \frac{\sqrt{\bar{\alpha}_{t-1}}\beta_t}{1 - \bar{\alpha}_t} x_0 + \frac{\sqrt{\alpha_t}(1 - \bar{\alpha}_{t-1})}{ 1 - \bar{\alpha}_t}  x_t \hspace{5mm} \text{and} \hspace{5mm}  \bar{\beta}_t := \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t} \beta_t$$

Here is a screenshot of how it's being rendered in my application:

[screenshot of the rendered output]

Is this a bug, or am I missing some configuration?

BlvckBytes commented

Hey there, @guihcs!

Are you still facing the problem, or have you found a way to resolve it in the meantime? I've found the underlying cause and could most definitely whip up a band-aid solution, but I'm honestly baffled that everybody else managed not to run into this exact same unexpected behaviour.

When adding a markdown component to my page, as follows

<markdown
  katex
  [src]="'...'"
>
</markdown>

the calls seem to be as follows (paraphrased):

MarkdownComponent -> constructor -> this.handleSrc() -> this.render() -> this.parse() -> markdownService.render()

Where parse() trims, decodes HTML (if applicable), parses emojis, parses markdown, and sanitizes (if applicable). markdownService.render() then renders Clipboard, KaTeX and Mermaid. Nowhere in this chain could I locate any TeX marker detection (i.e., the $ symbols), so markdown did what one would expect markdown to do. Here's a simple example:

$V = \frac{1}{3}*\pi*h*r_1^2*(3-3*\frac{h}{h + h_s} + (\frac{h}{h + h_s})^2)$

becomes

<p>$V = \frac{1}{3}<em>\pi</em>h<em>r_1^2</em>(3-3*\frac{h}{h + h_s} + (\frac{h}{h + h_s})^2)$</p>

facepalm
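This is easy to reproduce outside of Angular, by the way. A minimal sketch, assuming marked v4+ (the parser ngx-markdown wraps), with the expected output taken from what I observed above:

import { marked } from 'marked';

// marked has no notion of $ math delimiters, so the *...* runs inside the
// formula are treated as emphasis and turned into <em> tags.
const input =
  '$V = \\frac{1}{3}*\\pi*h*r_1^2*(3-3*\\frac{h}{h + h_s} + (\\frac{h}{h + h_s})^2)$';

console.log(marked.parse(input));
// -> <p>$V = \frac{1}{3}<em>\pi</em>h<em>r_1^2</em>(3-3*\frac{h}{h + h_s} + (\frac{h}{h + h_s})^2)$</p>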

KaTeX is rendered by calling window.renderMathInElement() on the root element, which then searches recursively for any math that's to be rendered. Of course, it won't render the mess from above, and the expression stays visible as-is.
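For reference, the auto-render call looks roughly like this (a sketch; I'm assuming KaTeX's auto-render extension is loaded globally, and note that the single-$ delimiter has to be configured explicitly since auto-render doesn't include it by default):

// KaTeX auto-render scans the element's text nodes for the configured
// delimiters; by the time it runs, the $-spans have already been torn
// apart by <em> tags, so it finds nothing to typeset.
declare const renderMathInElement: (el: HTMLElement, options?: object) => void;

renderMathInElement(document.body, {
  delimiters: [
    { left: '$$', right: '$$', display: true },
    { left: '$', right: '$', display: false },
  ],
});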

The same holds true for your example:

$$\mathbb{E}_q \left [ D_{KL}(q(x_T|x_0) \| p(x_T)) + \sum_{t>1} D_{KL}(q(x_{t-1}|x_t, x_0) \| p_\theta (x_{t-1}|x_t)) - \log p_\theta (x_0|x_1) \right ]$$

becomes

<p>$$\mathbb{E}<em>q \left [ D</em>{KL}(q(x_T|x_0) | p(x_T)) + \sum_{t&gt;1} D_{KL}(q(x_{t-1}|x_t, x_0) | p_\theta (x_{t-1}|x_t)) - \log p_\theta (x_0|x_1) \right ]$$</p>

One trick to work around this is to pad symbols which could be interpreted as formatting by the markdown renderer (*, _) with spaces, but that's unacceptable in my opinion...
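If a band-aid is acceptable, a cleaner one than space-padding would be to shield the math spans from the markdown parser and restore them afterwards, so the $'s survive for auto-render to find. A rough sketch; the helper names are mine, nothing here is ngx-markdown API:

// Hypothetical pre/post-processing around the markdown parse step.
function protectMath(src: string): { text: string; spans: string[] } {
  const spans: string[] = [];
  // Swap $$...$$ and $...$ spans for placeholders the markdown parser ignores.
  const text = src.replace(/\$\$[\s\S]+?\$\$|\$[^$\n]+\$/g, (match) => {
    spans.push(match);
    return `@@MATH${spans.length - 1}@@`;
  });
  return { text, spans };
}

function restoreMath(html: string, spans: string[]): string {
  // Put the untouched TeX back so renderMathInElement() can pick it up.
  return html.replace(/@@MATH(\d+)@@/g, (_, i) => spans[Number(i)]);
}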
