LaTeX practical experience: how to write algorithms


There are two main ways to present algorithms in LaTeX:

  • Using the macro package algorithm2e, which provides many options to configure.

  • Using the macro packages algorithm and algorithmic; many people prefer this pair, and the algorithm descriptions in Zhou Zhihua's book <<Machine Learning>> appear to use these two packages.

Use the macro package algorithm2e:

\usepackage[linesnumbered,boxed,ruled,commentsnumbered]{algorithm2e} % algorithm package; note the optional settings you need
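Each option in the brackets changes the rendered layout: `linesnumbered` numbers every line, `boxed` and `ruled` frame the algorithm, and `commentsnumbered` numbers comment lines as well. As a minimal sketch (the factorial example here is illustrative, not from the original article), a compilable document using these options might look like:

```latex
\documentclass{article}
\usepackage[linesnumbered,boxed,ruled,commentsnumbered]{algorithm2e}

\begin{document}
\begin{algorithm}
    \KwIn{an integer $n \geq 0$}
    \KwOut{$n!$}
    $r \leftarrow 1$\;
    \For{$i \leftarrow 2$ \KwTo $n$}{
        $r \leftarrow r \times i$\; \tcp{accumulate the product}
    }
    \Return{$r$}\;
    \caption{Factorial}
\end{algorithm}
\end{document}
```

Swapping `boxed` for `ruled` (or dropping both) is the quickest way to compare the frame styles.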


\IncMargin{1em} % keeps line numbers from protruding outwards
\begin{algorithm}

    \SetAlgoNoLine % Do not use vertical lines in the algorithm
    \SetKwInOut{Input}{\textbf{input}}\SetKwInOut{Output}{\textbf{output}} % replace keywords

    \Input{
        \\
        The observed user-item pair set $S$\;\\
        The feature matrix of items $F$\;\\
        The content features entities $A := \{A^u,A^v\}$\;\\}
    \Output{
        \\
        $\Theta \ := \{Y^u,Y^v\}$\;\\
        $W := \{W^u,W^v\}$\;\\}
    \BlankLine

    Initialize the model parameters $\Theta$ and $W$ uniformly from $\left(-\sqrt{6}/{k},\sqrt{6}/{k}\right)$\; % \; marks the end of a line
    Standardize $\Theta$\;
    Initialize the popularity of categories $\rho$ randomly\;
    \Repeat
        {\text{convergence}}
        {Draw a triple $\left(m,i,j\right)$ with Algorithm \ref{al2}\;
            \For {each latent vector $\theta \in \Theta$}{
                $\theta \leftarrow \theta - \eta\frac{\partial L}{\partial \theta}$
            }
            \For {each $W^e \in W$}{
                Update $W^e$ with the rule defined in Eq.\ref{equ:W}\;
            }   
        }
    \caption{Learning parameters for BPR\label{al3}}
\end{algorithm}
\DecMargin{1em}
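Because the preamble above loads `commentsnumbered`, comments can be attached to algorithm lines. A short sketch of the comment commands (assuming algorithm2e is loaded as above; the variable names are illustrative):

```latex
\begin{algorithm}
    \SetKwComment{Comment}{$\triangleright$\ }{}
    $x \leftarrow 0$\; \tcp{C++-style comment at the end of a line}
    $y \leftarrow x + 1$\; \tcc{C-style block comment}
    \Comment{a custom comment keyword defined with \SetKwComment}
    \caption{Comment styles in algorithm2e}
\end{algorithm}
```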

Use the macro packages algorithm and algorithmic:

\usepackage{algorithm, algorithmic}
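The algorithmic package uses uppercase commands (`\STATE`, `\IF`, `\FOR`, ...), and the optional `[1]` argument of the `algorithmic` environment numbers every line. A minimal compilable sketch (the max-of-a-list example is illustrative, not from the original article):

```latex
\documentclass{article}
\usepackage{algorithm}
\usepackage{algorithmic}

\begin{document}
\begin{algorithm}
    \caption{Compute the maximum of a list}
    \begin{algorithmic}[1] % [1] numbers every line
        \REQUIRE a nonempty list $a_1, \dots, a_n$
        \ENSURE $\max_i a_i$
        \STATE $m \leftarrow a_1$
        \FOR{$i = 2$ \TO $n$}
            \IF{$a_i > m$}
                \STATE $m \leftarrow a_i$
            \ENDIF
        \ENDFOR
        \RETURN $m$
    \end{algorithmic}
\end{algorithm}
\end{document}
```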


\begin{algorithm}
	\renewcommand{\algorithmicrequire}{\textbf{Input:}}
	\renewcommand{\algorithmicensure}{\textbf{Output:}}
	\caption{Bayesian Personalized Ranking Based Latent Feature Embedding Model}
	\label{alg:1}
	\begin{algorithmic}[1]
		\REQUIRE latent dimension $K$, $G$, target predicate $p$
		\ENSURE $U^{p}$, $V^{p}$, $b^{p}$
		\STATE Given target predicate $p$ and entire knowledge graph $G$, construct its bipartite subgraph, $G_{p}$
		\STATE $m$ = number of subject entities in $G_{p}$
		\STATE $n$ = number of object entities in $G_{p}$
		\STATE Generate a set of training samples $D_{p} = \{(s_p, o^{+}_{p}, o^{-}_{p})\}$ using uniform sampling technique
		\STATE Initialize $U^{p}$ as size $m \times K$ matrix with $0$ mean and standard deviation $0.1$
		\STATE Initialize $V^{p}$ as size $n \times K$ matrix with $0$ mean and standard deviation $0.1$
		\STATE Initialize $b^{p}$ as size $n \times 1$ column vector with $0$ mean and standard deviation $0.1$
		\FORALL{$(s_p, o^{+}_{p}, o^{-}_{p}) \in D_{p}$}
		\STATE Update $U_{s}^{p}$ based on Equation~\ref{eq:sgd1}
		\STATE Update $V_{o^{+}}^{p}$ based on Equation~\ref{eq:sgd2}
		\STATE Update $V_{o^{-}}^{p}$ based on Equation~\ref{eq:sgd3}
		\STATE Update $b_{o^{+}}^{p}$ based on Equation~\ref{eq:sgd4}
		\STATE Update $b_{o^{-}}^{p}$ based on Equation~\ref{eq:sgd5}
		\ENDFOR
		\STATE \textbf{return} $U^{p}$, $V^{p}$, $b^{p}$
	\end{algorithmic}  
\end{algorithm}
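Once labeled, the algorithm can be cross-referenced in the running text with `\ref`; note that `algorithm` is a float, so the usual placement specifiers apply. A brief sketch:

```latex
% In the running text:
As shown in Algorithm~\ref{alg:1}, the parameters are updated
by stochastic gradient descent.

% Placement can be suggested when opening the environment:
\begin{algorithm}[htbp]
    % ... algorithm body ...
\end{algorithm}
```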
This article is reproduced from: https://blog.csdn.net/simple_the_best/article/details/52710794
