[Experience] Getting into the habit of reading CHANGELOGs and READMEs to learn how to write version update notes for your code

Reading how other projects write the version update notes and README files that our own code needs

Contents


Preface

1. Reading the hdck CHANGELOG

2. Reading the oprofile ChangeLog

3. Reading the fio README

4. Reading the oprofile README



Preface

In the course of writing code we inevitably have to write version change files and README files (small manuals).

If we write them purely according to our own habits, they can end up incomplete in places, below standard, or simply unpleasant to read.

To find a suitable and widely accepted way of writing them, we need to read and learn from how other projects write these documents.

Let's take a look.

1. Reading the hdck CHANGELOG

hdck's version change file is named CHANGELOG, that is, a log of changes.

hdck is a hard disk testing tool that mainly compiles and runs on the Linux operating system. It can check whether each block is good or bad and gives a final assessment of the hard disk; in my opinion this piece of software is great.

Let's look at what's inside:

CHANGELOG

v 0.5.0 - 2011-02-06
--------------------

  * add support for ATA VERIFY command to allow testing drives over USB, 
    FireWire and behind SAS backplanes
  * fix crash with specific sized drives

v 0.4.0 - 2010-10-07
--------------------

  * change to quantile based detection
  * fix a bug in with min-reads == max-reads
  * tweaks to quick mode -- confirm only the 64 worst
  * fix bug with long options --log, --read-sectors and --bad-sectors not 
    working
  * reduced CPU utilisation

v 0.3.0 - 2010-08-25
--------------------

  * reduce number of global variables
  * quick mode
  * more verbose on-line status reporting
  * printing 10 worst (slowest) blocks in result
    std dev, min, max, avg, truncated avg, no of samples

v 0.2.6 - 2010-08-22
--------------------

  * fix printing of Individual block statistics

v 0.2.5 - 2010-08-22
--------------------

  * compare block times to fractions and multiples of rotational delay, 
    not fixed values
  * print simple diagnosis (disk condition)
  * printing status message on multiple lines and on stdout, not stderr
  * log output

v 0.2.4 - 2010-08-11
--------------------

 * make output more human readable
 * fix the problem with maximum number of block reads being 
   min-reads + max-reads

v 0.2.3 - 2010-08-11
--------------------

 * reduce number of unnecessery re-reads
 * add licences to files

v 0.2.2 - 2010-04-10
--------------------

 * bugfix -- fix overreading re-read blocks
 * make the program 64 bit compatible (not tested!)

v 0.2.1 - 2010-04-09
--------------------

 * bugfix -- not rereading blocks that need rereading
 * code clean up -- removed dead code

v 0.2 - 2010-04-08
------------------

 * reading and writing uncertian block ranges to file
 * writing detailed statistics to file
 * `background' mode -- lower priority and less confidence for the results
 * works with single whole disk read, rereads only sectors that are much
   slower than the neighbours
 * uses disk RPM for detecting blocks that are consistently reread by the disk
 * on rereads reads disk cache worth of sectors before the scoring blocks to
   improve quality of the results

We can see that the version entries are added from the bottom up: the newest version and its change notes are at the top, and the oldest are at the bottom. When you open the file you can immediately see what the latest version is, and the date next to each version tells you when it was written. The version and date are separated from the entries below by a line of dashes followed by a blank line, which clearly distinguishes the version header from the area describing the changes. Each change entry is indented by a space or two and starts with an asterisk, so a reader sees at a glance that the changes are a short list of separate items; besides asterisks we could also number the items to make the individual changes even easier to count. When writing the change entries, try to keep each item simple and clear, so that anyone looking at the document can understand what changed.
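
To make this convention concrete, here is a minimal sketch of an entry written in the same style; the version number, date and change items below are invented purely for illustration:

v 1.2.0 - 2020-03-15
--------------------

  * add a --verbose option that prints per-block timing
  * fix a crash when the input file is empty

Newer entries would simply be added above older ones, keeping the latest version at the top of the file.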

 

2. Reading the oprofile ChangeLog

Next, let's look at oprofile.

oprofile is a performance profiling tool on Linux. Using event sampling based on the CPU's hardware performance counters, it analyses a program's performance down to the code level and finds the program's performance hotspots.

 

oprofile's change log is written like this.

Its records are split into several files, one per year, which is a nice arrangement. They are named like this:

ChangeLog-2002, ChangeLog-2003 ...... ChangeLog-2011

Let's look at the ChangeLog-2010 file:

2010-12-16  John Villalovos <[email protected]>

	* events/Makefile.am:
	* libop/op_cpu_type.c:
	* libop/op_cpu_type.h:
	* libop/op_hw_specific.h:
	* libop/op_events.c:
	* utils/ophelp.c:
	* events/i386/westmere/events (new):
	* events/i386/westmere/unit_masks (new): Add support for Intel
	  Westmere micro-architecture processors

2010-12-15  Will Cohen  <[email protected]>

	* libop/op_cpu_type.c:
	* libop/op_cpu_type.h:
	* libop/op_hw_specific.h: User-space identification of processors
	  that support Intel architectural events

2010-12-14  Suravee Suthikulpanit <[email protected]>

	* oprofile/daemon/opd_ibs_trans.c: Fix non-x86 build issue 
	  due to cpuid instruction

2010-10-15  Roland Grunberg  <[email protected]>

	* libop/op_xml_events.c:
	* libop/op_xml_out.c:
	* libop/op_xml_out.h:
	* doc/ophelp.xsd: Add unit mask type attribute for an event in
	  the ophelp schema

2010-10-15  Maynard Johnson  <[email protected]>

	* doc/ophelp.xsd:
	* libop/op_xml_events.c: Fix schema validation issues and error in
	  xml generation

2010-10-13  Maynard Johnson  <[email protected]>

	* libabi/opimport.cpp: Fix uninitialized variable warning when
	  building with gcc 4.4

2010-10-13  Maynard Johnson  <[email protected]>

	* events/mips/Makefile.am: Correction to 8/26 MIPS patch
	  to add events and unit_masks to makefile

2010-10-07  William Cohen  <[email protected]>

	* events/i386/arch_perfmon/events: Correct filter values.

2010-08-02  Maynard Johnson  <[email protected]>

	* utils/opcontrol:
	* libpp/profile_spec.cpp:
	* pp/oparchive.cpp:  Moved the copying of stats to opcontrol::do_dump_data
	  and removed the unnecessary and confusing message that indicated
	  when overflow stats were not available.

2010-06-11  William Cohen <[email protected]>

        * libregex/stl.pat.in: Avoid machine specific configuration.

2010-05-18  Daniel Hansel  <[email protected]>

	* doc/oprofile.xml: Document that only kernel versions 2.6.13 or
	  later provide support for anonymous mapped regions

2010-04-13  Maynard Johnson  <[email protected]>

	* libutil++/bfd_support.cpp: Fix up translate_debuginfo_syms
	  so it doesn't rely on section index values being the same
	  between real image and debuginfo file (to resolve problem
	  reported by Will Cohen on Fedora 12)

2010-03-25  Oliver Schneider  <[email protected]>

	* libpp/parse_filename.cpp:  Catch case where a basic_string::erase
	  error can occur in opreport when parsing an invalid sample file name

2010-03-25  Maynard Johnson  <[email protected]>

	* events/mips/loongson2/events: New File
	* events/mips/loongson2/unit_masks: New File
	   I neglected to do 'cvs add' for these new two new files
	   back on Nov 25, 2009 when I committed the initial
	   loongson2 support.  This change corrects that error.

2010-03-01  Gabor Loki  <[email protected]>

	* daemon/opd_pipe.c: Fix memory leak
	* utils/opcontrol: Fix messages sending method to opd_pipe

2010-01-20  Maynard Johnson  <[email protected]>

	* m4/qt.m4: Fix qt lib check so it works on base 64-bit system


See ChangeLog-2009 for earlier changelogs.

We can see that, like the first example, the entries are written from the bottom up, and at the very bottom there is a pointer to ChangeLog-2009 for earlier changes. This arrangement is very good, because there is a change log for every year. The entry format differs from the first example: there is no version number; instead, each entry records the date of the change, the author's name and e-mail address, the files that were changed, and a description of the change. Because the e-mail address is recorded, you can see clearly who changed which code, and if a problem turns up you can contact the person who wrote the code by mail. This style is very good and deserves to be promoted. Each changed file is listed on its own line, indented and prefixed with an asterisk, with the description of the change following the last file in the list.
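
Again, a minimal sketch of an entry in this style; the date, name, e-mail address, file names and description below are invented purely for illustration:

2020-03-15  Jane Doe  <[email protected]>

	* src/parser.c:
	* src/parser.h: Fix an off-by-one error when reading the last record

One entry per change, newest at the top, with each touched file on its own line and the description after the last file.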

 

 

Having looked at two change documents, let's now see how other projects write their README files.

3. Reading the fio README

fio is a mainstream Linux tool for testing hard disk I/O.

 

fio's README file:

Overview and history
--------------------

Fio was originally written to save me the hassle of writing special test case
programs when I wanted to test a specific workload, either for performance
reasons or to find/reproduce a bug. The process of writing such a test app can
be tiresome, especially if you have to do it often.  Hence I needed a tool that
would be able to simulate a given I/O workload without resorting to writing a
tailored test case again and again.

A test work load is difficult to define, though. There can be any number of
processes or threads involved, and they can each be using their own way of
generating I/O. You could have someone dirtying large amounts of memory in an
memory mapped file, or maybe several threads issuing reads using asynchronous
I/O. fio needed to be flexible enough to simulate both of these cases, and many
more.

Fio spawns a number of threads or processes doing a particular type of I/O
action as specified by the user. fio takes a number of global parameters, each
inherited by the thread unless otherwise parameters given to them overriding
that setting is given.  The typical use of fio is to write a job file matching
the I/O load one wants to simulate.


Source
------

Fio resides in a git repo, the canonical place is:

	git://git.kernel.dk/fio.git

When inside a corporate firewall, git:// URL sometimes does not work.
If git:// does not work, use the http protocol instead:

	http://git.kernel.dk/fio.git

Snapshots are frequently generated and :file:`fio-git-*.tar.gz` include the git
meta data as well. Other tarballs are archives of official fio releases.
Snapshots can download from:

	http://brick.kernel.dk/snaps/

There are also two official mirrors. Both of these are automatically synced with
the main repository, when changes are pushed. If the main repo is down for some
reason, either one of these is safe to use as a backup:

	git://git.kernel.org/pub/scm/linux/kernel/git/axboe/fio.git

	https://git.kernel.org/pub/scm/linux/kernel/git/axboe/fio.git

or

	git://github.com/axboe/fio.git

	https://github.com/axboe/fio.git


Mailing list
------------

The fio project mailing list is meant for anything related to fio including
general discussion, bug reporting, questions, and development. For bug reporting,
see REPORTING-BUGS.

An automated mail detailing recent commits is automatically sent to the list at
most daily. The list address is [email protected], subscribe by sending an
email to [email protected] with

	subscribe fio

in the body of the email. Archives can be found here:

	http://www.spinics.net/lists/fio/

and archives for the old list can be found here:

	http://maillist.kernel.dk/fio-devel/


Author
------

Fio was written by Jens Axboe <[email protected]> to enable flexible testing of
the Linux I/O subsystem and schedulers. He got tired of writing specific test
applications to simulate a given workload, and found that the existing I/O
benchmark/test tools out there weren't flexible enough to do what he wanted.

Jens Axboe <[email protected]> 20060905


Binary packages
---------------

Debian:
	Starting with Debian "Squeeze", fio packages are part of the official
	Debian repository. http://packages.debian.org/search?keywords=fio .

Ubuntu:
	Starting with Ubuntu 10.04 LTS (aka "Lucid Lynx"), fio packages are part
	of the Ubuntu "universe" repository.
	http://packages.ubuntu.com/search?keywords=fio .

Red Hat, Fedora, CentOS & Co:
	Starting with Fedora 9/Extra Packages for Enterprise Linux 4, fio
	packages are part of the Fedora/EPEL repositories.
	https://apps.fedoraproject.org/packages/fio .

Mandriva:
	Mandriva has integrated fio into their package repository, so installing
	on that distro should be as easy as typing ``urpmi fio``.

Arch Linux:
        An Arch Linux package is provided under the Community sub-repository:
        https://www.archlinux.org/packages/?sort=&q=fio

Solaris:
	Packages for Solaris are available from OpenCSW. Install their pkgutil
	tool (http://www.opencsw.org/get-it/pkgutil/) and then install fio via
	``pkgutil -i fio``.

Windows:
	Rebecca Cran <[email protected]> has fio packages for Windows at
	https://bsdio.com/fio/ . The latest builds for Windows can also
	be grabbed from https://ci.appveyor.com/project/axboe/fio by clicking
	the latest x86 or x64 build, then selecting the ARTIFACTS tab.

BSDs:
	Packages for BSDs may be available from their binary package repositories.
	Look for a package "fio" using their binary package managers.


Building
--------

Just type::

 $ ./configure
 $ make
 $ make install

Note that GNU make is required. On BSDs it's available from devel/gmake within
ports directory; on Solaris it's in the SUNWgmake package.  On platforms where
GNU make isn't the default, type ``gmake`` instead of ``make``.

Configure will print the enabled options. Note that on Linux based platforms,
the libaio development packages must be installed to use the libaio
engine. Depending on distro, it is usually called libaio-devel or libaio-dev.

For gfio, gtk 2.18 (or newer), associated glib threads, and cairo are required
to be installed.  gfio isn't built automatically and can be enabled with a
``--enable-gfio`` option to configure.

To build fio with a cross-compiler::

 $ make clean
 $ make CROSS_COMPILE=/path/to/toolchain/prefix

Configure will attempt to determine the target platform automatically.

It's possible to build fio for ESX as well, use the ``--esx`` switch to
configure.


Windows
~~~~~~~

On Windows, Cygwin (https://www.cygwin.com/) is required in order to build
fio. To create an MSI installer package install WiX from
https://wixtoolset.org and run :file:`dobuild.cmd` from the :file:`os/windows`
directory.

How to compile fio on 64-bit Windows:

 1. Install Cygwin (http://www.cygwin.com/). Install **make** and all
    packages starting with **mingw64-x86_64**. Ensure
    **mingw64-x86_64-zlib** are installed if you wish
    to enable fio's log compression functionality.
 2. Open the Cygwin Terminal.
 3. Go to the fio directory (source files).
 4. Run ``make clean && make -j``.

To build fio for 32-bit Windows, ensure the -i686 versions of the previously
mentioned -x86_64 packages are installed and run ``./configure
--build-32bit-win`` before ``make``. To build an fio that supports versions of
Windows below Windows 7/Windows Server 2008 R2 also add ``--target-win-ver=xp``
to the end of the configure line that you run before doing ``make``.

It's recommended that once built or installed, fio be run in a Command Prompt or
other 'native' console such as console2, since there are known to be display and
signal issues when running it under a Cygwin shell (see
https://github.com/mintty/mintty/issues/56 and
https://github.com/mintty/mintty/wiki/Tips#inputoutput-interaction-with-alien-programs
for details).


Documentation
~~~~~~~~~~~~~

Fio uses Sphinx_ to generate documentation from the reStructuredText_ files.
To build HTML formatted documentation run ``make -C doc html`` and direct your
browser to :file:`./doc/output/html/index.html`.  To build manual page run
``make -C doc man`` and then ``man doc/output/man/fio.1``.  To see what other
output formats are supported run ``make -C doc help``.

.. _reStructuredText: http://www.sphinx-doc.org/rest.html
.. _Sphinx: http://www.sphinx-doc.org


Platforms
---------

Fio works on (at least) Linux, Solaris, AIX, HP-UX, OSX, NetBSD, OpenBSD,
Windows, FreeBSD, and DragonFly. Some features and/or options may only be
available on some of the platforms, typically because those features only apply
to that platform (like the solarisaio engine, or the splice engine on Linux).

Some features are not available on FreeBSD/Solaris even if they could be
implemented, I'd be happy to take patches for that. An example of that is disk
utility statistics and (I think) huge page support, support for that does exist
in FreeBSD/Solaris.

Fio uses pthread mutexes for signalling and locking and some platforms do not
support process shared pthread mutexes. As a result, on such platforms only
threads are supported. This could be fixed with sysv ipc locking or other
locking alternatives.

Other \*BSD platforms are untested, but fio should work there almost out of the
box. Since I don't do test runs or even compiles on those platforms, your
mileage may vary. Sending me patches for other platforms is greatly
appreciated. There's a lot of value in having the same test/benchmark tool
available on all platforms.

Note that POSIX aio is not enabled by default on AIX. Messages like these::

    Symbol resolution failed for /usr/lib/libc.a(posix_aio.o) because:
        Symbol _posix_kaio_rdwr (number 2) is not exported from dependent module /unix.

indicate one needs to enable POSIX aio. Run the following commands as root::

    # lsdev -C -l posix_aio0
        posix_aio0 Defined  Posix Asynchronous I/O
    # cfgmgr -l posix_aio0
    # lsdev -C -l posix_aio0
        posix_aio0 Available  Posix Asynchronous I/O

POSIX aio should work now. To make the change permanent::

    # chdev -l posix_aio0 -P -a autoconfig='available'
        posix_aio0 changed


Running fio
-----------

Running fio is normally the easiest part - you just give it the job file
(or job files) as parameters::

	$ fio [options] [jobfile] ...

and it will start doing what the *jobfile* tells it to do. You can give more
than one job file on the command line, fio will serialize the running of those
files. Internally that is the same as using the :option:`stonewall` parameter
described in the parameter section.

If the job file contains only one job, you may as well just give the parameters
on the command line. The command line parameters are identical to the job
parameters, with a few extra that control global parameters.  For example, for
the job file parameter :option:`iodepth=2 <iodepth>`, the mirror command line
option would be :option:`--iodepth 2 <iodepth>` or :option:`--iodepth=2
<iodepth>`. You can also use the command line for giving more than one job
entry. For each :option:`--name <name>` option that fio sees, it will start a
new job with that name.  Command line entries following a
:option:`--name <name>` entry will apply to that job, until there are no more
entries or a new :option:`--name <name>` entry is seen. This is similar to the
job file options, where each option applies to the current job until a new []
job entry is seen.

fio does not need to run as root, except if the files or devices specified in
the job section requires that. Some other options may also be restricted, such
as memory locking, I/O scheduler switching, and decreasing the nice value.

If *jobfile* is specified as ``-``, the job file will be read from standard
input.

From this README we can see that it is divided into ten sections: 1. Overview and history; 2. Source; 3. Mailing list; 4. Author; 5. Binary packages; 6. Building; 7. Windows; 8. Documentation; 9. Platforms; 10. Running fio.

fio's README is written in considerable detail.
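
If we wanted to organise a README of our own along the same lines, a skeleton reusing some of the same section headings and the dash-underline style (the placeholder descriptions in parentheses are ours, not fio's) might start like this:

Overview and history
--------------------

(what the project is and why it was written)

Source
------

(where the repository lives and how to get it)

Building
--------

(how to configure, compile and install)

Running
-------

(how to invoke the tool, with a basic example)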

 

4. Reading the oprofile README

Let's look at the oprofile README file:

OProfile provides a low-overhead profiler (operf) capable of both
single-application profiling and system-wide profiling.  There is
also a simple event counting tool (ocount).

You can find some documentation in the doc/ directory.

Please visit the oprofile website at : http://oprofile.sf.net/

oprofile was originally written by John Levon <[email protected]>
and Philippe Elie <[email protected]>.  The operf and ocount
tools were developed by Maynard Johnson <[email protected]>, who
is the current maintainer.

Dave Jones <[email protected]> provided bug fixes and support for
the AMD Athlon, and AMD Hammer families of CPUs. [email protected]
<[email protected]> contributed various AMD-related patches,
including Instruction-Based-Sampling support (available only in
pre-1.0 releases).

Bob Montgomery <[email protected]> provided bug fixes, the initial RTC
driver and the initial ia64 driver.

Will Cohen <[email protected]> integrated the ia64 driver into the
oprofile release, and contributed bug fixes and several cleanups.

Will Deacon <[email protected]> has contributed patches as well as
his time to support the ARM architecture.

Graydon Hoare <[email protected]> provided P4 port, bug fixes and cleanups.

Ralf Baechle <[email protected]> provided the MIPS port.

Other contributors can be seen via 'git log'.

Building
--------

Please read the installation instructions in doc/oprofile.html or
http://oprofile.sourceforge.net/doc/install.html.
Only 2.6 kernels (or later) are supported.

Quick start :

(If using git: ./autogen.sh first. You need automake 1.5 or higher. You
can specify a different version, e.g.
ACLOCAL=aclocal-1.5 AUTOMAKE=automake-1.5 AUTOCONF=autoconf-2.13 AUTOHEADER=autoheader-2.13 ./autogen.sh)

Then run the following commands
	./configure [options]  (use './configure --help' to see options)
	make

oprofile's README is not written in as much detail as fio's README.

 

 

We can look to these examples when writing our own documentation. Thanks for reading!

Source: blog.csdn.net/rong11417/article/details/104813852