[Experience] Learning to write the version update notes and README files that code needs, by reading others' examples

Interpreting how others write version update notes (CHANGELOGs) and README files

Table of Contents

Interpreting how others write version update notes (CHANGELOGs) and README files

 Foreword

1. Interpreting hdck's CHANGELOG

2. Interpreting oprofile's ChangeLog

3. Interpreting fio's README

4. Interpreting oprofile's README



 Foreword

In the process of writing code we will inevitably need to write version change notes and README files (small user manuals). If we write them purely according to our own habits, the result may be incomplete in places, non-standard, or simply hard to read.

To find a suitable and widely accepted way of writing them, we need to read how others write these documents and learn from their conventions.

Let's take a look.

1. Interpreting hdck's CHANGELOG

hdck records its version changes in a file named CHANGELOG, that is, a log of changes.

hdck is a hard drive testing tool that is mainly compiled and run on Linux. It can measure the quality of each block, report which blocks are good or bad, and give a final assessment of the drive. In my opinion, it is an excellent piece of software.

Let's look at what's inside:

CHANGELOG

v 0.5.0 - 2011-02-06
--------------------

  * add support for ATA VERIFY command to allow testing drives over USB, 
    FireWire and behind SAS backplanes
  * fix crash with specific sized drives

v 0.4.0 - 2010-10-07
--------------------

  * change to quantile based detection
  * fix a bug in with min-reads == max-reads
  * tweaks to quick mode -- confirm only the 64 worst
  * fix bug with long options --log, --read-sectors and --bad-sectors not 
    working
  * reduced CPU utilisation

v 0.3.0 - 2010-08-25
--------------------

  * reduce number of global variables
  * quick mode
  * more verbose on-line status reporting
  * printing 10 worst (slowest) blocks in result
    std dev, min, max, avg, truncated avg, no of samples

v 0.2.6 - 2010-08-22
--------------------

  * fix printing of Individual block statistics

v 0.2.5 - 2010-08-22
--------------------

  * compare block times to fractions and multiples of rotational delay, 
    not fixed values
  * print simple diagnosis (disk condition)
  * printing status message on multiple lines and on stdout, not stderr
  * log output

v 0.2.4 - 2010-08-11
--------------------

 * make output more human readable
 * fix the problem with maximum number of block reads being 
   min-reads + max-reads

v 0.2.3 - 2010-08-11
--------------------

 * reduce number of unnecessery re-reads
 * add licences to files

v 0.2.2 - 2010-04-10
--------------------

 * bugfix -- fix overreading re-read blocks
 * make the program 64 bit compatible (not tested!)

v 0.2.1 - 2010-04-09
--------------------

 * bugfix -- not rereading blocks that need rereading
 * code clean up -- removed dead code

v 0.2 - 2010-04-08
------------------

 * reading and writing uncertian block ranges to file
 * writing detailed statistics to file
 * `background' mode -- lower priority and less confidence for the results
 * works with single whole disk read, rereads only sectors that are much
   slower than the neighbours
 * uses disk RPM for detecting blocks that are consistently reread by the disk
 * on rereads reads disk cache worth of sectors before the scoring blocks to
   improve quality of the results

We can see that the entries are ordered from newest to oldest: the latest version and its notes are at the top, and the earliest version is at the bottom. So as soon as you open the file you can see what the latest version is and when it was written. Each version number and date is underlined with a row of dashes and followed by a blank line, which clearly separates it from the version above and from its list of changes below. Each change item is indented one space and starts with an asterisk, so a reader can immediately pick out the individual changes and see how many items a release contains; besides asterisks, numbers could also be used to make the count even clearer. The change descriptions themselves are kept short and action oriented, so anyone reading the file can quickly understand what changed.
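To make the convention concrete, here is a rough sketch of a new entry written in the same style (the version number, date, and change items are invented for illustration and are not taken from hdck's real history):

v 0.5.1 - 2011-03-01
--------------------

  * fix an example bug in the block read loop
  * add an example --quiet option

Note the row of dashes under the version line, the blank line after it, and the single space plus asterisk in front of each change item.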

 

2. Interpreting oprofile's ChangeLog

Next, let's look at oprofile.

oprofile is a performance profiling tool on Linux. It samples events provided by the CPU's hardware performance counters, analyzes where a program spends its time at the code level, and helps locate the program's performance hot spots.

 

How oprofile's change log is written:

Its change log is split into several files by year, named as follows:

ChangeLog-2002, ChangeLog-2003, ..., ChangeLog-2011

Let's look at the file ChangeLog-2010:

2010-12-16  John Villalovos <[email protected]>

	* events/Makefile.am:
	* libop/op_cpu_type.c:
	* libop/op_cpu_type.h:
	* libop/op_hw_specific.h:
	* libop/op_events.c:
	* utils/ophelp.c:
	* events/i386/westmere/events (new):
	* events/i386/westmere/unit_masks (new): Add support for Intel
	  Westmere micro-architecture processors

2010-12-15  Will Cohen  <[email protected]>

	* libop/op_cpu_type.c:
	* libop/op_cpu_type.h:
	* libop/op_hw_specific.h: User-space identification of processors
	  that support Intel architectural events

2010-12-14  Suravee Suthikulpanit <[email protected]>

	* oprofile/daemon/opd_ibs_trans.c: Fix non-x86 build issue 
	  due to cpuid instruction

2010-10-15  Roland Grunberg  <[email protected]>

	* libop/op_xml_events.c:
	* libop/op_xml_out.c:
	* libop/op_xml_out.h:
	* doc/ophelp.xsd: Add unit mask type attribute for an event in
	  the ophelp schema

2010-10-15  Maynard Johnson  <[email protected]>

	* doc/ophelp.xsd:
	* libop/op_xml_events.c: Fix schema validation issues and error in
	  xml generation

2010-10-13  Maynard Johnson  <[email protected]>

	* libabi/opimport.cpp: Fix uninitialized variable warning when
	  building with gcc 4.4

2010-10-13  Maynard Johnson  <[email protected]>

	* events/mips/Makefile.am: Correction to 8/26 MIPS patch
	  to add events and unit_masks to makefile

2010-10-07  William Cohen  <[email protected]>

	* events/i386/arch_perfmon/events: Correct filter values.

2010-08-02  Maynard Johnson  <[email protected]>

	* utils/opcontrol:
	* libpp/profile_spec.cpp:
	* pp/oparchive.cpp:  Moved the copying of stats to opcontrol::do_dump_data
	  and removed the unnecessary and confusing message that indicated
	  when overflow stats were not available.

2010-06-11  William Cohen <[email protected]>

        * libregex/stl.pat.in: Avoid machine specific configuration.

2010-05-18  Daniel Hansel  <[email protected]>

	* doc/oprofile.xml: Document that only kernel versions 2.6.13 or
	  later provide support for anonymous mapped regions

2010-04-13  Maynard Johnson  <[email protected]>

	* libutil++/bfd_support.cpp: Fix up translate_debuginfo_syms
	  so it doesn't rely on section index values being the same
	  between real image and debuginfo file (to resolve problem
	  reported by Will Cohen on Fedora 12)

2010-03-25  Oliver Schneider  <[email protected]>

	* libpp/parse_filename.cpp:  Catch case where a basic_string::erase
	  error can occur in opreport when parsing an invalid sample file name

2010-03-25  Maynard Johnson  <[email protected]>

	* events/mips/loongson2/events: New File
	* events/mips/loongson2/unit_masks: New File
	   I neglected to do 'cvs add' for these new two new files
	   back on Nov 25, 2009 when I committed the initial
	   loongson2 support.  This change corrects that error.

2010-03-01  Gabor Loki  <[email protected]>

	* daemon/opd_pipe.c: Fix memory leak
	* utils/opcontrol: Fix messages sending method to opd_pipe

2010-01-20  Maynard Johnson  <[email protected]>

	* m4/qt.m4: Fix qt lib check so it works on base 64-bit system


See ChangeLog-2009 for earlier changelogs.

We can see that, like the first example, the entries run from newest at the top to oldest at the bottom, and at the very bottom there is a pointer to ChangeLog-2009 for earlier changes. Splitting the change log into one file per year is a very good practice. The entry format differs from the first example: there is no version number; instead each entry records the date of the change, the author's name and email address, and then the change itself. With the email address recorded, you can see exactly who changed the code, and if a change causes a problem you can contact its author directly by mail. This practice is very good and worth promoting. The change descriptions are written the same way as before: each changed file is indented and prefixed with an asterisk.
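As a sketch of this second convention (the date, name, email address, and file names below are placeholders for illustration, not real oprofile history), a new entry would be written like this:

2011-01-05  Jane Doe  <jane@example.org>

	* libfoo/example.c:
	* libfoo/example.h: Short description of what was changed and why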

 

 

Having covered two change logs, let's now look at how others write their README files.

3. Interpreting fio's README

fio is a mainstream tool on Linux for testing disk I/O.

 

fio's README file:

Overview and history
--------------------

Fio was originally written to save me the hassle of writing special test case
programs when I wanted to test a specific workload, either for performance
reasons or to find/reproduce a bug. The process of writing such a test app can
be tiresome, especially if you have to do it often.  Hence I needed a tool that
would be able to simulate a given I/O workload without resorting to writing a
tailored test case again and again.

A test work load is difficult to define, though. There can be any number of
processes or threads involved, and they can each be using their own way of
generating I/O. You could have someone dirtying large amounts of memory in an
memory mapped file, or maybe several threads issuing reads using asynchronous
I/O. fio needed to be flexible enough to simulate both of these cases, and many
more.

Fio spawns a number of threads or processes doing a particular type of I/O
action as specified by the user. fio takes a number of global parameters, each
inherited by the thread unless otherwise parameters given to them overriding
that setting is given.  The typical use of fio is to write a job file matching
the I/O load one wants to simulate.


Source
------

Fio resides in a git repo, the canonical place is:

	git://git.kernel.dk/fio.git

When inside a corporate firewall, git:// URL sometimes does not work.
If git:// does not work, use the http protocol instead:

	http://git.kernel.dk/fio.git

Snapshots are frequently generated and :file:`fio-git-*.tar.gz` include the git
meta data as well. Other tarballs are archives of official fio releases.
Snapshots can download from:

	http://brick.kernel.dk/snaps/

There are also two official mirrors. Both of these are automatically synced with
the main repository, when changes are pushed. If the main repo is down for some
reason, either one of these is safe to use as a backup:

	git://git.kernel.org/pub/scm/linux/kernel/git/axboe/fio.git

	https://git.kernel.org/pub/scm/linux/kernel/git/axboe/fio.git

or

	git://github.com/axboe/fio.git

	https://github.com/axboe/fio.git


Mailing list
------------

The fio project mailing list is meant for anything related to fio including
general discussion, bug reporting, questions, and development. For bug reporting,
see REPORTING-BUGS.

An automated mail detailing recent commits is automatically sent to the list at
most daily. The list address is [email protected], subscribe by sending an
email to [email protected] with

	subscribe fio

in the body of the email. Archives can be found here:

	http://www.spinics.net/lists/fio/

and archives for the old list can be found here:

	http://maillist.kernel.dk/fio-devel/


Author
------

Fio was written by Jens Axboe <[email protected]> to enable flexible testing of
the Linux I/O subsystem and schedulers. He got tired of writing specific test
applications to simulate a given workload, and found that the existing I/O
benchmark/test tools out there weren't flexible enough to do what he wanted.

Jens Axboe <[email protected]> 20060905


Binary packages
---------------

Debian:
	Starting with Debian "Squeeze", fio packages are part of the official
	Debian repository. http://packages.debian.org/search?keywords=fio .

Ubuntu:
	Starting with Ubuntu 10.04 LTS (aka "Lucid Lynx"), fio packages are part
	of the Ubuntu "universe" repository.
	http://packages.ubuntu.com/search?keywords=fio .

Red Hat, Fedora, CentOS & Co:
	Starting with Fedora 9/Extra Packages for Enterprise Linux 4, fio
	packages are part of the Fedora/EPEL repositories.
	https://apps.fedoraproject.org/packages/fio .

Mandriva:
	Mandriva has integrated fio into their package repository, so installing
	on that distro should be as easy as typing ``urpmi fio``.

Arch Linux:
        An Arch Linux package is provided under the Community sub-repository:
        https://www.archlinux.org/packages/?sort=&q=fio

Solaris:
	Packages for Solaris are available from OpenCSW. Install their pkgutil
	tool (http://www.opencsw.org/get-it/pkgutil/) and then install fio via
	``pkgutil -i fio``.

Windows:
	Rebecca Cran <[email protected]> has fio packages for Windows at
	https://bsdio.com/fio/ . The latest builds for Windows can also
	be grabbed from https://ci.appveyor.com/project/axboe/fio by clicking
	the latest x86 or x64 build, then selecting the ARTIFACTS tab.

BSDs:
	Packages for BSDs may be available from their binary package repositories.
	Look for a package "fio" using their binary package managers.


Building
--------

Just type::

 $ ./configure
 $ make
 $ make install

Note that GNU make is required. On BSDs it's available from devel/gmake within
ports directory; on Solaris it's in the SUNWgmake package.  On platforms where
GNU make isn't the default, type ``gmake`` instead of ``make``.

Configure will print the enabled options. Note that on Linux based platforms,
the libaio development packages must be installed to use the libaio
engine. Depending on distro, it is usually called libaio-devel or libaio-dev.

For gfio, gtk 2.18 (or newer), associated glib threads, and cairo are required
to be installed.  gfio isn't built automatically and can be enabled with a
``--enable-gfio`` option to configure.

To build fio with a cross-compiler::

 $ make clean
 $ make CROSS_COMPILE=/path/to/toolchain/prefix

Configure will attempt to determine the target platform automatically.

It's possible to build fio for ESX as well, use the ``--esx`` switch to
configure.


Windows
~~~~~~~

On Windows, Cygwin (https://www.cygwin.com/) is required in order to build
fio. To create an MSI installer package install WiX from
https://wixtoolset.org and run :file:`dobuild.cmd` from the :file:`os/windows`
directory.

How to compile fio on 64-bit Windows:

 1. Install Cygwin (http://www.cygwin.com/). Install **make** and all
    packages starting with **mingw64-x86_64**. Ensure
    **mingw64-x86_64-zlib** are installed if you wish
    to enable fio's log compression functionality.
 2. Open the Cygwin Terminal.
 3. Go to the fio directory (source files).
 4. Run ``make clean && make -j``.

To build fio for 32-bit Windows, ensure the -i686 versions of the previously
mentioned -x86_64 packages are installed and run ``./configure
--build-32bit-win`` before ``make``. To build an fio that supports versions of
Windows below Windows 7/Windows Server 2008 R2 also add ``--target-win-ver=xp``
to the end of the configure line that you run before doing ``make``.

It's recommended that once built or installed, fio be run in a Command Prompt or
other 'native' console such as console2, since there are known to be display and
signal issues when running it under a Cygwin shell (see
https://github.com/mintty/mintty/issues/56 and
https://github.com/mintty/mintty/wiki/Tips#inputoutput-interaction-with-alien-programs
for details).


Documentation
~~~~~~~~~~~~~

Fio uses Sphinx_ to generate documentation from the reStructuredText_ files.
To build HTML formatted documentation run ``make -C doc html`` and direct your
browser to :file:`./doc/output/html/index.html`.  To build manual page run
``make -C doc man`` and then ``man doc/output/man/fio.1``.  To see what other
output formats are supported run ``make -C doc help``.

.. _reStructuredText: http://www.sphinx-doc.org/rest.html
.. _Sphinx: http://www.sphinx-doc.org


Platforms
---------

Fio works on (at least) Linux, Solaris, AIX, HP-UX, OSX, NetBSD, OpenBSD,
Windows, FreeBSD, and DragonFly. Some features and/or options may only be
available on some of the platforms, typically because those features only apply
to that platform (like the solarisaio engine, or the splice engine on Linux).

Some features are not available on FreeBSD/Solaris even if they could be
implemented, I'd be happy to take patches for that. An example of that is disk
utility statistics and (I think) huge page support, support for that does exist
in FreeBSD/Solaris.

Fio uses pthread mutexes for signalling and locking and some platforms do not
support process shared pthread mutexes. As a result, on such platforms only
threads are supported. This could be fixed with sysv ipc locking or other
locking alternatives.

Other \*BSD platforms are untested, but fio should work there almost out of the
box. Since I don't do test runs or even compiles on those platforms, your
mileage may vary. Sending me patches for other platforms is greatly
appreciated. There's a lot of value in having the same test/benchmark tool
available on all platforms.

Note that POSIX aio is not enabled by default on AIX. Messages like these::

    Symbol resolution failed for /usr/lib/libc.a(posix_aio.o) because:
        Symbol _posix_kaio_rdwr (number 2) is not exported from dependent module /unix.

indicate one needs to enable POSIX aio. Run the following commands as root::

    # lsdev -C -l posix_aio0
        posix_aio0 Defined  Posix Asynchronous I/O
    # cfgmgr -l posix_aio0
    # lsdev -C -l posix_aio0
        posix_aio0 Available  Posix Asynchronous I/O

POSIX aio should work now. To make the change permanent::

    # chdev -l posix_aio0 -P -a autoconfig='available'
        posix_aio0 changed


Running fio
-----------

Running fio is normally the easiest part - you just give it the job file
(or job files) as parameters::

	$ fio [options] [jobfile] ...

and it will start doing what the *jobfile* tells it to do. You can give more
than one job file on the command line, fio will serialize the running of those
files. Internally that is the same as using the :option:`stonewall` parameter
described in the parameter section.

If the job file contains only one job, you may as well just give the parameters
on the command line. The command line parameters are identical to the job
parameters, with a few extra that control global parameters.  For example, for
the job file parameter :option:`iodepth=2 <iodepth>`, the mirror command line
option would be :option:`--iodepth 2 <iodepth>` or :option:`--iodepth=2
<iodepth>`. You can also use the command line for giving more than one job
entry. For each :option:`--name <name>` option that fio sees, it will start a
new job with that name.  Command line entries following a
:option:`--name <name>` entry will apply to that job, until there are no more
entries or a new :option:`--name <name>` entry is seen. This is similar to the
job file options, where each option applies to the current job until a new []
job entry is seen.

fio does not need to run as root, except if the files or devices specified in
the job section requires that. Some other options may also be restricted, such
as memory locking, I/O scheduler switching, and decreasing the nice value.

If *jobfile* is specified as ``-``, the job file will be read from standard
input.

From this README we can see that it is divided into ten parts: 1. Overview and history; 2. Source; 3. Mailing list; 4. Author; 5. Binary packages; 6. Building; 7. Windows; 8. Documentation; 9. Platforms; 10. Running fio.

fio's README is written in considerable detail.
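Putting this structure together, a minimal README skeleton in the same style might look like the sketch below (the tool name, the sections kept, and the commands are placeholders for one's own project, not part of fio's documentation):

Overview and history
--------------------

One or two paragraphs on what the tool does and why it was written.

Source
------

	git://example.org/mytool.git

Building
--------

Just type::

 $ ./configure
 $ make
 $ make install

Running mytool
--------------

	$ mytool [options] [jobfile]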

 

4. Interpreting oprofile's README

Let's look at oprofile's README file:

OProfile provides a low-overhead profiler (operf) capable of both
single-application profiling and system-wide profiling.  There is
also a simple event counting tool (ocount).

You can find some documentation in the doc/ directory.

Please visit the oprofile website at : http://oprofile.sf.net/

oprofile was originally written by John Levon <[email protected]>
and Philippe Elie <[email protected]>.  The operf and ocount
tools were developed by Maynard Johnson <[email protected]>, who
is the current maintainer.

Dave Jones <[email protected]> provided bug fixes and support for
the AMD Athlon, and AMD Hammer families of CPUs. [email protected]
<[email protected]> contributed various AMD-related patches,
including Instruction-Based-Sampling support (available only in
pre-1.0 releases).

Bob Montgomery <[email protected]> provided bug fixes, the initial RTC
driver and the initial ia64 driver.

Will Cohen <[email protected]> integrated the ia64 driver into the
oprofile release, and contributed bug fixes and several cleanups.

Will Deacon <[email protected]> has contributed patches as well as
his time to support the ARM architecture.

Graydon Hoare <[email protected]> provided P4 port, bug fixes and cleanups.

Ralf Baechle <[email protected]> provided the MIPS port.

Other contributors can be seen via 'git log'.

Building
--------

Please read the installation instructions in doc/oprofile.html or
http://oprofile.sourceforge.net/doc/install.html.
Only 2.6 kernels (or later) are supported.

Quick start :

(If using git: ./autogen.sh first. You need automake 1.5 or higher. You
can specify a different version, e.g.
ACLOCAL=aclocal-1.5 AUTOMAKE=automake-1.5 AUTOCONF=autoconf-2.13 AUTOHEADER=autoheader-2.13 ./autogen.sh)

Then run the following commands
	./configure [options]  (use './configure --help' to see options)
	make

oprofile's README is not written in as much detail as fio's.

 

 

We can refer to these conventions when writing our own documentation. Thank you!



Origin blog.csdn.net/rong11417/article/details/104813852