Makefiles on Windows

Reference: http://www.cnblogs.com/repository/archive/2011/05/18/2050546.html

I. Installation

      MinGW-3.1.0-1.exe       gcc for Windows, the tool for compiling C programs (download MinGW-3.1.0-1.exe, then double-click to install it to C:\MinGW)

      make.exe                     the tool that builds a project according to the rules in a Makefile (download mingw32-make-3.80.0-3.exe, then double-click to install it to C:\MinGW; in C:\MinGW\bin, rename mingw32-make.exe to make.exe, or make a copy under that name)

      Add C:\MinGW\bin to the PATH environment variable.
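To check that both tools are reachable, open a new command prompt and run the two commands below; each should print its version banner if the installation succeeded:

C:\>gcc --version
C:\>make --version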

II. Makefile walkthrough

1. main.c

#include "stdio.h"
main(){
       func();
       printf("this is main\n");
       getch();
}

2. func.c

#include "stdio.h"
func(){
       printf("in func\n");
       getch();
}

3. Makefile (or makefile). This exact name matters, because it is the file make looks for by default.

test: main.o func.o
	gcc -o test main.o func.o

func.o: func.c
	gcc -c func.c

main.o: main.c
	gcc -c main.c

Line 1: declares the executable test, which depends on main.o and func.o; to produce test, those two must be built first.
Line 2: the gcc command must be preceded by a tab character (an actual tab, not spaces); this recipe links the object files into test.

The rules that follow tell make how to build those dependencies.

Then run compile.bat and test.exe is built, along with the two intermediate files func.o and main.o, which are analogous to the .obj files in VC.

4. compile.bat

rem run make, then start a shell so the console window stays open
make
cmd
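Running compile.bat should then print roughly the following (make echoes each command before executing it) and leave test.exe, main.o and func.o in the directory:

gcc -c main.c
gcc -c func.c
gcc -o test main.o func.o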
 

III. gcc options

      Unlike compilers such as VC and TC, GCC is actually very convenient to drive from the command line. Compiling at the prompt with GCC does not involve the long, obscure option strings that VC needs; on the contrary, its options are shorter and more flexible.
      Admittedly, anyone who does not know GCC's options misses out on some of its power, so below I briefly introduce a few of the basic ones, taking the C compiler as my example.

Compiling to object code
$gcc -c yours.c -o yours.o
With this command, GCC compiles yours.c into the object file yours.o, the counterpart of the .obj files of VC and TC.

 
Compiling a simple program
$gcc -o yours yours.c
With this command, GCC compiles the source file yours.c into an executable named yours. You can also replace yours.c with the yours.o file produced above, in which case gcc links the already-compiled object file instead of recompiling. Note the format: -o is followed by a list of files, the first of which names the program being built, while the rest are the object files or source files needed to compile and link it.
 
Adding your own header directory to the search path
$gcc -I"Your_Include_Files_Document_Path" -c yours.c -o yours.o
The -I option adds Your_Include_Files_Document_Path to the directories searched for headers, so you can write #include <your_include.h> to pull in headers stored there.

Adding your own static library directory
$gcc -L"Your_Lib_Files_Document_Path" -o yours yours.o
At link time, this makes GCC search Your_Lib_Files_Document_Path for the requested static libraries, in addition to the default library directories.

Linking against a static library
$gcc -o yours yours.o -lyour_lib
This makes GCC link the functions you use from libyour_lib.a into the executable. Note that the static libraries GCC consumes are named lib*.a, and on the command line you supply only the * part. (The library is listed after the object files that use it, because the linker resolves symbols from left to right.)
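To illustrate the naming convention, here is a minimal sketch of building and linking your own static library; the file mylib.c and the library name are hypothetical:

$gcc -c mylib.c -o mylib.o
$ar rcs libmylib.a mylib.o
$gcc -o yours yours.o -L. -lmylib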

Compiling with optimization
$gcc -O2 -c yours.c -o yours.o
This compiles the program with optimization enabled. Besides -O2 there are -O1, -O3 and so on, standing for different optimization levels; -O2 is the most commonly used. There are also optimizations targeting particular CPUs, which I will not cover here.

Showing all warnings
$gcc -Wall -c yours.c -o yours.o
By default, GCC keeps quiet about problems such as a variable that is declared but never used, or one that is used without being initialized. With -Wall, the compiler lists all such warnings, so you can see how many places in your code might break on another platform. (Try this command and see how many spots in your code are not written quite right.)

Building in debug information
$gcc -g -o yours yours.c
Just as VC has a debug build mode, so does GCC. An executable compiled with -g is somewhat larger than a normal one, because it carries extra debugging information that gdb understands.
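Once built with -g, the program can be inspected in gdb, if gdb is installed; a minimal session sketch (yours is the executable from above):

$gdb ./yours
(gdb) break main
(gdb) run
(gdb) next
(gdb) quit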

Shrinking the executable at link time
$gcc -s -o yours yours.o
The -s option strips the symbol table from the resulting executable, which can shrink it considerably. I first noticed it when someone said Visual-MinGW produced small binaries: looking at its compile options, I found that its release mode adds exactly this flag, and the compiled output did indeed become much smaller.

Getting help
$gcc --help
As the name suggests, this prints gcc's help text. If you have some special need, this command may be of a little help.

IV. Makefile in practice: building Newman's Fast Algorithm

      Newman's Fast Algorithm is a community detection algorithm I used in my experiments. The author generously released the code as a C project, built with a Makefile. Newman's homepage is http://www-personal.umich.edu/~mejn/ ; reading it carefully, you will find the line "Information and code for the fast community structure algorithm is here." Following that link leads to the page below, from which the code can be downloaded:

      http://cs.unm.edu/~aaron/research/fastmodularity.htm

      In case that page ever goes offline, I reproduce it in full below; following its steps is enough to use the code. Unfortunately, when I first ran make, it failed with an error: make had mistaken some of the targets in the Makefile for files. Declaring those targets with .PHONY solves the problem, as in the sketch below.
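A minimal sketch of the fix; the target name clean is only an example, and .PHONY should list whichever targets make was treating as files:

.PHONY: clean
clean:
	rm -f *.o fastcommunity_mh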

"Fast Modularity" Community Structure Inference Algorithm

This page documents and supports the fast modularity maximization algorithm I developed jointly with Mark Newman and Cristopher Moore. This algorithm is being widely used in the community of complex network researchers, and was originally designed for the express purpose of analyzing the community structure of extremely large networks (i.e., hundreds of thousands or millions of vertices). The original version worked only with unweighted, undirected networks. I've recently posted a version that works on weighted, undirected networks.

Update February 2007: Please see my recent blog entry about this algorithm.

Update May 2007: See also the igraph library, a pretty comprehensive library of network utility functions (generation and analysis), including an implementation of the fast modularity algorithm described here (along with a few other nice clustering heuristics).

Update October 2008: I've finally gotten around to posting a version that works with weighted networks, which is available here. This version wants a .wpairs file as input, which is an edge list with integer weights, e.g., "54\t91\t3\n" would be an edge with weight 3. Otherwise, it should work just the same as the unweighted version.

Update December 2009: If you're going to use this algorithm, you should also read this paper, which describes the performance (pro and con) of modularity maximization in practical contexts.

B. H. Good, Y.-A. de Montjoye and A. Clauset, "The performance of modularity maximization in practical contexts."
Phys. Rev. E 81, 046106 (2010).

Journal Reference
A. Clauset, M.E.J. Newman and C. Moore, "Finding community structure in very large networks."
Phys. Rev. E 70, 066111 (2004).

My request
For the past year, I have passed out the code personally and have asked each person who uses it to please send me a pre-print of any papers they produce with the code. I ask the same of each of you, as it's nice to know what projects this algorithm is being used in. You may email me directly at [email protected] with the rest of the address being obvious.

 

Get the code
The code is available in both .tgz and .zip formats. It contains the following files (with corresponding descriptions):

gpl.txt - GPL (version 2)
Makefile - makefile for compiling the executable
fastcommunity_mh.cc - main algorithm file; this has the heart of the algorithm described in the paper
maxheap.h - max-heap data structure for storing the dQ values (linked to the vektor data structure)
vektor.h - sparse matrix row data structure for storing dQ values (linked to the maxheap data structure)



Compiling
The Makefile provided should be sufficient to compile the executable (built in ANSI C/C++ with platform independent code).

 

Input file requirements
The program was written to handle networks in the form of a flat text file containing edge adjacencies. I call this file format a .pairs file. Specifically, the network topology must conform to the following requirements:
• .pairs is a list of tab-delimited pairs of numeric indices, e.g., "54\t91\n"
• the network described is a SINGLE COMPONENT
• there are NO SELF-LOOPS or MULTI-EDGES in the file
• the MINIMUM NODE ID = 0 in the input file
• the MAXIMUM NODE ID can be anything, but the program will use less memory if nodes are labeled sequentially

Obviously, this file format is a bit peculiar, but it was sufficient for the demonstration that the algorithm performs as promised. You are free to alter the file import function readInputFile() to fit your needs.

An example input file, for Zachary's karate club network, is here.
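For illustration, a tiny hand-made .pairs file satisfying the rules above (a single component of four vertices, tab-delimited, IDs starting at 0) could look like this:

0	1
0	2
1	2
2	3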

Running the program
This usage information can be retrieved by running the executable with no arguments:

-f <filename> give the target .pairs file to be processed
-l <text> the text label for this run; used to build output filenames
-t <int> timer period for reporting progress of file input to screen
-s calculate and record the support of the dQ matrix
-v --v ---v differing levels of screen output verbosity
-c <int> record the agglomerated network at step <int> (typically, this is the step location at which the modularity is known to be maximum)

(Please see the notes in the .cc file for the most up-to-date version of this information.) The typical usage will be to first create the .pairs file containing your network, then run the program like

./fastcommunity_mh -f myNetwork.pairs -l firstRun

and then consult the file outputs as described below. If you want to then examine the communities that have been built by the algorithm, you would run the algorithm a second time like so

./fastcommunity_mh -f myNetwork.pairs -l secondRun -c X

where X is the time step at which the maximum modularity was found, as reported in the .info file. Again, this could probably be automated, either by modifying the code or by wrapping it all in another script.

File outputs
Running the program on some input network will produce a set of files, each with a common naming convention, where the file type is encoded by the suffix. This information can also be retrieved by running the executable with the argument -files.

Mandatory file outputs

.info Various information about the program's run. Includes a listing of the files it generates, the number of vertices and edges processed, the maximum modularity found and the corresponding step (you can re-run the program with this value in the -c argument to have it output the contents of the clusters, etc., when it reaches that step again; not the most efficient solution, but it works), start/stop times, and, when -c is used, some information about the distribution of cluster sizes.
.joins The dendrogram and modularity information from the algorithm. The file format is tab-delimited columns of data, where the columns are:
1. the community index which absorbs
2. the community index which was absorbed
3. the modularity value Q after the join
4. the time step of the join
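For illustration, a hypothetical .joins line such as the one below would mean that community 12 absorbed community 7 at time step 58, leaving modularity Q = 0.4321:

12	7	0.4321	58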

Optional file outputs (generated at step t=C when the -c C argument is used):

.wpairs The connectivity of the clustered graph in a .wpairs file format (i.e., weighted edges). The edge weights should be the dQ values associated with that clustered edge at time C. From this format, it's easy to convert into another for visualization (e.g., pajek's .net format).
.hist The size distribution of the clusters.
.groups A list of each group and the names of the vertices which compose it (this is particularly useful for verifying that the clustering makes sense - tedious but important).

Bugs and incompleteness
All of the features documented on this page work as advertised (although, if you find any bugs, please let me know). There are some other, undocumented features that may not work fully (if you read through the code, you may spot some of these), so use them at your own risk.

 

Code maintenance
I am no longer actively working on this code, since the project was completed with its publication in 2004. However, I'll try to give as much verbal support to interested users as is reasonable.


Reposted from chuanwang66.iteye.com/blog/1705531