PostgreSQL notes

1.

When using ODBC, make sure the encoding configured for the ODBC connector matches the database encoding, otherwise you will get garbled text. It is recommended to use utf8 for the table character set, the database encoding, and the ODBC connector encoding alike, to avoid garbled-text problems.
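
As a minimal C++ sketch of the idea (the DSN name, credentials and query are placeholders; the MySQL ODBC driver accepts a CHARSET keyword in the connection string, but verify this for your driver version), a program can request utf8 explicitly when it connects:

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <cstdio>

int main()
{
    SQLHENV env = SQL_NULL_HANDLE;
    SQLHDBC dbc = SQL_NULL_HANDLE;
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    // DSN, UID and PWD are placeholders; CHARSET=utf8 asks the MySQL ODBC
    // driver to use utf8 for the connection so it matches the database.
    char connStr[] = "DSN=mydb;UID=root;PWD=secret;CHARSET=utf8;";
    SQLCHAR outStr[1024];
    SQLSMALLINT outLen = 0;
    SQLRETURN rc = SQLDriverConnectA(dbc, NULL, (SQLCHAR *)connStr, SQL_NTS,
                                     outStr, sizeof(outStr), &outLen,
                                     SQL_DRIVER_NOPROMPT);
    if (SQL_SUCCEEDED(rc)) {
        printf("connected with utf8 charset\n");
        SQLDisconnect(dbc);
    } else {
        printf("connect failed\n");
    }
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}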

2.

On Windows, after installing MySQL, it is recommended to move the data directory to a non-system drive, to avoid problems when the system is restored.

  • Create a mysqldata directory on drive D
  • Grant the NetworkService account full permissions on the new directory
  • Copy all the contents of the old data directory into it
  • Modify my.ini so that datadir = D:/mysqldata (note the slashes: using \ may prevent the server from starting)
  • Restart the MySQL service, or simply reboot the system

3.

On a 64-bit machine you can install either a 64-bit or a 32-bit database, but the ODBC driver (the connector) must match the program: if the program is 32-bit, you must use the 32-bit ODBC connector. However, a 32-bit ODBC driver installed on a 64-bit system will not appear in the ODBC administration tool reachable from the menu, because that tool is the 64-bit one; you have to go into the Windows system directory, find SysWOW64, and run odbcad32.exe from there to add the 32-bit data source. For 64-bit drivers, use windows/system32/odbcad32.exe. Note the names of these two directories; this is how Microsoft designed compatibility for 32-bit programs on 64-bit systems.

4.

Be careful when using OTL to connect to the database; it does not seem to support utf8.

5.

When adding a user DSN in ODBC, give it the same name as the database. The name used in the program is actually the DSN name, and if it differs from the database name things get confusing.

6.

If installing the ODBC driver fails with "cannot be loaded" or "the specified module could not be found", the driver may be too new and the server may be missing the corresponding Windows C++ runtime; installing the latest VC++ redistributable fixes it.

7.

The database is utf8, but the strings read back inside the program are not utf8. This can be an ODBC configuration issue. The problem we ran into was with the ANSI driver: the database is utf8 and reads came back garbled. The fix was to open the DSN's detailed settings and set Connection -> Character Set to latin1; setting it to utf8 was also problematic.
The reason is that the encoding 65001 (utf8) in the Navicat connection properties corresponds to latin1 on the ODBC side, while utf8 on the ODBC side corresponds to the utf8 character set used by MySQL itself.

ps. Using the latest Navicat and MySQL's own Connector/C++ solves this problem: select utf8 for the character-set option in all the connection configurations, and the database encoding in Navicat follows automatically.

8.

The MySQL Connector/C++ installation must match the application: if the application is 32-bit, the connector must be 32-bit. Whether the database itself is 32-bit or 64-bit does not matter.

9.

MySQL Connector/C++'s sqlstring is not always compatible with std::string and can crash, because the official MySQL binaries are compiled with C/C++ -> Code Generation -> Runtime Library set to /MD. If your project does not use the same setting, the two cannot be mixed and you will get errors; change your project to the same runtime library.

If changing your project is too much trouble, you can get the source code and compile the connector yourself, but compiling it on Windows is quite troublesome.

10.

Using the MySQL C++ Connector requires the Boost library.
Under C/C++ -> General -> Additional Include Directories, add:
boost
c:/program files (x86)/MySQL/MySQL Connector C++ 1.1.9/include

Under Linker -> General -> Additional Library Directories, add:
c:/program files (x86)/mysql/mysql connector c++ 1.1.9/lib/opt

Under Linker -> Input -> Additional Dependencies, add:
mysqlcppconn.lib

11.

Using the MySQL C++ Connector requires including the following header files:

#include "mysql_connection.h"
#include <cppconn/driver.h>
#include <cppconn/exception.h>
#include <cppconn/resultset.h>
#include <cppconn/statement.h>
#include <cppconn/prepared_statement.h>
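
A minimal usage sketch with these headers (the host, schema, credentials and query are placeholders; this follows the JDBC-style API that Connector/C++ 1.1 exposes):

#include <iostream>
#include <memory>
#include "mysql_connection.h"
#include <cppconn/driver.h>
#include <cppconn/exception.h>
#include <cppconn/resultset.h>
#include <cppconn/statement.h>
#include <cppconn/prepared_statement.h>

int main()
{
    try {
        // Obtain the driver singleton and open a connection
        sql::Driver *driver = get_driver_instance();
        std::unique_ptr<sql::Connection> con(
            driver->connect("tcp://127.0.0.1:3306", "root", "secret"));
        con->setSchema("testdb");

        // Run a simple query and print the result column
        std::unique_ptr<sql::Statement> stmt(con->createStatement());
        std::unique_ptr<sql::ResultSet> res(
            stmt->executeQuery("SELECT 'hello' AS msg"));
        while (res->next()) {
            std::cout << res->getString("msg") << std::endl;
        }
    } catch (sql::SQLException &e) {
        std::cerr << "SQLException: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}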

 

12.

Compiling the MySQL connector on Linux fails with a CMAKE_CXX_COMPILER error
because gcc-c++ is not installed.

13.

cmake . -DBOOST_ROOT=/mnt/dbbackup/boost_1_65_1
specifies the Boost directory; it must be an absolute path.

14.

When using MySQL Connector/C++, the debug and release configurations of your project must use the corresponding debug/release dll and lib; otherwise, using strings will cause errors.

15.

You can also build the Connector/C++ project yourself following the MySQL website and compile matching release and debug versions; cmake and the other required tools can be found under the Visual Studio installation directory, and the matching lib should be put in place when compiling. The advantage is that you can call the functions in the dll directly without having to look up exported function names; when deploying, put the corresponding dll in the program's directory.

16.

By default the database and the connection use utf8 without a BOM; explicitly specifying utf8 encoding means utf8 with a BOM. MySQL Workbench uses non-BOM utf8 by default.

17.

show full processlist
shows the current connections to the database.
kill 2222
kills a specific connection (2222 is the Id from the process list).
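
If this needs to be done from a program instead of a MySQL client, a sketch using the same Connector/C++ API as in item 11 (connection details are placeholders) might look like:

#include <iostream>
#include <memory>
#include "mysql_connection.h"
#include <cppconn/driver.h>
#include <cppconn/resultset.h>
#include <cppconn/statement.h>

int main()
{
    sql::Driver *driver = get_driver_instance();
    std::unique_ptr<sql::Connection> con(
        driver->connect("tcp://127.0.0.1:3306", "root", "secret"));
    std::unique_ptr<sql::Statement> stmt(con->createStatement());

    // List the current connections, like "show full processlist"
    std::unique_ptr<sql::ResultSet> res(
        stmt->executeQuery("SHOW FULL PROCESSLIST"));
    while (res->next()) {
        std::cout << res->getUInt64("Id") << "  "
                  << res->getString("User") << "  "
                  << res->getString("Host") << std::endl;
    }

    // To terminate one of them, e.g. connection 2222:
    // stmt->execute("KILL 2222");
    return 0;
}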

18.

innodb_flush_log_at_trx_commit = 2
# 0: the log buffer is written to the log file on disk once per second, and nothing is done when a transaction commits
#    (the write is performed by MySQL's master thread, which writes the redo log buffer to the redo log files every second
#    whether or not the transaction has committed). The default log files are ib_logfile0 and ib_logfile1.
# 1: the default. On every transaction commit the log buffer is flushed to the log file.
# 2: on every transaction commit the log is written but not flushed; the log file is flushed once per second.
#    Note that the once-per-second flush is not 100% guaranteed; it depends on process scheduling.
#    On commit the transaction log is only written to the file system cache, so there is no guarantee the data has reached the physical disk.
# 1 is the default and is needed for full ACID guarantees. You can set this to another value in exchange for higher performance, but a crash can then lose up to one second of data.
# With 0, a crash of the mysqld process loses the last second of transactions. With 2, only an operating system crash or power failure loses the last second of data. InnoDB ignores this value when doing recovery.
# Summary
# 1 is the safest but has the worst performance (relative to the other two values, though not unacceptably so). If you do not require strict data consistency and integrity, set it to 2; if only performance matters, for example a high-concurrency write log server, set it to 0 for the highest performance.

19.

max_allowed_packet = 16M  # maximum packet length the server can send and receive

20.

After MySQL is installed this setting is very small (it defaults to 8M), which can cause too many disk operations when the database does a lot of reading and writing and make the whole system stutter. It needs to be set much larger; this is a parameter that must be changed when setting up a MySQL server.

innodb_buffer_pool_size = 64M
# InnoDB, unlike MyISAM, uses a buffer pool to cache both data and indexes.
# The larger you set this, the less disk I/O is needed to access the data in your tables.
# On a dedicated database server you can set it to up to 80% of the machine's physical memory.
# Do not set it too large, though, or competition for physical memory may cause heavy paging in the operating system.
# Note that on 32-bit systems each process may be limited to 2-3.5G of user-level memory, so do not set it too high.

21.

After executing a statement in PostgreSQL, when checking whether it succeeded, there are several possible result statuses:

typedef enum
{
    PGRES_EMPTY_QUERY = 0,    /* empty query string was executed */
    PGRES_COMMAND_OK,         /* a query command that doesn't return
                               * anything was executed properly by the
                               * backend */
    PGRES_TUPLES_OK,          /* a query command that returns tuples was
                               * executed properly by the backend, PGresult
                               * contains the result tuples */
    PGRES_COPY_OUT,           /* Copy Out data transfer in progress */
    PGRES_COPY_IN,            /* Copy In data transfer in progress */
    PGRES_BAD_RESPONSE,       /* an unexpected response was recv'd from the
                               * backend */
    PGRES_NONFATAL_ERROR,     /* notice or warning message */
    PGRES_FATAL_ERROR,        /* query failed */
    PGRES_COPY_BOTH,          /* Copy In/Out data transfer in progress */
    PGRES_SINGLE_TUPLE        /* single tuple from larger resultset */
} ExecStatusType;

Success is not just one status: PGRES_EMPTY_QUERY means an empty query string returned correctly, PGRES_COMMAND_OK means a command with no return rows (such as INSERT or UPDATE) executed correctly, and PGRES_TUPLES_OK means a query that returns rows (such as SELECT) executed correctly.
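
A minimal libpq sketch of that check (the connection string and the statement are placeholders):

#include <libpq-fe.h>
#include <cstdio>

int main()
{
    PGconn *conn = PQconnectdb("host=127.0.0.1 dbname=testdb user=postgres password=secret");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    PGresult *res = PQexec(conn, "SELECT 1");
    ExecStatusType st = PQresultStatus(res);
    // A statement that returns rows (SELECT) succeeds with PGRES_TUPLES_OK,
    // one with no result rows (INSERT, UPDATE) with PGRES_COMMAND_OK,
    // and an empty query string with PGRES_EMPTY_QUERY.
    if (st == PGRES_TUPLES_OK || st == PGRES_COMMAND_OK || st == PGRES_EMPTY_QUERY) {
        printf("statement succeeded: %s\n", PQresStatus(st));
    } else {
        fprintf(stderr, "statement failed: %s", PQresultErrorMessage(res));
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}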

Origin www.cnblogs.com/studywithallofyou/p/11351346.html