Compare commits

...

209 Commits

Author SHA1 Message Date
YuQing fe647b81c2 adapt to libserverframe 1.2.5 2024-09-17 11:50:06 +08:00
YuQing f1650d9b45 fastdfs.spec: systemctl enable services 2024-05-15 14:23:35 +08:00
YuQing aef5984bd8 correct debuild error 2024-04-10 10:25:33 +08:00
YuQing d8df0ae9f3 add debian subdir for debuild 2024-04-10 10:17:25 +08:00
YuQing 75c5535f5a remove compile warnings 2024-03-11 15:29:36 +08:00
YuQing 10aa9a8de8 INSTALL changed for V6.12.1 2024-03-11 15:15:03 +08:00
YuQing 8b952615ec make.sh: set DEBUG_FLAG to 0 2024-03-06 17:46:25 +08:00
YuQing 422c63cc54 log square quoted IPv6 address 2024-03-06 15:35:19 +08:00
YuQing ff2aef1735 adapt to libserverframe 1.2.3 2024-02-26 22:14:50 +08:00
YuQing 5680a7fab8 make.sh: set DEBUG_FLAG to 0 2024-02-13 10:38:19 +08:00
YuQing e5c48a4132 check filename duplicate by hashtable instead of file system access 2024-02-12 20:17:02 +08:00
YuQing 9425e44ddc upgrade version to 6.12.0 2024-02-11 10:23:51 +08:00
YuQing 0893bd73d5
Merge pull request #687 from Rustmilian/patch-1
fgrep: warning: fgrep is obsolescent; use grep -F
2024-01-12 09:02:21 +08:00
Rustmilian 0a9fef1339
fgrep: warning: fgrep is obsolescent; using grep -F 2024-01-11 16:48:45 -05:00
YuQing 65aeeb28cc bugfixed: fdfs_server_info_to_string support IPv6 correctly 2024-01-11 11:49:36 +08:00
YuQing 030520cb3a use INIT_SCHEDULE_ENTRY_EX1 for new thread 2023-12-29 10:21:41 +08:00
YuQing de36a81893 bugfixed: parse ip and port use parseAddress instead of splitEx 2023-12-21 19:39:19 +08:00
YuQing 67af9585d8 proto fetch storage ids add field allow_empty 2023-12-12 20:29:44 +08:00
YuQing ba41d958b2 client.conf add config item: connect_first_by 2023-12-11 19:20:32 +08:00
YuQing fd77ababf5 connect to storage server failover with multi IPs 2023-12-11 16:07:45 +08:00
YuQing 48fb05dbb2 specify the storage server ID for NAT network 2023-12-10 15:12:20 +08:00
YuQing 47759f2cf7 small changes for conf files 2023-12-06 14:26:41 +08:00
YuQing 3187fb3a29 upgrade version to 6.11.0 2023-12-05 17:07:52 +08:00
YuQing f1ab599f16 add more description for config item: reserved_storage_space 2023-12-04 15:22:57 +08:00
YuQing bd79b4d35b parameter use_storage_id in tracker.conf MUST be set to true and
id_type_in_filename MUST be set to id when IPv6 is enabled
2023-12-01 16:50:01 +08:00
YuQing 7cae2bfb2b adapt to the newest libfastcommon and libserverframe 2023-11-29 20:18:50 +08:00
YuQing bfb3e9f887 code adjust for pull request #673 2023-11-24 19:46:16 +08:00
YuQing 1809d46020
Merge pull request #673 from sunqiangwei1988/master
Added: IPv6 support
2023-11-23 16:50:38 +08:00
YuQing d24c3200d1 Merge branch 'use_libfastcommon1.70' 2023-11-16 11:36:52 +08:00
sunqiangwei1988 09f2405f45 Added: IPv6 support
1. Add the ability to parse IPv6 addresses in the config files.
2. Update the config files with IPv6 configuration notes.
3. Add IPv6 support to the fdht client.
4. Add encoding of long IPv6 addresses into short ones, fixing the uniqueness problem that arises when a storage id uses an IPv6 address and only its first 16 characters are kept.
2023-11-02 10:30:01 +08:00
YuQing 047d858220 adapt to the newest sf_send_done_callback 2023-10-26 10:51:30 +08:00
YuQing cb73cfe1a5 small changes for code format 2023-10-25 09:48:21 +08:00
YuQing 8363843597 call conn_pool_connect_server with timeout_ms 2023-10-24 17:22:24 +08:00
YuQing 64e0a536dc upgrade version to 6.10.0 2023-10-24 10:22:46 +08:00
YuQing 219e6e5a1d multi path round robin more gracefully 2023-10-15 20:34:30 +08:00
YuQing a90e6c681d Merge branch 'master' of github.com:happyfish100/fastdfs 2023-10-15 11:19:58 +08:00
YuQing c7d01ff422
Merge pull request #665 from niloay6/fix_multi_path_round_robin
fix: when one of multiple store paths exceeds the storage threshold, writes with store_path=0 should proceed through the paths in order instead of always going to the first path
2023-10-15 11:18:22 +08:00
YuQing 2b111d9568 use free_queue_set_release_callback 2023-10-12 21:58:58 +08:00
YuQing e7caa614cf adapt to libfastcommon 1.70 and libserverframe 1.2.0 2023-10-12 17:55:11 +08:00
YuQing 4b42ed9fe2
Merge pull request #667 from sunqiangwei1988/master
Fixed: wrong length constant referenced in storage_report_my_server_id
2023-10-11 18:21:51 +08:00
sunqiangwei1988 5060231f33 Fixed: wrong length constant referenced in storage_report_my_server_id
In storage_report_my_server_id, when setting pkg_len, change the constant reference from IP_ADDRESS_SIZE to the correct FDFS_STORAGE_ID_MAX_SIZE.
2023-10-03 13:25:58 +08:00
niloay6 e831870c8d fix: with 4 store paths configured, when the second path (store_path1) exceeded the storage threshold, writes kept going to the first path (store_path0); selection should rotate store_path0 -> store_path2 -> store_path3 instead of always using store_path0 2023-09-26 09:17:08 +08:00
YuQing 7c58b147ca common/fdfs_global.c: upgrade g_fdfs_version 2023-06-05 15:22:41 +08:00
YuQing 16f6808001 upgrade version to 6.9.5 2023-06-05 15:20:53 +08:00
YuQing fbae1053ae fdht_client/fdht_func.c: fixed compile error 2023-04-25 09:26:50 +08:00
YuQing 5335a59313
Merge pull request #631 from Yang1032/fix-2
fix: fix realloc mistakes to avoid memory leaks
2023-04-25 07:50:22 +08:00
YuQing 1b0e3a4710
Merge pull request #630 from Yang1032/fix-1
fix: possible out-of-bounds issues with array access
2023-04-25 07:46:53 +08:00
maogen.ymg b2966596a9 fix: fix realloc mistakes to avoid memory leaks 2023-04-24 21:34:53 +08:00
maogen.ymg c4355d126a fix: possible out-of-bounds issues with array access 2023-04-24 17:19:29 +08:00
YuQing 969cb24400 add ExecStartPost=/bin/sleep 0.1 to systemd service files 2023-04-10 09:38:36 +08:00
YuQing 4f7715e378 files HISTORY and INSTALL changed 2023-02-15 21:11:24 +08:00
YuQing ca47b893d1 upgrade version to 6.9.4 2023-02-15 21:01:40 +08:00
YuQing 0376cf8e2c bugfixed: report connections' current_count and max_count correctly 2023-02-15 12:44:53 +08:00
YuQing 0615ac3867
Merge pull request #616 from 674019130/master
chore: fix README.md typo
2023-02-13 16:32:24 +08:00
Su 70b9636288 chore: fix README.md typo 2023-02-13 13:01:27 +08:00
YuQing 1e209da4e2 use epoll edge trigger to resolve github issues #608 2023-02-12 10:49:10 +08:00
happyfish100 7414bea9b5
!6 update docker/dockerfile_network/Dockerfile.
Merge pull request !6 from alexzshl/N/A
2023-02-02 01:28:38 +00:00
alexzshl dbd5874fdf
update docker/dockerfile_network/Dockerfile.
1. The current Dockerfile does not seem to build; following the notes in info.md, it builds after adding libserverframe.
2. The original RUN statement was too complex to benefit from multi-stage build caching during docker build.
3. One question: would upgrading nginx to the latest 1.23 cause any problem? Building nginx currently works fine.

Signed-off-by: alexzshl <alexzshl@126.com>
2023-01-31 18:44:50 +00:00
YuQing 3109593536 upgrade version to 6.9.3 2023-01-15 08:57:40 +08:00
YuQing f5f17ea6e7 sf_enable_thread_notify with false 2022-12-30 17:26:27 +08:00
YuQing 3f4d273746 simplify service name for tracker and storage 2022-12-24 15:05:10 +08:00
YuQing 079bc4737b use prctl to set pthread name under Linux 2022-12-24 10:29:42 +08:00
happyfish100 ed12daf5c3
!5 fix typo in storage_service.c
Merge pull request !5 from Ikko Ashimine/N/A
2022-12-16 08:34:12 +00:00
YuQing e2610befe3 upgrade version to 6.9.2 2022-12-16 16:21:34 +08:00
Ikko Ashimine 4bdcf2e6a9
fix typo in storage_service.c
seperated -> separated

Signed-off-by: Ikko Ashimine <eltociear@gmail.com>
2022-12-16 08:20:15 +00:00
YuQing bc52a5d6e1 output port with format %u instead %d 2022-12-16 16:11:41 +08:00
YuQing ab1c27c197 bugfixed: log connection ip_addr and port correctly 2022-12-16 16:00:40 +08:00
YuQing 326d83bb6e space size such as total_mb and free_mb use int64_t instead of int 2022-11-28 11:38:56 +08:00
YuQing 14079d19ef upgrade version to 6.9.1 2022-11-25 15:50:57 +08:00
YuQing 1c99ed8249 Merge branch 'master' of gitee.com:fastdfs100/fastdfs 2022-11-25 15:41:27 +08:00
YuQing 09491325cc bugfixed: clear task extra data correctly when the connection broken 2022-11-25 15:38:44 +08:00
happyfish100 3296c8c7a8
!4 wording changes
Merge pull request !4 from wzp/N/A
2022-10-21 06:40:20 +00:00
YuQing 4e714c13db add business support description 2022-09-19 09:28:39 +08:00
YuQing 92aa233134
Merge pull request #586 from 919927181/master
add nginx.d folder
2022-09-16 20:05:34 +08:00
liyanjing eacb2bce28 Due to the global configuration limitation, the nginx.d folder was missing from the last submission.
Signed-off-by: liyanjing <919927181@qq.com>
2022-09-16 19:14:56 +08:00
YuQing ffd788a840 remove *.d from .gitignore 2022-09-16 18:21:28 +08:00
YuQing 1a18598e1b
Merge pull request #585 from 919927181/master
Docker Build Image and Installation Manual For FastDFS-v6.0.8\v6.0.9
2022-09-16 08:50:51 +08:00
liyanjing 296df8c5fe fastdfs-v6.0.8\v6.0.9 Docker image build and installation manual
Signed-off-by: liyanjing <919927181@qq.com>
2022-09-16 08:27:53 +08:00
YuQing e6fcd3ecdd use atomic counter instead of mutex lock 2022-09-14 16:33:59 +08:00
YuQing 6befb09fe5 test programs compile OK. 2022-09-14 12:46:40 +08:00
YuQing 522bd50522 Merge branch 'use_libserverframe' 2022-09-14 11:25:48 +08:00
YuQing a54a109085 php_client/fastdfs_client.c compile OK. 2022-09-14 11:24:07 +08:00
YuQing bfa8a1eb4d remove useless FDFSConnectionStat type and global variable 2022-09-14 11:07:30 +08:00
YuQing 87139983c8 nio reform for file upload and download 2022-09-14 10:59:19 +08:00
wzp eb1639a5db
Tweaked a bit of the documentation text
Signed-off-by: wzp <winzip@163.com>
2022-09-13 15:16:04 +00:00
YuQing c6a92de3d2 set schedule id by sched_generate_next_id() 2022-09-13 20:35:09 +08:00
YuQing d7c0594565 storage nio use libserverframe 2022-09-13 16:30:43 +08:00
YuQing bf3bfa68f6 call sf_load_config_ex instead of sf_load_config 2022-09-12 15:05:57 +08:00
YuQing 8f538108ce tracker nio use libserverframe 2022-09-12 10:48:28 +08:00
YuQing 4dce44665d
Merge pull request #170 from mmcco/master
mmap(2) returns MAP_FAILED, not NULL, not failure
2022-09-11 21:04:18 +08:00
YuQing 1d7b15d1be use func sf_parse_daemon_mode_and_action from libserverframe 2022-09-11 09:51:52 +08:00
YuQing 2e342b6649 correct spell from fastcfs to fastdfs 2022-07-19 15:48:09 +08:00
happyfish100 9767b1e1de
!3 storage startup error
Merge pull request !3 from juntt/master
2022-07-06 10:19:02 +00:00
juntt fc31983958
update tracker/fdfs_shared_func.c.
When tracker.conf configures the reserved space as a capacity:
reserved_storage_space = 1g
the storage server fails to start with this error log:
[2022-07-05 18:12:09] ERROR - file: shared_func.c, line: 2449, unkown byte unit:  MB, input string: 1024 MB
[2022-07-05 18:12:09] CRIT - exit abnormally!
2022-07-06 03:03:28 +00:00
YuQing 75e10b28f9 fastdfs.spec upgrade libfastcommon version 2022-06-21 12:47:03 +08:00
YuQing aaf77064e2 change fastdfs.spec for mkrpm.sh 2022-03-10 16:16:21 +08:00
YuQing c07fccb8a2 upgrade version to 6.0.8 2022-03-03 10:46:40 +08:00
YuQing e8c8f3d6ac use libfastcommon V1.56 2022-02-25 15:13:09 +08:00
YuQing 9d34ec5679 php_client adapt to php 8 2021-12-31 02:52:00 +08:00
YuQing b6ef9d0f25 modify After directive of systemd config files 2021-09-14 10:40:49 +08:00
YuQing 625c5eb326 correct libfastcommon version 2021-06-11 16:29:09 +08:00
YuQing 9d7342fca6
Merge pull request #498 from wswind/master
fix issue #497
2021-03-31 10:28:50 +08:00
ws d4cb69ef39 fix https://github.com/happyfish100/fastdfs/issues/497 2021-03-26 16:22:53 +08:00
YuQing 65b1f68a53 README: brief introduction of FastCFS 2021-03-20 20:49:26 +08:00
YuQing 8f661a1b64
Merge pull request #487 from Qi-Zou/patch-2
Update fdfs_storaged.service
2021-01-26 20:54:31 +08:00
ZQ 569282efc5
Update fdfs_trackerd.service
Fix the mismatch between the service name and the file name in fdfs_trackerd.service
2021-01-26 19:57:46 +08:00
ZQ 0fd2ca2d80
Update fdfs_storaged.service
Fix the mismatched service names and file names in fdfs_storaged.service and fdfs_trackerd.service
2021-01-26 19:53:40 +08:00
YuQing 255f167491 remove compile noise when gcc version >= 7 2021-01-06 12:02:32 +08:00
YuQing 01b2b399a1 add systemd service files 2021-01-06 11:54:13 +08:00
YuQing d1d3e54781 fastdfs.spec: change libfastcommon version 2020-12-31 11:18:59 +08:00
YuQing e198079a4b correct fastdfs.spec 2020-12-31 11:17:24 +08:00
YuQing 55b2eeafc1 correct spell iovent to ioevent follows libfastcommon 2020-09-30 19:41:09 +08:00
YuQing b5534f9c8f use libfastcommon V1.44 2020-09-08 16:36:29 +08:00
YuQing 4aff731fd5 fix action fetch in argv 2020-08-31 10:48:54 +08:00
YuQing fde110996b upgrade version to v6.06 2019-12-31 07:36:03 +08:00
YuQing 28f9c419a3 memset return ip address to ascii 0 for Java SDK 2019-12-30 17:51:24 +08:00
YuQing a9e593e03b bugfixed: fdfs_storaged can't quit normally 2019-12-26 21:55:22 +08:00
YuQing 9442384755 log more info when send timeout 2019-12-26 11:01:58 +08:00
YuQing 10906677a4 change init alloc size 2019-12-26 10:38:28 +08:00
YuQing a277a08281 add conditions to call storage_trunk_save 2019-12-26 09:22:19 +08:00
YuQing 4be26a52f9 fdfs_monitor code refine 2019-12-26 07:28:40 +08:00
YuQing 8c5a6b6f00 fdfs_monitor.c: do NOT call getHostnameByIp 2019-12-25 19:26:58 +08:00
YuQing a885fd23cc set all space to ascii 0 when delete trunk file 2019-12-25 17:37:51 +08:00
YuQing e6ec41ba04 static variable expect_header 2019-12-24 22:12:40 +08:00
YuQing aefb4611aa refine logging delete unused trunk files 2019-12-24 21:44:50 +08:00
YuQing ef31a31152 trunk file id printf format change from %d to %u 2019-12-24 21:15:46 +08:00
YuQing 71856858eb bugfixed: delete first merged trunk node 2019-12-23 19:05:30 +08:00
YuQing b7447e5903 support delete unused trunk files 2019-12-23 16:11:18 +08:00
YuQing 49d51e949b fix previous value in trunk_save_merged_spaces 2019-12-23 08:13:40 +08:00
YuQing 2ab095bafd bugfixed: ++ppTrunkInfo again 2019-12-22 22:21:31 +08:00
YuQing 1e56afb08d remove trunk_file_lock and use atomic add/sub 2019-12-22 17:09:37 +08:00
YuQing 513894c5a2 support merge free trunk spaces 2019-12-21 21:00:09 +08:00
YuQing 8d2a04e435 remove debug log 2019-12-20 14:46:42 +08:00
YuQing f55d8fafc8 support alignment size for trunk space allocation 2019-12-20 12:02:48 +08:00
YuQing 13ba0963a3 trunk_binlog_truncate delete trunk data file 2019-12-20 09:07:09 +08:00
YuQing 4a6f89c692 support backup binlog file when truncate trunk binlog 2019-12-19 18:38:01 +08:00
YuQing 2c5955c1fe trunk binlog compression support transaction 2019-12-18 21:16:34 +08:00
YuQing fd8772976d check trunk binlog version before compressing 2019-12-15 21:26:13 +08:00
YuQing cab3a90d7f compress the trunk binlog gracefully 2019-12-15 18:49:02 +08:00
YuQing cf0ec7e4cf trunk server support compress the trunk binlog periodically 2019-12-14 21:03:35 +08:00
YuQing a49735ae5a fdfs_trackerd and fdfs_storaged print the server version in usage 2019-12-13 10:59:07 +08:00
YuQing 24bb1e97b5 change config files 2019-12-08 10:17:16 +08:00
YuQing 322ee15cbe beautify config files 2019-12-06 08:52:08 +08:00
YuQing e0d3d44f64 sigQuitHandler: tcp_set_try_again_when_interrupt to false 2019-12-05 12:13:53 +08:00
YuQing 983a21ba51 remove recovery_init_flag_file_ex 2019-12-05 12:04:30 +08:00
YuQing a424a06cf3 should use memset to init pReader 2019-12-05 09:51:36 +08:00
YuQing 6bfb8215ff upgrade version to 6.04 2019-12-05 08:54:39 +08:00
YuQing 22824e5f07 bugfix: init pReader->binlog_buff.version/length to 0 2019-12-04 22:59:24 +08:00
YuQing 01041705ba calc hash use src_filename when it not empty 2019-12-04 20:19:43 +08:00
YuQing 856ef15ab7 fix recovery_get_global_full_filename 2019-12-04 20:00:34 +08:00
YuQing edb3f6bb4d pthread_kill alive recovery threads 2019-12-04 16:29:08 +08:00
YuQing 867dc29111 use fdfs_get_ipaddr_by_peer_ip 2019-12-04 15:59:31 +08:00
YuQing 33b539eac6 disk recovery support multi-threads to speed up 2019-12-04 10:47:32 +08:00
YuQing 46171f2c64 add parameter compress_error_log_days_before 2019-12-01 20:39:31 +08:00
YuQing 634d85eaae support compress error log and access log 2019-11-30 16:12:18 +08:00
YuQing df2fd2069b storage_report_ip_changed ignore result EEXIST 2019-11-27 20:33:56 +08:00
YuQing 80c9930f22 INSTALL file changed 2019-11-23 10:07:11 +08:00
YuQing 949f53b15d INSTALL changed and modify website name 2019-11-23 09:56:35 +08:00
YuQing ad22505fd2 change sync_log_buff_interval from 10 to 1 2019-11-22 21:04:34 +08:00
YuQing da7300bc9b change comment/information in config files and codes 2019-11-22 09:26:47 +08:00
YuQing aeb171dca7 change .gitignore 2019-11-22 07:54:16 +08:00
YuQing 7455191fa3
Merge pull request #279 from retamia/master
fix: php_client compile error on macOS
2019-11-22 07:40:02 +08:00
YuQing 2f94d24e8f
Merge pull request #348 from SaintKayLuk/master
Small bug in Dockerfile
2019-11-22 07:26:57 +08:00
YuQing add02e7348 a little change 2019-11-20 17:02:21 +08:00
YuQing 9a29048ae5 larger network_timeout for fetching one-store-path binlog 2019-11-20 08:38:38 +08:00
YuQing 5a6acbdff8 remove debug info 2019-11-19 11:03:55 +08:00
YuQing 0551999135 storage.conf add parameter check_store_path_mark 2019-11-19 10:51:16 +08:00
YuQing 5557429899 log more info when ping tracker leader fail 2019-11-19 09:17:55 +08:00
YuQing e4c2644db2 change sleep seconds when ping tracker leader fail 2019-11-19 08:49:08 +08:00
YuQing 358fff4ac8 new selected tracker leader do NOT notify self by network 2019-11-18 22:30:10 +08:00
YuQing afff529a9b skip status FDFS_STORAGE_STATUS_DELETED 2019-11-18 18:34:22 +08:00
YuQing 132dbc0950 change comments in the config files 2019-11-18 08:41:49 +08:00
YuQing 4ed56c5f69 check store path's mark file to prevent confusion 2019-11-17 22:19:29 +08:00
YuQing af4f0754e4 code refine: extend struct FDFSStorePathInfo 2019-11-17 19:21:55 +08:00
YuQing 1ac2ced873 upgrade version to v6.03 2019-11-16 11:34:50 +08:00
YuQing 3b1045268e add func fdfs_set_server_info_index1 2019-11-16 11:30:20 +08:00
YuQing cb24cd82e1 storage server write to data_init_flag and mark file safely 2019-11-16 10:53:46 +08:00
YuQing 9dc6742b1e bugfix: fdfs_monitor fix get index of the specified tracker server 2019-11-16 09:25:57 +08:00
YuQing 017fff46f3 set my_status in storage join response 2019-11-15 15:05:11 +08:00
YuQing 22865e0542 storage server request tracker server to change its status 2019-11-15 13:19:26 +08:00
YuQing 6ea2f5e1ca my_status change to my_result 2019-11-15 08:39:59 +08:00
YuQing afc4fa2346 code style little adjust 2019-11-15 08:19:36 +08:00
YuQing 41855a4247 get_ipaddr_by_peer_ip refined 2019-11-14 21:02:02 +08:00
YuQing 21c52cf406 dual IPs support two different types of inner (intranet) IPs 2019-11-14 19:19:11 +08:00
saintkay 52ac538a71
Update Dockerfile
Add the CentOS version; change gcc-c ++ to gcc-c++
2019-11-13 17:22:45 +08:00
saintkay 86cd69ed2d
Update Dockerfile
Fix a small bug
2019-11-13 17:15:22 +08:00
YuQing 6712843a80 upgrade version to V6.0.2 2019-11-13 16:55:11 +08:00
YuQing ec34a1f844 fdfs_file_info.c change crc32 output format 2019-11-13 16:48:49 +08:00
YuQing 9bc762bffb sync regenerated appender file 2019-11-12 19:07:49 +08:00
YuQing 6fb8fe206b modify php test for regenerate_appender_filename 2019-11-12 10:25:53 +08:00
YuQing c36419d5bb php ext support regenerate filename for appender file 2019-11-11 22:55:51 +08:00
YuQing 9c0bbce9df support regenerate filename for appender file 2019-11-10 20:38:36 +08:00
YuQing 9f0a914c93 recovery download file to local temp file then rename 2019-11-08 20:41:32 +08:00
YuQing cdb180ae32 get_file_info calculate CRC32 for appender file type 2019-11-06 19:33:19 +08:00
YuQing 57d2d815c6 add README_zh.md 2019-11-05 20:30:54 +08:00
YuQing ee48562fa5 set delay_seconds to 0 when delay_seconds < 0 2019-11-03 21:14:09 +08:00
YuQing 1a546865ac client/test OK 2019-10-29 21:25:22 +08:00
YuQing 9cb1182776 correct Wechat public account 2019-10-26 10:03:44 +08:00
YuQing 0ed18812f2 add Wechat public account description 2019-10-26 09:57:10 +08:00
YuQing fc8c6f8ebc log more info when recv timeout 2019-10-25 15:58:55 +08:00
YuQing 1943f3d49a upgrade version to V6.0.1 2019-10-25 14:49:51 +08:00
YuQing 6b1f5e0cca bugfix: must check and create data path 2019-10-25 11:58:20 +08:00
YuQing ecca01766a small change for config files 2019-10-24 10:59:59 +08:00
YuQing 461e78ca30 remove debug info 2019-10-23 16:50:40 +08:00
YuQing 77da832e05 compress and uncompress binlog file by gzip when need 2019-10-23 14:56:28 +08:00
YuQing 9d2db48f31 make.sh fix TARGET_PREFIX 2019-10-21 19:53:23 +08:00
YuQing 5eb02fd01f correct spell for tracker 2019-10-21 15:06:18 +08:00
retamia e39e96eaad fix: php_client compile error on macOS 2019-04-16 10:01:52 +08:00
Michael McConville d735e4b8ef mmap(2) returns MAP_FAILED, not NULL, not failure 2017-12-02 14:44:30 -07:00
201 changed files with 21324 additions and 11591 deletions

.gitignore (new file)

@@ -0,0 +1,108 @@
# Makefile.in
storage/Makefile
tracker/Makefile
client/test/Makefile
client/Makefile
# client/fdfs_link_library.sh.in
client/fdfs_link_library.sh
# Compiled Object files
*.slo
*.lo
*.o
*.obj
# Precompiled Headers
*.gch
*.pch
# Compiled Dynamic libraries
*.so
*.dylib
*.dSYM
*.dll
# Fortran module files
*.mod
*.smod
# Compiled Static libraries
*.lai
*.la
*.a
*.lib
# Executables
*.exe
*.out
*.app
client/fdfs_append_file
client/fdfs_appender_test
client/fdfs_appender_test1
client/fdfs_crc32
client/fdfs_delete_file
client/fdfs_download_file
client/fdfs_file_info
client/fdfs_monitor
client/fdfs_test
client/fdfs_test1
client/fdfs_upload_appender
client/fdfs_upload_file
client/fdfs_regenerate_filename
client/test/fdfs_monitor
client/test/fdfs_test
client/test/fdfs_test1
storage/fdfs_storaged
tracker/fdfs_trackerd
test/combine_result
test/100M
test/10M
test/1M
test/200K
test/50K
test/5K
test/gen_files
test/test_delete
test/test_download
test/test_upload
test/upload/
test/download/
test/delete/
# other
php_client/.deps
php_client/.libs/
php_client/Makefile
php_client/Makefile.fragments
php_client/Makefile.global
php_client/Makefile.objects
php_client/acinclude.m4
php_client/aclocal.m4
php_client/autom4te.cache/
php_client/build/
php_client/config.guess
php_client/config.h
php_client/config.h.in
php_client/config.log
php_client/config.nice
php_client/config.status
php_client/config.sub
php_client/configure
php_client/configure.ac
php_client/install-sh
php_client/libtool
php_client/ltmain.sh
php_client/missing
php_client/mkinstalldirs
php_client/run-tests.php
# fastdfs runtime paths
data/
logs/
# others
*.pid
*.swp
*.swo

HISTORY

@@ -1,7 +1,138 @@
Version 6.12.2 2024-09-16
* use libfastcommon V1.75 and libserverframe 1.2.5
Version 6.12.1 2024-03-06
* adapt to libserverframe 1.2.3
* bugfixed: notify_leader_changed support IPv6 correctly
* log square quoted IPv6 address
Version 6.12.0 2024-02-12
* bugfixed: parse ip and port use parseAddress instead of splitEx
* bugfixed: fdfs_server_info_to_string support IPv6 correctly
* check filename duplicate by hashtable instead of file system access
Version 6.11.0 2023-12-10
* support IPv6, config item: address_family in tracker.conf and storage.conf
use libfastcommon V1.71 and libserverframe 1.2.1
* storage.conf can specify the storage server ID for NAT network
Version 6.10.0 2023-09-07
* use libfastcommon V1.70 and libserverframe 1.2.0
Version 6.9.5 2023-06-05
* fix possible out-of-bounds issues with array access
* fix realloc mistakes to avoid memory leaks
* add ExecStartPost=/bin/sleep 0.1 to systemd service files
* fdht_client/fdht_func.c: fixed compile error
Version 6.9.4 2023-02-15
* use epoll edge trigger to resolve github issues #608
* bugfixed: report connections' current_count and max_count correctly
Version 6.9.3 2022-12-24
* use prctl to set pthread name under Linux
Version 6.9.2 2022-11-28
* space size such as total_mb and free_mb use int64_t instead of int
* bugfixed: log connection ip_addr and port correctly
* output port with format %u instead %d
Version 6.9.1 2022-11-25
* bugfixed: clear task extra data correctly when the connection broken
Version 6.09 2022-09-14
* use libfastcommon V1.60 and libserverframe 1.1.19
* use atomic counter instead of mutex lock
Version 6.08 2022-06-21
* use libfastcommon V1.56
NOTE: you MUST upgrade libfastcommon to V1.56 or later
Version 6.07 2020-12-31
* use libfastcommon V1.44
NOTE: you MUST upgrade libfastcommon to V1.44 or later
* correct spell iovent to ioevent follows libfastcommon
Version 6.06 2019-12-30
* bugfixed: fdfs_storaged can't quit normally
* bugfixed: init/memset return ip address to ascii 0 for Java SDK
Version 6.05 2019-12-25
* fdfs_trackerd and fdfs_storaged print the server version in usage.
you can execute fdfs_trackerd or fdfs_storaged without parameters
to show the server version
* trunk server support compress the trunk binlog periodically,
the config items in tracker.conf: trunk_compress_binlog_interval
and trunk_compress_binlog_time_base
* trunk binlog compression support transaction
* support backup binlog file when truncate trunk binlog,
the config item in tracker.conf: trunk_binlog_max_backups
* support alignment size for trunk space allocation
the config item in tracker.conf: trunk_alloc_alignment_size
* support merge free trunk spaces
the config item in tracker.conf: trunk_free_space_merge
* support delete unused trunk files
the config item in tracker.conf: delete_unused_trunk_files
* fdfs_monitor.c: do NOT call getHostnameByIp
NOTE: you MUST upgrade libfastcommon to V1.43 or later
Version 6.04 2019-12-05
* storage_report_ip_changed ignore result EEXIST
* use get_gzip_command_filename from libfastcommon v1.42
* support compress error log and access log
* disk recovery support multi-threads to speed up
* bugfix: should use memset to init pReader in function
storage_reader_init, this bug is caused by v6.01
NOTE: you MUST upgrade libfastcommon to V1.42 or later
Version 6.03 2019-11-20
* dual IPs support two different types of inner (intranet) IPs
* storage server request tracker server to change its status
to that of tracker leader when the storage server finds
its status inconsistent
* bugfix: fdfs_monitor fix get index of the specified tracker server
* storage server write to data_init_flag and mark file safely
(write to temp file then rename)
* code refine: combine g_fdfs_store_paths and g_path_space_list,
and extend struct FDFSStorePathInfo
* check store path's mark file to prevent confusion
* new selected tracker leader do NOT notify self by network
* larger network_timeout for fetching one-store-path binlog
when disk recovery
NOTE: the tracker and storage server must upgrade together
Version 6.02 2019-11-12
* get_file_info calculate CRC32 for appender file type
* disk recovery download file to local temp file then rename it
when the local file exists
* support regenerate filename for appender file
NOTE: the regenerated file will be a normal file!
Version 6.01 2019-10-25
* compress and uncompress binlog file by gzip when needed,
config items in storage.conf: compress_binlog and compress_binlog_time
* bugfix: must check and create data path before write_to_pid_file
in fdfs_storaged.c
Version 6.00 2019-10-16
* tracker and storage server support dual IPs
1. you can config dual trackr IPs in storage.conf and client.conf,
1. you can config dual tracker IPs in storage.conf and client.conf,
the configuration item name is "tracker_server"
2. you can config dual storage IPs in storage_ids.conf
more detail please see the config files.
@@ -14,6 +145,7 @@ Version 6.00 2019-10-16
* tracker server check tracker list when storage server join
* use socketCreateExAuto and socketClientExAuto exported by libfastcommon
Version 5.12 2018-06-07
* code refine for rare case
* replace print format OFF_PRINTF_FORMAT to PRId64

INSTALL

@@ -3,50 +3,77 @@ Copy right 2009 Happy Fish / YuQing
FastDFS may be copied only under the terms of the GNU General
Public License V3, which may be found in the FastDFS source kit.
Please visit the FastDFS Home Page for more detail.
English language: http://english.csource.org/
Chinese language: http://www.csource.org/
Chinese language: http://www.fastken.com/
#step 1. download libfastcommon source package from github and install it,
the github address:
https://github.com/happyfish100/libfastcommon.git
# step 1. download libfastcommon source codes and install it,
# github address: https://github.com/happyfish100/libfastcommon.git
# gitee address: https://gitee.com/fastdfs100/libfastcommon.git
# command lines as:
#step 2. download FastDFS source package and unpack it,
tar xzf FastDFS_v5.x.tar.gz
#for example:
tar xzf FastDFS_v5.08.tar.gz
git clone https://github.com/happyfish100/libfastcommon.git
cd libfastcommon; git checkout V1.0.75
./make.sh clean && ./make.sh && ./make.sh install
#step 3. enter the FastDFS dir
cd FastDFS
#step 4. execute:
./make.sh
# step 2. download libserverframe source codes and install it,
# github address: https://github.com/happyfish100/libserverframe.git
# gitee address: https://gitee.com/fastdfs100/libserverframe.git
# command lines as:
#step 5. make install
./make.sh install
git clone https://github.com/happyfish100/libserverframe.git
cd libserverframe; git checkout V1.2.5
./make.sh clean && ./make.sh && ./make.sh install
#step 6. edit/modify the config file of tracker and storage
# step 3. download fastdfs source codes and install it,
# github address: https://github.com/happyfish100/fastdfs.git
# gitee address: https://gitee.com/fastdfs100/fastdfs.git
# command lines as:
#step 7. run server programs
#start the tracker server:
git clone https://github.com/happyfish100/fastdfs.git
cd fastdfs; git checkout V6.12.2
./make.sh clean && ./make.sh && ./make.sh install
# step 4. setup the config files
# the setup script does NOT overwrite existing config files,
# please feel free to execute this script (take easy :)
./setup.sh /etc/fdfs
# step 5. edit or modify the config files of tracker, storage and client
such as:
vi /etc/fdfs/tracker.conf
vi /etc/fdfs/storage.conf
vi /etc/fdfs/client.conf
and so on ...
# step 6. run the server programs
# start the tracker server:
/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart
#in Linux, you can start fdfs_trackerd as a service:
/sbin/service fdfs_trackerd start
#start the storage server:
# start the storage server:
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf restart
#in Linux, you can start fdfs_storaged as a service:
/sbin/service fdfs_storaged start
#step 8. run test program
#run the client test program:
# (optional) in Linux, you can start fdfs_trackerd and fdfs_storaged as a service:
/sbin/service fdfs_trackerd restart
/sbin/service fdfs_storaged restart
# step 7. (optional) run monitor program
# such as:
/usr/bin/fdfs_monitor /etc/fdfs/client.conf
# step 8. (optional) run the test program
# such as:
/usr/bin/fdfs_test <client_conf_filename> <operation>
/usr/bin/fdfs_test1 <client_conf_filename> <operation>
#for example, upload a file:
/usr/bin/fdfs_test conf/client.conf upload /usr/include/stdlib.h
#step 9. run monitor program
#run the monitor program:
/usr/bin/fdfs_monitor <client_conf_filename>
# for example, upload a file for test:
/usr/bin/fdfs_test /etc/fdfs/client.conf upload /usr/include/stdlib.h
tracker server config file sample please see conf/tracker.conf
@@ -55,7 +82,6 @@ storage server config file sample please see conf/storage.conf
client config file sample please see conf/client.conf
Item detail
1. server common items
---------------------------------------------------

README.md

@@ -3,11 +3,10 @@ Copyright (C) 2008 Happy Fish / YuQing
FastDFS may be copied only under the terms of the GNU General
Public License V3, which may be found in the FastDFS source kit.
Please visit the FastDFS Home Page for more detail.
English language: http://english.csource.org/
Chinese language: http://www.csource.org/
Chinese language: http://www.fastken.com/
FastDFS is an open source high performance distributed file system. It's major
FastDFS is an open source high performance distributed file system. Its major
functions include: file storing, file syncing and file accessing (file uploading
and file downloading), and it can resolve the high capacity and load balancing
problem. FastDFS should meet the requirement of the website whose service based
@@ -42,3 +41,15 @@ The identification of a file is composed of two parts: the volume name and
the file name.
Client test code use client library please refer to the directory: client/test.
For more FastDFS related articles, please subscribe to the WeChat/Weixin public account
(Chinese language): fastdfs
FastDFS is a lightweight object storage solution. If you need a general distributed
file system for databases, K8s and virtual machines (such as KVM), you can learn about
[FastCFS](https://github.com/happyfish100/FastCFS) which achieves strong data consistency
and high performance.
We provide technical support service and customized development. Welcome to contact us via WeChat or email.
email: 384681(at)qq(dot)com

README_zh.md (new file)

@@ -0,0 +1,28 @@
FastDFS is an open source distributed file system. Its major functions include file storing, file syncing and file accessing (file uploading and file downloading), solving the problems of high-capacity storage and high-performance access. FastDFS is especially suitable for file-centric online services, such as those serving photos, videos and documents.
FastDFS is a lightweight distributed file system: version V6.01 is about 63,000 lines of code. FastDFS is implemented in C and supports UNIX-like systems such as Linux, FreeBSD and MacOS. Like Google FS, FastDFS is an application-level file system rather than a general-purpose one; it can only be accessed through its dedicated API, and currently provides a C client, a Java SDK and a PHP extension SDK.
FastDFS is tailor-made for Internet applications, solving large-capacity file storage with high performance and high scalability. FastDFS can be viewed as a file-based key-value storage system where the key is the file ID and the value is the file itself, so calling it a distributed file storage service is more fitting.
The architecture of FastDFS is fairly simple, as shown below:
![architect](images/architect.png)
```
FastDFS highlights:
1) grouped storage, simple and flexible;
2) peer-to-peer structure, no single point of failure;
3) file IDs are generated by FastDFS and serve as the file access credential; FastDFS needs no traditional name server or meta server;
4) large, medium and small files are all well supported, including massive numbers of small files;
5) one storage server supports multiple disks, with single-disk data recovery;
6) an nginx extension module is provided for seamless integration with nginx;
7) multi-threaded file upload and download, with resumable transfer;
8) files on the storage server can carry additional attributes.
```
For more detailed descriptions of FastDFS features, please read the other articles on the FastDFS WeChat public account (search for: fastdfs).
FastDFS is a lightweight object storage solution. If you need a general distributed file system for databases, K8s and virtual machines (such as KVM), you can learn about [FastCFS](https://gitee.com/fastdfs100/FastCFS), which achieves strong data consistency and high performance.
We provide technical support service and customized development. Welcome to contact us via WeChat or email.
email: 384681(at)qq(dot)com


@@ -4,9 +4,9 @@ COMPILE = $(CC) $(CFLAGS)
ENABLE_STATIC_LIB = $(ENABLE_STATIC_LIB)
ENABLE_SHARED_LIB = $(ENABLE_SHARED_LIB)
INC_PATH = -I../common -I../tracker -I/usr/include/fastcommon
LIB_PATH = $(LIBS) -lfastcommon
LIB_PATH = $(LIBS) -lfastcommon -lserverframe
TARGET_PATH = $(TARGET_PREFIX)/bin
TARGET_LIB = $(TARGET_PREFIX)/lib64
TARGET_LIB = $(TARGET_PREFIX)/$(LIB_VERSION)
TARGET_INC = $(TARGET_PREFIX)/include
CONFIG_PATH = $(TARGET_CONF_PATH)
@@ -39,7 +39,7 @@ ALL_OBJS = $(STATIC_OBJS) $(FDFS_SHARED_OBJS)
ALL_PRGS = fdfs_monitor fdfs_test fdfs_test1 fdfs_crc32 fdfs_upload_file \
fdfs_download_file fdfs_delete_file fdfs_file_info \
fdfs_appender_test fdfs_appender_test1 fdfs_append_file \
fdfs_upload_appender
fdfs_upload_appender fdfs_regenerate_filename
STATIC_LIBS = libfdfsclient.a
@@ -49,6 +49,7 @@ CLIENT_SHARED_LIBS = libfdfsclient.so
ALL_LIBS = $(STATIC_LIBS) $(SHARED_LIBS)
all: $(ALL_OBJS) $(ALL_PRGS) $(ALL_LIBS)
libfdfsclient.so:
$(COMPILE) -o $@ $< -shared $(FDFS_SHARED_OBJS) $(LIB_PATH)
libfdfsclient.a:
@@ -67,12 +68,12 @@ install:
mkdir -p $(TARGET_LIB)
mkdir -p $(TARGET_PREFIX)/lib
cp -f $(ALL_PRGS) $(TARGET_PATH)
if [ $(ENABLE_STATIC_LIB) -eq 1 ]; then cp -f $(STATIC_LIBS) $(TARGET_LIB); cp -f $(STATIC_LIBS) $(TARGET_PREFIX)/lib/;fi
if [ $(ENABLE_SHARED_LIB) -eq 1 ]; then cp -f $(CLIENT_SHARED_LIBS) $(TARGET_LIB); cp -f $(CLIENT_SHARED_LIBS) $(TARGET_PREFIX)/lib/;fi
if [ $(ENABLE_STATIC_LIB) -eq 1 ]; then cp -f $(STATIC_LIBS) $(TARGET_LIB); cp -f $(STATIC_LIBS) $(TARGET_PREFIX)/lib/; fi
if [ $(ENABLE_SHARED_LIB) -eq 1 ]; then cp -f $(CLIENT_SHARED_LIBS) $(TARGET_LIB); cp -f $(CLIENT_SHARED_LIBS) $(TARGET_PREFIX)/lib/; fi
mkdir -p $(TARGET_INC)/fastdfs
cp -f $(FDFS_HEADER_FILES) $(TARGET_INC)/fastdfs
if [ ! -f $(CONFIG_PATH)/client.conf.sample ]; then cp -f ../conf/client.conf $(CONFIG_PATH)/client.conf.sample; fi
if [ ! -f $(CONFIG_PATH)/client.conf ]; then cp -f ../conf/client.conf $(CONFIG_PATH)/client.conf; fi
clean:
rm -f $(ALL_OBJS) $(ALL_PRGS) $(ALL_LIBS)


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
//client_func.c
@@ -134,17 +134,18 @@ static int copy_tracker_servers(TrackerServerGroup *pTrackerGroup,
}
}
/*
/*
{
TrackerServerInfo *pServer;
for (pServer=pTrackerGroup->servers; pServer<pTrackerGroup->servers+ \
char formatted_ip[FORMATTED_IP_SIZE];
for (pServer=pTrackerGroup->servers; pServer<pTrackerGroup->servers+
pTrackerGroup->server_count; pServer++)
{
//printf("server=%s:%d\n", \
pServer->ip_addr, pServer->port);
format_ip_address(pServer->connections[0].ip_addr, formatted_ip);
printf("server=%s:%u\n", formatted_ip, pServer->connections[0].port);
}
}
*/
*/
return 0;
}
@@ -241,7 +242,7 @@ static int fdfs_get_params_from_tracker(bool *use_storage_id)
continue_flag = false;
if ((result=fdfs_get_ini_context_from_tracker(&g_tracker_group,
&iniContext, &continue_flag, false, NULL)) != 0)
&iniContext, &continue_flag)) != 0)
{
return result;
}
@@ -270,41 +271,41 @@ static int fdfs_client_do_init_ex(TrackerServerGroup *pTrackerGroup, \
pBasePath = iniGetStrValue(NULL, "base_path", iniContext);
if (pBasePath == NULL)
{
strcpy(g_fdfs_base_path, "/tmp");
strcpy(SF_G_BASE_PATH_STR, "/tmp");
}
else
{
snprintf(g_fdfs_base_path, sizeof(g_fdfs_base_path),
snprintf(SF_G_BASE_PATH_STR, sizeof(SF_G_BASE_PATH_STR),
"%s", pBasePath);
chopPath(g_fdfs_base_path);
if (!fileExists(g_fdfs_base_path))
chopPath(SF_G_BASE_PATH_STR);
if (!fileExists(SF_G_BASE_PATH_STR))
{
logError("file: "__FILE__", line: %d, " \
"\"%s\" can't be accessed, error info: %s", \
__LINE__, g_fdfs_base_path, STRERROR(errno));
__LINE__, SF_G_BASE_PATH_STR, STRERROR(errno));
return errno != 0 ? errno : ENOENT;
}
if (!isDir(g_fdfs_base_path))
if (!isDir(SF_G_BASE_PATH_STR))
{
logError("file: "__FILE__", line: %d, " \
"\"%s\" is not a directory!", \
__LINE__, g_fdfs_base_path);
__LINE__, SF_G_BASE_PATH_STR);
return ENOTDIR;
}
}
g_fdfs_connect_timeout = iniGetIntValue(NULL, "connect_timeout", \
SF_G_CONNECT_TIMEOUT = iniGetIntValue(NULL, "connect_timeout", \
iniContext, DEFAULT_CONNECT_TIMEOUT);
if (g_fdfs_connect_timeout <= 0)
if (SF_G_CONNECT_TIMEOUT <= 0)
{
g_fdfs_connect_timeout = DEFAULT_CONNECT_TIMEOUT;
SF_G_CONNECT_TIMEOUT = DEFAULT_CONNECT_TIMEOUT;
}
g_fdfs_network_timeout = iniGetIntValue(NULL, "network_timeout", \
SF_G_NETWORK_TIMEOUT = iniGetIntValue(NULL, "network_timeout", \
iniContext, DEFAULT_NETWORK_TIMEOUT);
if (g_fdfs_network_timeout <= 0)
if (SF_G_NETWORK_TIMEOUT <= 0)
{
g_fdfs_network_timeout = DEFAULT_NETWORK_TIMEOUT;
SF_G_NETWORK_TIMEOUT = DEFAULT_NETWORK_TIMEOUT;
}
if ((result=fdfs_load_tracker_group_ex(pTrackerGroup, \
@@ -348,39 +349,77 @@ static int fdfs_client_do_init_ex(TrackerServerGroup *pTrackerGroup, \
return result;
}
load_fdfs_parameters_from_tracker = iniGetBoolValue(NULL, \
"load_fdfs_parameters_from_tracker", \
load_fdfs_parameters_from_tracker = iniGetBoolValue(NULL,
"load_fdfs_parameters_from_tracker",
iniContext, false);
if (load_fdfs_parameters_from_tracker)
{
fdfs_get_params_from_tracker(&use_storage_id);
if ((result=fdfs_get_params_from_tracker(&use_storage_id)) != 0)
{
return result;
}
}
else
{
use_storage_id = iniGetBoolValue(NULL, "use_storage_id", \
iniContext, false);
if (use_storage_id)
{
result = fdfs_load_storage_ids_from_file( \
conf_filename, iniContext);
}
}
{
use_storage_id = iniGetBoolValue(NULL, "use_storage_id",
iniContext, false);
if (use_storage_id)
{
if ((result=fdfs_load_storage_ids_from_file(
conf_filename, iniContext)) != 0)
{
return result;
}
}
}
if (use_storage_id)
{
FDFSStorageIdInfo *idInfo;
FDFSStorageIdInfo *end;
char *connect_first_by;
end = g_storage_ids_by_id.ids + g_storage_ids_by_id.count;
for (idInfo=g_storage_ids_by_id.ids; idInfo<end; idInfo++)
{
if (idInfo->ip_addrs.count > 1)
{
g_multi_storage_ips = true;
break;
}
}
if (g_multi_storage_ips)
{
connect_first_by = iniGetStrValue(NULL,
"connect_first_by", iniContext);
if (connect_first_by != NULL && strncasecmp(connect_first_by,
"last", 4) == 0)
{
g_connect_first_by = fdfs_connect_first_by_last_connected;
}
}
}
#ifdef DEBUG_FLAG
logDebug("base_path=%s, " \
"connect_timeout=%d, "\
"network_timeout=%d, "\
"tracker_server_count=%d, " \
"anti_steal_token=%d, " \
"anti_steal_secret_key length=%d, " \
"use_connection_pool=%d, " \
"g_connection_pool_max_idle_time=%ds, " \
"use_storage_id=%d, storage server id count: %d\n", \
g_fdfs_base_path, g_fdfs_connect_timeout, \
g_fdfs_network_timeout, pTrackerGroup->server_count, \
g_anti_steal_token, g_anti_steal_secret_key.length, \
g_use_connection_pool, g_connection_pool_max_idle_time, \
use_storage_id, g_storage_ids_by_id.count);
logDebug("base_path=%s, "
"connect_timeout=%d, "
"network_timeout=%d, "
"tracker_server_count=%d, "
"anti_steal_token=%d, "
"anti_steal_secret_key length=%d, "
"use_connection_pool=%d, "
"g_connection_pool_max_idle_time=%ds, "
"use_storage_id=%d, connect_first_by=%s, "
"storage server id count: %d, "
"multi storage ips: %d\n",
SF_G_BASE_PATH_STR, SF_G_CONNECT_TIMEOUT,
SF_G_NETWORK_TIMEOUT, pTrackerGroup->server_count,
g_anti_steal_token, g_anti_steal_secret_key.length,
g_use_connection_pool, g_connection_pool_max_idle_time,
use_storage_id, g_connect_first_by == fdfs_connect_first_by_tracker ?
"tracker" : "last-connected", g_storage_ids_by_id.count,
g_multi_storage_ips);
#endif
return 0;


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
//client_func.h
@@ -16,6 +16,8 @@
#define _CLIENT_FUNC_H_
typedef struct {
short file_type;
bool get_from_server;
time_t create_timestamp;
int crc32;
int source_id; //source storage id


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdlib.h>
@@ -13,6 +13,7 @@
int g_tracker_server_http_port = 80;
TrackerServerGroup g_tracker_group = {0, 0, -1, NULL};
bool g_multi_storage_ips = false;
FDFSConnectFirstBy g_connect_first_by = fdfs_connect_first_by_tracker;
bool g_anti_steal_token = false;
BufferInfo g_anti_steal_secret_key = {0};


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
//client_global.h
@@ -15,6 +15,11 @@
#include "tracker_types.h"
#include "fdfs_shared_func.h"
typedef enum {
fdfs_connect_first_by_tracker,
fdfs_connect_first_by_last_connected
} FDFSConnectFirstBy;
#ifdef __cplusplus
extern "C" {
#endif
@@ -22,6 +27,8 @@ extern "C" {
extern int g_tracker_server_http_port;
extern TrackerServerGroup g_tracker_group;
extern bool g_multi_storage_ips;
extern FDFSConnectFirstBy g_connect_first_by;
extern bool g_anti_steal_token;
extern BufferInfo g_anti_steal_secret_key;


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdio.h>


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdio.h>
@@ -48,7 +48,7 @@ int uploadFileCallback(void *arg, const int64_t file_size, int sock)
filename = (char *)arg;
return tcpsendfile(sock, filename, file_size, \
g_fdfs_network_timeout, &total_send_bytes);
SF_G_NETWORK_TIMEOUT, &total_send_bytes);
}
int main(int argc, char *argv[])
@@ -79,13 +79,13 @@ int main(int argc, char *argv[])
const char *file_ext_name;
struct stat stat_buf;
printf("This is FastDFS client test program v%d.%02d\n" \
printf("This is FastDFS client test program v%d.%d.%d\n" \
"\nCopyright (C) 2008, Happy Fish / YuQing\n" \
"\nFastDFS may be copied only under the terms of the GNU General\n" \
"Public License V3, which may be found in the FastDFS source kit.\n" \
"Please visit the FastDFS Home Page http://www.csource.org/ \n" \
"for more detail.\n\n" \
, g_fdfs_version.major, g_fdfs_version.minor);
"Please visit the FastDFS Home Page http://www.fastken.com/ \n" \
"for more detail.\n\n", g_fdfs_version.major, g_fdfs_version.minor,
g_fdfs_version.patch);
if (argc < 3)
{


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdio.h>
@@ -48,7 +48,7 @@ int uploadFileCallback(void *arg, const int64_t file_size, int sock)
filename = (char *)arg;
return tcpsendfile(sock, filename, file_size, \
g_fdfs_network_timeout, &total_send_bytes);
SF_G_NETWORK_TIMEOUT, &total_send_bytes);
}
int main(int argc, char *argv[])
@@ -78,13 +78,13 @@ int main(int argc, char *argv[])
const char *file_ext_name;
struct stat stat_buf;
printf("This is FastDFS client test program v%d.%02d\n" \
printf("This is FastDFS client test program v%d.%d.%d\n" \
"\nCopyright (C) 2008, Happy Fish / YuQing\n" \
"\nFastDFS may be copied only under the terms of the GNU General\n" \
"Public License V3, which may be found in the FastDFS source kit.\n" \
"Please visit the FastDFS Home Page http://www.csource.org/ \n" \
"for more detail.\n\n" \
, g_fdfs_version.major, g_fdfs_version.minor);
"Please visit the FastDFS Home Page http://www.fastken.com/ \n" \
"for more detail.\n\n", g_fdfs_version.major, g_fdfs_version.minor,
g_fdfs_version.patch);
if (argc < 3)
{


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#ifndef FDFS_CLIENT_H


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdio.h>


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdio.h>


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdio.h>


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdio.h>
@@ -19,6 +19,7 @@
int main(int argc, char *argv[])
{
char *conf_filename;
const char *file_type_str;
char file_id[128];
int result;
FDFSFileInfo file_info;
@@ -44,7 +45,7 @@ int main(int argc, char *argv[])
result = fdfs_get_file_info_ex1(file_id, true, &file_info);
if (result != 0)
{
printf("query file info fail, " \
fprintf(stderr, "query file info fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
}
@@ -52,6 +53,25 @@
{
char szDatetime[32];
switch (file_info.file_type)
{
case FDFS_FILE_TYPE_NORMAL:
file_type_str = "normal";
break;
case FDFS_FILE_TYPE_SLAVE:
file_type_str = "slave";
break;
case FDFS_FILE_TYPE_APPENDER:
file_type_str = "appender";
break;
default:
file_type_str = "unknown";
break;
}
printf("GET FROM SERVER: %s\n\n",
file_info.get_from_server ? "true" : "false");
printf("file type: %s\n", file_type_str);
printf("source storage id: %d\n", file_info.source_id);
printf("source ip address: %s\n", file_info.source_ip_addr);
printf("file create timestamp: %s\n", formatDatetime(
@@ -59,7 +79,7 @@ int main(int argc, char *argv[])
szDatetime, sizeof(szDatetime)));
printf("file size: %"PRId64"\n", \
file_info.file_size);
printf("file crc32: %u (0x%08X)\n", \
printf("file crc32: %d (0x%08x)\n", \
file_info.crc32, file_info.crc32);
}


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdio.h>
@@ -35,12 +35,13 @@ static void usage(char *argv[])
int main(int argc, char *argv[])
{
char formatted_ip[FORMATTED_IP_SIZE];
char *conf_filename;
int result;
char *op_type;
char *group_name;
char *tracker_server;
int arg_index;
char *group_name;
int result;
if (argc < 2)
{
@@ -96,7 +97,7 @@ int main(int argc, char *argv[])
}
log_init();
g_log_context.log_level = LOG_DEBUG;
//g_log_context.log_level = LOG_DEBUG;
ignore_signal_pipe();
if ((result=fdfs_client_init(conf_filename)) != 0)
@@ -131,9 +132,9 @@ int main(int argc, char *argv[])
for (i=0; i<g_tracker_group.server_count; i++)
{
if (fdfs_server_contain1(g_tracker_group.servers + i,
&conn) == 0)
if (fdfs_server_contain1(g_tracker_group.servers + i, &conn))
{
fdfs_set_server_info_index1(g_tracker_group.servers + i, &conn);
g_tracker_group.server_index = i;
break;
}
@@ -155,7 +156,9 @@ int main(int argc, char *argv[])
fdfs_client_destroy();
return errno != 0 ? errno : ECONNREFUSED;
}
printf("\ntracker server is %s:%d\n\n", pTrackerServer->ip_addr, pTrackerServer->port);
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
printf("\ntracker server is %s:%u\n\n", formatted_ip,
pTrackerServer->port);
if (arg_index < argc)
{
@@ -277,32 +280,35 @@ static int list_storages(FDFSGroupStat *pGroupStat)
char szSyncedDelaySeconds[128];
char szHostname[128];
char szHostnamePrompt[128+8];
char szDiskTotalSpace[32];
char szDiskFreeSpace[32];
char szTrunkSpace[32];
int k;
int max_last_source_update;
printf( "group name = %s\n" \
"disk total space = %"PRId64" MB\n" \
"disk free space = %"PRId64" MB\n" \
"trunk free space = %"PRId64" MB\n" \
"storage server count = %d\n" \
"active server count = %d\n" \
"storage server port = %d\n" \
"storage HTTP port = %d\n" \
"store path count = %d\n" \
"subdir count per path = %d\n" \
"current write server index = %d\n" \
"current trunk file id = %d\n\n", \
pGroupStat->group_name, \
pGroupStat->total_mb, \
pGroupStat->free_mb, \
pGroupStat->trunk_free_mb, \
pGroupStat->count, \
pGroupStat->active_count, \
pGroupStat->storage_port, \
pGroupStat->storage_http_port, \
pGroupStat->store_path_count, \
pGroupStat->subdir_count_per_path, \
pGroupStat->current_write_server, \
printf( "group name = %s\n"
"disk total space = %s MB\n"
"disk free space = %s MB\n"
"trunk free space = %s MB\n"
"storage server count = %d\n"
"active server count = %d\n"
"storage server port = %d\n"
"storage HTTP port = %d\n"
"store path count = %d\n"
"subdir count per path = %d\n"
"current write server index = %d\n"
"current trunk file id = %d\n\n",
pGroupStat->group_name,
long_to_comma_str(pGroupStat->total_mb, szDiskTotalSpace),
long_to_comma_str(pGroupStat->free_mb, szDiskFreeSpace),
long_to_comma_str(pGroupStat->trunk_free_mb, szTrunkSpace),
pGroupStat->count,
pGroupStat->active_count,
pGroupStat->storage_port,
pGroupStat->storage_http_port,
pGroupStat->store_path_count,
pGroupStat->subdir_count_per_path,
pGroupStat->current_write_server,
pGroupStat->current_trunk_file_id
);
@@ -352,8 +358,12 @@ static int list_storages(FDFSGroupStat *pGroupStat)
int second;
char szDelayTime[64];
delay_seconds = (int)(max_last_source_update - \
delay_seconds = (int)(max_last_source_update -
pStorageStat->last_synced_timestamp);
if (delay_seconds < 0)
{
delay_seconds = 0;
}
day = delay_seconds / (24 * 3600);
remain_seconds = delay_seconds % (24 * 3600);
hour = remain_seconds / 3600;
@@ -385,7 +395,8 @@ static int list_storages(FDFSGroupStat *pGroupStat)
}
}
getHostnameByIp(pStorage->ip_addr, szHostname, sizeof(szHostname));
//getHostnameByIp(pStorage->ip_addr, szHostname, sizeof(szHostname));
*szHostname = '\0';
if (*szHostname != '\0')
{
sprintf(szHostnamePrompt, " (%s)", szHostname);
@@ -406,138 +417,138 @@ static int list_storages(FDFSGroupStat *pGroupStat)
*szUpTime = '\0';
}
printf( "\tStorage %d:\n" \
"\t\tid = %s\n" \
"\t\tip_addr = %s%s %s\n" \
"\t\thttp domain = %s\n" \
"\t\tversion = %s\n" \
"\t\tjoin time = %s\n" \
"\t\tup time = %s\n" \
"\t\ttotal storage = %d MB\n" \
"\t\tfree storage = %d MB\n" \
"\t\tupload priority = %d\n" \
"\t\tstore_path_count = %d\n" \
"\t\tsubdir_count_per_path = %d\n" \
"\t\tstorage_port = %d\n" \
"\t\tstorage_http_port = %d\n" \
"\t\tcurrent_write_path = %d\n" \
"\t\tsource storage id = %s\n" \
"\t\tif_trunk_server = %d\n" \
"\t\tconnection.alloc_count = %d\n" \
"\t\tconnection.current_count = %d\n" \
"\t\tconnection.max_count = %d\n" \
"\t\ttotal_upload_count = %"PRId64"\n" \
"\t\tsuccess_upload_count = %"PRId64"\n" \
"\t\ttotal_append_count = %"PRId64"\n" \
"\t\tsuccess_append_count = %"PRId64"\n" \
"\t\ttotal_modify_count = %"PRId64"\n" \
"\t\tsuccess_modify_count = %"PRId64"\n" \
"\t\ttotal_truncate_count = %"PRId64"\n" \
"\t\tsuccess_truncate_count = %"PRId64"\n" \
"\t\ttotal_set_meta_count = %"PRId64"\n" \
"\t\tsuccess_set_meta_count = %"PRId64"\n" \
"\t\ttotal_delete_count = %"PRId64"\n" \
"\t\tsuccess_delete_count = %"PRId64"\n" \
"\t\ttotal_download_count = %"PRId64"\n" \
"\t\tsuccess_download_count = %"PRId64"\n" \
"\t\ttotal_get_meta_count = %"PRId64"\n" \
"\t\tsuccess_get_meta_count = %"PRId64"\n" \
"\t\ttotal_create_link_count = %"PRId64"\n" \
"\t\tsuccess_create_link_count = %"PRId64"\n"\
"\t\ttotal_delete_link_count = %"PRId64"\n" \
"\t\tsuccess_delete_link_count = %"PRId64"\n" \
"\t\ttotal_upload_bytes = %"PRId64"\n" \
"\t\tsuccess_upload_bytes = %"PRId64"\n" \
"\t\ttotal_append_bytes = %"PRId64"\n" \
"\t\tsuccess_append_bytes = %"PRId64"\n" \
"\t\ttotal_modify_bytes = %"PRId64"\n" \
"\t\tsuccess_modify_bytes = %"PRId64"\n" \
"\t\ttotal_download_bytes = %"PRId64"\n" \
"\t\tsuccess_download_bytes = %"PRId64"\n" \
"\t\ttotal_sync_in_bytes = %"PRId64"\n" \
"\t\tsuccess_sync_in_bytes = %"PRId64"\n" \
"\t\ttotal_sync_out_bytes = %"PRId64"\n" \
"\t\tsuccess_sync_out_bytes = %"PRId64"\n" \
"\t\ttotal_file_open_count = %"PRId64"\n" \
"\t\tsuccess_file_open_count = %"PRId64"\n" \
"\t\ttotal_file_read_count = %"PRId64"\n" \
"\t\tsuccess_file_read_count = %"PRId64"\n" \
"\t\ttotal_file_write_count = %"PRId64"\n" \
"\t\tsuccess_file_write_count = %"PRId64"\n" \
"\t\tlast_heart_beat_time = %s\n" \
"\t\tlast_source_update = %s\n" \
"\t\tlast_sync_update = %s\n" \
"\t\tlast_synced_timestamp = %s %s\n", \
++k, pStorage->id, pStorage->ip_addr, \
szHostnamePrompt, get_storage_status_caption( \
pStorage->status), pStorage->domain_name, \
pStorage->version, \
formatDatetime(pStorage->join_time, \
"%Y-%m-%d %H:%M:%S", \
szJoinTime, sizeof(szJoinTime)), \
szUpTime, pStorage->total_mb, \
pStorage->free_mb, \
pStorage->upload_priority, \
pStorage->store_path_count, \
pStorage->subdir_count_per_path, \
pStorage->storage_port, \
pStorage->storage_http_port, \
pStorage->current_write_path, \
pStorage->src_id, \
pStorage->if_trunk_server, \
pStorageStat->connection.alloc_count, \
pStorageStat->connection.current_count, \
pStorageStat->connection.max_count, \
pStorageStat->total_upload_count, \
pStorageStat->success_upload_count, \
pStorageStat->total_append_count, \
pStorageStat->success_append_count, \
pStorageStat->total_modify_count, \
pStorageStat->success_modify_count, \
pStorageStat->total_truncate_count, \
pStorageStat->success_truncate_count, \
pStorageStat->total_set_meta_count, \
pStorageStat->success_set_meta_count, \
pStorageStat->total_delete_count, \
pStorageStat->success_delete_count, \
pStorageStat->total_download_count, \
pStorageStat->success_download_count, \
pStorageStat->total_get_meta_count, \
pStorageStat->success_get_meta_count, \
pStorageStat->total_create_link_count, \
pStorageStat->success_create_link_count, \
pStorageStat->total_delete_link_count, \
pStorageStat->success_delete_link_count, \
pStorageStat->total_upload_bytes, \
pStorageStat->success_upload_bytes, \
pStorageStat->total_append_bytes, \
pStorageStat->success_append_bytes, \
pStorageStat->total_modify_bytes, \
pStorageStat->success_modify_bytes, \
pStorageStat->total_download_bytes, \
pStorageStat->success_download_bytes, \
pStorageStat->total_sync_in_bytes, \
pStorageStat->success_sync_in_bytes, \
pStorageStat->total_sync_out_bytes, \
pStorageStat->success_sync_out_bytes, \
pStorageStat->total_file_open_count, \
pStorageStat->success_file_open_count, \
pStorageStat->total_file_read_count, \
pStorageStat->success_file_read_count, \
pStorageStat->total_file_write_count, \
pStorageStat->success_file_write_count, \
formatDatetime(pStorageStat->last_heart_beat_time, \
"%Y-%m-%d %H:%M:%S", \
szLastHeartBeatTime, sizeof(szLastHeartBeatTime)), \
formatDatetime(pStorageStat->last_source_update, \
"%Y-%m-%d %H:%M:%S", \
szSrcUpdTime, sizeof(szSrcUpdTime)), \
formatDatetime(pStorageStat->last_sync_update, \
"%Y-%m-%d %H:%M:%S", \
szSyncUpdTime, sizeof(szSyncUpdTime)), \
formatDatetime(pStorageStat->last_synced_timestamp, \
"%Y-%m-%d %H:%M:%S", \
szSyncedTimestamp, sizeof(szSyncedTimestamp)),\
printf( "\tStorage %d:\n"
"\t\tid = %s\n"
"\t\tip_addr = %s%s %s\n"
"\t\thttp domain = %s\n"
"\t\tversion = %s\n"
"\t\tjoin time = %s\n"
"\t\tup time = %s\n"
"\t\ttotal storage = %s MB\n"
"\t\tfree storage = %s MB\n"
"\t\tupload priority = %d\n"
"\t\tstore_path_count = %d\n"
"\t\tsubdir_count_per_path = %d\n"
"\t\tstorage_port = %d\n"
"\t\tstorage_http_port = %d\n"
"\t\tcurrent_write_path = %d\n"
"\t\tsource storage id = %s\n"
"\t\tif_trunk_server = %d\n"
"\t\tconnection.alloc_count = %d\n"
"\t\tconnection.current_count = %d\n"
"\t\tconnection.max_count = %d\n"
"\t\ttotal_upload_count = %"PRId64"\n"
"\t\tsuccess_upload_count = %"PRId64"\n"
"\t\ttotal_append_count = %"PRId64"\n"
"\t\tsuccess_append_count = %"PRId64"\n"
"\t\ttotal_modify_count = %"PRId64"\n"
"\t\tsuccess_modify_count = %"PRId64"\n"
"\t\ttotal_truncate_count = %"PRId64"\n"
"\t\tsuccess_truncate_count = %"PRId64"\n"
"\t\ttotal_set_meta_count = %"PRId64"\n"
"\t\tsuccess_set_meta_count = %"PRId64"\n"
"\t\ttotal_delete_count = %"PRId64"\n"
"\t\tsuccess_delete_count = %"PRId64"\n"
"\t\ttotal_download_count = %"PRId64"\n"
"\t\tsuccess_download_count = %"PRId64"\n"
"\t\ttotal_get_meta_count = %"PRId64"\n"
"\t\tsuccess_get_meta_count = %"PRId64"\n"
"\t\ttotal_create_link_count = %"PRId64"\n"
"\t\tsuccess_create_link_count = %"PRId64"\n"
"\t\ttotal_delete_link_count = %"PRId64"\n"
"\t\tsuccess_delete_link_count = %"PRId64"\n"
"\t\ttotal_upload_bytes = %"PRId64"\n"
"\t\tsuccess_upload_bytes = %"PRId64"\n"
"\t\ttotal_append_bytes = %"PRId64"\n"
"\t\tsuccess_append_bytes = %"PRId64"\n"
"\t\ttotal_modify_bytes = %"PRId64"\n"
"\t\tsuccess_modify_bytes = %"PRId64"\n"
"\t\ttotal_download_bytes = %"PRId64"\n"
"\t\tsuccess_download_bytes = %"PRId64"\n"
"\t\ttotal_sync_in_bytes = %"PRId64"\n"
"\t\tsuccess_sync_in_bytes = %"PRId64"\n"
"\t\ttotal_sync_out_bytes = %"PRId64"\n"
"\t\tsuccess_sync_out_bytes = %"PRId64"\n"
"\t\ttotal_file_open_count = %"PRId64"\n"
"\t\tsuccess_file_open_count = %"PRId64"\n"
"\t\ttotal_file_read_count = %"PRId64"\n"
"\t\tsuccess_file_read_count = %"PRId64"\n"
"\t\ttotal_file_write_count = %"PRId64"\n"
"\t\tsuccess_file_write_count = %"PRId64"\n"
"\t\tlast_heart_beat_time = %s\n"
"\t\tlast_source_update = %s\n"
"\t\tlast_sync_update = %s\n"
"\t\tlast_synced_timestamp = %s %s\n",
++k, pStorage->id, pStorage->ip_addr,
szHostnamePrompt, get_storage_status_caption(
pStorage->status), pStorage->domain_name,
pStorage->version,
formatDatetime(pStorage->join_time,
"%Y-%m-%d %H:%M:%S",
szJoinTime, sizeof(szJoinTime)), szUpTime,
long_to_comma_str(pStorage->total_mb, szDiskTotalSpace),
long_to_comma_str(pStorage->free_mb, szDiskFreeSpace),
pStorage->upload_priority,
pStorage->store_path_count,
pStorage->subdir_count_per_path,
pStorage->storage_port,
pStorage->storage_http_port,
pStorage->current_write_path,
pStorage->src_id,
pStorage->if_trunk_server,
pStorageStat->connection.alloc_count,
pStorageStat->connection.current_count,
pStorageStat->connection.max_count,
pStorageStat->total_upload_count,
pStorageStat->success_upload_count,
pStorageStat->total_append_count,
pStorageStat->success_append_count,
pStorageStat->total_modify_count,
pStorageStat->success_modify_count,
pStorageStat->total_truncate_count,
pStorageStat->success_truncate_count,
pStorageStat->total_set_meta_count,
pStorageStat->success_set_meta_count,
pStorageStat->total_delete_count,
pStorageStat->success_delete_count,
pStorageStat->total_download_count,
pStorageStat->success_download_count,
pStorageStat->total_get_meta_count,
pStorageStat->success_get_meta_count,
pStorageStat->total_create_link_count,
pStorageStat->success_create_link_count,
pStorageStat->total_delete_link_count,
pStorageStat->success_delete_link_count,
pStorageStat->total_upload_bytes,
pStorageStat->success_upload_bytes,
pStorageStat->total_append_bytes,
pStorageStat->success_append_bytes,
pStorageStat->total_modify_bytes,
pStorageStat->success_modify_bytes,
pStorageStat->total_download_bytes,
pStorageStat->success_download_bytes,
pStorageStat->total_sync_in_bytes,
pStorageStat->success_sync_in_bytes,
pStorageStat->total_sync_out_bytes,
pStorageStat->success_sync_out_bytes,
pStorageStat->total_file_open_count,
pStorageStat->success_file_open_count,
pStorageStat->total_file_read_count,
pStorageStat->success_file_read_count,
pStorageStat->total_file_write_count,
pStorageStat->success_file_write_count,
formatDatetime(pStorageStat->last_heart_beat_time,
"%Y-%m-%d %H:%M:%S",
szLastHeartBeatTime, sizeof(szLastHeartBeatTime)),
formatDatetime(pStorageStat->last_source_update,
"%Y-%m-%d %H:%M:%S",
szSrcUpdTime, sizeof(szSrcUpdTime)),
formatDatetime(pStorageStat->last_sync_update,
"%Y-%m-%d %H:%M:%S",
szSyncUpdTime, sizeof(szSyncUpdTime)),
formatDatetime(pStorageStat->last_synced_timestamp,
"%Y-%m-%d %H:%M:%S",
szSyncedTimestamp, sizeof(szSyncedTimestamp)),
szSyncedDelaySeconds);
}


@@ -0,0 +1,68 @@
/**
* Copyright (C) 2008 Happy Fish / YuQing
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>
#include "fdfs_client.h"
#include "fastcommon/logger.h"
int main(int argc, char *argv[])
{
char *conf_filename;
ConnectionInfo *pTrackerServer;
int result;
char appender_file_id[128];
char new_file_id[128];
if (argc < 3)
{
fprintf(stderr, "regenerate filename for the appender file.\n"
"NOTE: the regenerated file will be a normal file!\n"
"Usage: %s <config_file> <appender_file_id>\n",
argv[0]);
return 1;
}
log_init();
g_log_context.log_level = LOG_ERR;
conf_filename = argv[1];
if ((result=fdfs_client_init(conf_filename)) != 0)
{
return result;
}
pTrackerServer = tracker_get_connection();
if (pTrackerServer == NULL)
{
fdfs_client_destroy();
return errno != 0 ? errno : ECONNREFUSED;
}
snprintf(appender_file_id, sizeof(appender_file_id), "%s", argv[2]);
if ((result=storage_regenerate_appender_filename1(pTrackerServer,
NULL, appender_file_id, new_file_id)) != 0)
{
fprintf(stderr, "regenerate file %s fail, "
"error no: %d, error info: %s\n",
appender_file_id, result, STRERROR(result));
return result;
}
printf("%s\n", new_file_id);
tracker_close_connection_ex(pTrackerServer, true);
fdfs_client_destroy();
return result;
}


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdio.h>
@ -48,7 +48,7 @@ int uploadFileCallback(void *arg, const int64_t file_size, int sock)
filename = (char *)arg;
return tcpsendfile(sock, filename, file_size, \
g_fdfs_network_timeout, &total_send_bytes);
SF_G_NETWORK_TIMEOUT, &total_send_bytes);
}
int main(int argc, char *argv[])
@ -66,6 +66,7 @@ int main(int argc, char *argv[])
int meta_count;
int i;
FDFSMetaData *pMetaList;
char formatted_ip[FORMATTED_IP_SIZE];
char token[32 + 1];
char file_id[128];
char file_url[256];
@ -73,20 +74,20 @@ int main(int argc, char *argv[])
char szPortPart[16];
int url_len;
time_t ts;
char *file_buff;
char *file_buff;
int64_t file_size;
char *operation;
char *meta_buff;
int store_path_index;
FDFSFileInfo file_info;
printf("This is FastDFS client test program v%d.%02d\n" \
printf("This is FastDFS client test program v%d.%d.%d\n" \
"\nCopyright (C) 2008, Happy Fish / YuQing\n" \
"\nFastDFS may be copied only under the terms of the GNU General\n" \
"Public License V3, which may be found in the FastDFS source kit.\n" \
"Please visit the FastDFS Home Page http://www.csource.org/ \n" \
"for more detail.\n\n" \
, g_fdfs_version.major, g_fdfs_version.minor);
"Please visit the FastDFS Home Page http://www.fastken.com/ \n" \
"for more detail.\n\n", g_fdfs_version.major, g_fdfs_version.minor,
g_fdfs_version.patch);
if (argc < 3)
{
@ -460,8 +461,9 @@ int main(int argc, char *argv[])
printf("server list (%d):\n", server_count);
for (i=0; i<server_count; i++)
{
printf("\t%s:%d\n", \
storageServers[i].ip_addr, \
format_ip_address(storageServers[i].ip_addr, formatted_ip);
printf("\t%s:%u\n", formatted_ip,
storageServers[i].port);
}
printf("\n");
@ -488,8 +490,8 @@ int main(int argc, char *argv[])
return result;
}
printf("storage=%s:%d\n", storageServer.ip_addr, \
storageServer.port);
format_ip_address(storageServer.ip_addr, formatted_ip);
printf("storage=%s:%u\n", formatted_ip, storageServer.port);
if ((pStorageServer=tracker_make_connection(&storageServer, \
&result)) == NULL)
@ -670,15 +672,17 @@ int main(int argc, char *argv[])
/* for test only */
if ((result=fdfs_active_test(pTrackerServer)) != 0)
{
printf("active_test to tracker server %s:%d fail, errno: %d\n", \
pTrackerServer->ip_addr, pTrackerServer->port, result);
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
printf("active_test to tracker server %s:%u fail, errno: %d\n",
formatted_ip, pTrackerServer->port, result);
}
/* for test only */
if ((result=fdfs_active_test(pStorageServer)) != 0)
{
printf("active_test to storage server %s:%d fail, errno: %d\n", \
pStorageServer->ip_addr, pStorageServer->port, result);
format_ip_address(pStorageServer->ip_addr, formatted_ip);
printf("active_test to storage server %s:%u fail, errno: %d\n",
formatted_ip, pStorageServer->port, result);
}
tracker_close_connection_ex(pStorageServer, true);
@ -688,4 +692,3 @@ int main(int argc, char *argv[])
return result;
}

View File

@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdio.h>
@ -47,7 +47,7 @@ int uploadFileCallback(void *arg, const int64_t file_size, int sock)
filename = (char *)arg;
return tcpsendfile(sock, filename, file_size, \
g_fdfs_network_timeout, &total_send_bytes);
SF_G_NETWORK_TIMEOUT, &total_send_bytes);
}
int main(int argc, char *argv[])
@ -63,6 +63,7 @@ int main(int argc, char *argv[])
int meta_count;
int i;
FDFSMetaData *pMetaList;
char formatted_ip[FORMATTED_IP_SIZE];
char token[32 + 1];
char file_id[128];
char master_file_id[128];
@ -71,20 +72,20 @@ int main(int argc, char *argv[])
char szPortPart[16];
int url_len;
time_t ts;
char *file_buff;
char *file_buff;
int64_t file_size;
char *operation;
char *meta_buff;
int store_path_index;
FDFSFileInfo file_info;
printf("This is FastDFS client test program v%d.%02d\n" \
printf("This is FastDFS client test program v%d.%d.%d\n" \
"\nCopyright (C) 2008, Happy Fish / YuQing\n" \
"\nFastDFS may be copied only under the terms of the GNU General\n" \
"Public License V3, which may be found in the FastDFS source kit.\n" \
"Please visit the FastDFS Home Page http://www.csource.org/ \n" \
"for more detail.\n\n" \
, g_fdfs_version.major, g_fdfs_version.minor);
"Please visit the FastDFS Home Page http://www.fastken.com/ \n" \
"for more detail.\n\n", g_fdfs_version.major, g_fdfs_version.minor,
g_fdfs_version.patch);
if (argc < 3)
{
@ -432,8 +433,9 @@ int main(int argc, char *argv[])
printf("server list (%d):\n", server_count);
for (i=0; i<server_count; i++)
{
printf("\t%s:%d\n", \
storageServers[i].ip_addr, \
format_ip_address(storageServers[i].ip_addr, formatted_ip);
printf("\t%s:%u\n", formatted_ip,
storageServers[i].port);
}
printf("\n");
@ -455,8 +457,8 @@ int main(int argc, char *argv[])
return result;
}
printf("storage=%s:%d\n", storageServer.ip_addr, \
storageServer.port);
format_ip_address(storageServer.ip_addr, formatted_ip);
printf("storage=%s:%u\n", formatted_ip, storageServer.port);
if ((pStorageServer=tracker_make_connection(&storageServer, \
&result)) == NULL)
@ -637,15 +639,17 @@ int main(int argc, char *argv[])
/* for test only */
if ((result=fdfs_active_test(pTrackerServer)) != 0)
{
printf("active_test to tracker server %s:%d fail, errno: %d\n", \
pTrackerServer->ip_addr, pTrackerServer->port, result);
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
printf("active_test to tracker server %s:%u fail, errno: %d\n",
formatted_ip, pTrackerServer->port, result);
}
/* for test only */
if ((result=fdfs_active_test(pStorageServer)) != 0)
{
printf("active_test to storage server %s:%d fail, errno: %d\n", \
pStorageServer->ip_addr, pStorageServer->port, result);
format_ip_address(pStorageServer->ip_addr, formatted_ip);
printf("active_test to storage server %s:%u fail, errno: %d\n",
formatted_ip, pStorageServer->port, result);
}
tracker_close_connection_ex(pStorageServer, true);

View File

@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdio.h>

View File

@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <stdio.h>

File diff suppressed because it is too large

View File

@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#ifndef STORAGE_CLIENT_H
@ -563,6 +563,23 @@ int fdfs_get_file_info_ex(const char *group_name, const char *remote_filename, \
const bool get_from_server, FDFSFileInfo *pFileInfo);
/**
* regenerate a normal filename for an appender file
* Note: the appender file will be changed to a normal file
* params:
* pTrackerServer: the tracker server
* pStorageServer: the storage server
* group_name: the group name
* appender_filename: the appender filename
* new_group_name: return the new group name
* new_remote_filename: return the new filename
* return: 0 for success, != 0 for failure (the error code)
**/
int storage_regenerate_appender_filename(ConnectionInfo *pTrackerServer,
ConnectionInfo *pStorageServer, const char *group_name,
const char *appender_filename, char *new_group_name,
char *new_remote_filename);
#ifdef __cplusplus
}
#endif

View File

@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#ifndef STORAGE_CLIENT1_H
@ -524,6 +524,22 @@ int fdfs_get_file_info_ex1(const char *file_id, const bool get_from_server, \
int storage_file_exist1(ConnectionInfo *pTrackerServer, \
ConnectionInfo *pStorageServer, \
const char *file_id);
/**
* regenerate a normal filename for an appender file
* Note: the appender file will be changed to a normal file
* params:
* pTrackerServer: the tracker server
* pStorageServer: the storage server
* appender_file_id: the appender file id
* new_file_id: the regenerated file id returned by the storage server
* return: 0 for success, != 0 for failure (the error code)
**/
int storage_regenerate_appender_filename1(ConnectionInfo *pTrackerServer,
ConnectionInfo *pStorageServer, const char *appender_file_id,
char *new_file_id);
#ifdef __cplusplus
}
#endif

View File

@ -1,8 +1,9 @@
.SUFFIXES: .c .o
COMPILE = $(CC) $(CFLAGS)
INC_PATH = -I/usr/include/fastcommon -I/usr/include/fastdfs
LIB_PATH = -L/usr/local/lib -lfastcommon -lfdfsclient $(LIBS)
INC_PATH = -I/usr/include/fastcommon -I/usr/include/fastdfs \
-I/usr/local/include/fastcommon -I/usr/local/include/fastdfs
LIB_PATH = -L/usr/local/lib -lfastcommon -lserverframe -lfdfsclient $(LIBS)
TARGET_PATH = $(TARGET_PATH)
ALL_OBJS =
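The Makefile change above adds the `/usr/local/include` search paths and links `libserverframe` in addition to `libfastcommon`. A rough sketch of the equivalent manual compile command, with paths taken from that Makefile (adjust for your installation; `fdfs_test.c` stands in for any client program):

```shell
gcc -o fdfs_test fdfs_test.c \
    -I/usr/include/fastcommon -I/usr/include/fastdfs \
    -I/usr/local/include/fastcommon -I/usr/local/include/fastdfs \
    -L/usr/local/lib -lfastcommon -lserverframe -lfdfsclient
```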

View File

@ -1,589 +0,0 @@
/**
* Copyright (C) 2008 Happy Fish / YuQing
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
**/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <signal.h>
#include <sys/types.h>
#include "fastcommon/sockopt.h"
#include "fastcommon/logger.h"
#include "client_global.h"
#include "fdfs_global.h"
#include "fdfs_client.h"
static ConnectionInfo *pTrackerServer;
static int list_all_groups(const char *group_name);
static void usage(char *argv[])
{
printf("Usage: %s <config_file> [-h <tracker_server>] [list|delete|set_trunk_server <group_name> " \
"[storage_id]]\n", argv[0]);
}
int main(int argc, char *argv[])
{
char *conf_filename;
int result;
char *op_type;
char *tracker_server;
int arg_index;
char *group_name;
if (argc < 2)
{
usage(argv);
return 1;
}
tracker_server = NULL;
conf_filename = argv[1];
arg_index = 2;
if (arg_index >= argc)
{
op_type = "list";
}
else
{
int len;
len = strlen(argv[arg_index]);
if (len >= 2 && strncmp(argv[arg_index], "-h", 2) == 0)
{
if (len == 2)
{
arg_index++;
if (arg_index >= argc)
{
usage(argv);
return 1;
}
tracker_server = argv[arg_index++];
}
else
{
tracker_server = argv[arg_index] + 2;
arg_index++;
}
if (arg_index < argc)
{
op_type = argv[arg_index++];
}
else
{
op_type = "list";
}
}
else
{
op_type = argv[arg_index++];
}
}
log_init();
g_log_context.log_level = LOG_DEBUG;
ignore_signal_pipe();
if ((result=fdfs_client_init(conf_filename)) != 0)
{
return result;
}
load_log_level_ex(conf_filename);
if (tracker_server == NULL)
{
if (g_tracker_group.server_count > 1)
{
srand(time(NULL));
rand(); //discard the first
/* divide by RAND_MAX + 1 so the index stays below
   server_count even when rand() returns RAND_MAX */
g_tracker_group.server_index = (int)( \
(g_tracker_group.server_count * (double)rand()) \
/ ((double)RAND_MAX + 1.0));
}
}
else
{
int i;
char ip_addr[IP_ADDRESS_SIZE];
*ip_addr = '\0';
if (getIpaddrByName(tracker_server, ip_addr, sizeof(ip_addr)) \
== INADDR_NONE)
{
printf("resolve ip address of tracker server: %s " \
"fail!\n", tracker_server);
return 2;
}
for (i=0; i<g_tracker_group.server_count; i++)
{
if (strcmp(g_tracker_group.servers[i].ip_addr, \
ip_addr) == 0)
{
g_tracker_group.server_index = i;
break;
}
}
if (i == g_tracker_group.server_count)
{
printf("tracker server: %s not exists!\n", tracker_server);
return 2;
}
}
printf("server_count=%d, server_index=%d\n", g_tracker_group.server_count, g_tracker_group.server_index);
pTrackerServer = tracker_get_connection();
if (pTrackerServer == NULL)
{
fdfs_client_destroy();
return errno != 0 ? errno : ECONNREFUSED;
}
printf("\ntracker server is %s:%d\n\n", pTrackerServer->ip_addr, pTrackerServer->port);
if (arg_index < argc)
{
group_name = argv[arg_index++];
}
else
{
group_name = NULL;
}
if (strcmp(op_type, "list") == 0)
{
if (group_name == NULL)
{
result = list_all_groups(NULL);
}
else
{
result = list_all_groups(group_name);
}
}
else if (strcmp(op_type, "delete") == 0)
{
if (arg_index >= argc)
{
if ((result=tracker_delete_group(&g_tracker_group, \
group_name)) == 0)
{
printf("delete group: %s success\n", \
group_name);
}
else
{
printf("delete group: %s fail, " \
"error no: %d, error info: %s\n", \
group_name, result, STRERROR(result));
}
}
else
{
char *storage_id;
storage_id = argv[arg_index++];
if ((result=tracker_delete_storage(&g_tracker_group, \
group_name, storage_id)) == 0)
{
printf("delete storage server %s::%s success\n", \
group_name, storage_id);
}
else
{
printf("delete storage server %s::%s fail, " \
"error no: %d, error info: %s\n", \
group_name, storage_id, \
result, STRERROR(result));
}
}
}
else if (strcmp(op_type, "set_trunk_server") == 0)
{
char *storage_id;
char new_trunk_server_id[FDFS_STORAGE_ID_MAX_SIZE];
if (group_name == NULL)
{
usage(argv);
return 1;
}
if (arg_index >= argc)
{
storage_id = "";
}
else
{
storage_id = argv[arg_index++];
}
if ((result=tracker_set_trunk_server(&g_tracker_group, \
group_name, storage_id, new_trunk_server_id)) == 0)
{
printf("set trunk server %s::%s success, " \
"new trunk server: %s\n", group_name, \
storage_id, new_trunk_server_id);
}
else
{
printf("set trunk server %s::%s fail, " \
"error no: %d, error info: %s\n", \
group_name, storage_id, \
result, STRERROR(result));
}
}
else
{
printf("Invalid command %s\n\n", op_type);
usage(argv);
}
tracker_close_connection_ex(pTrackerServer, true);
fdfs_client_destroy();
return 0;
}
static int list_storages(FDFSGroupStat *pGroupStat)
{
int result;
int storage_count;
FDFSStorageInfo storage_infos[FDFS_MAX_SERVERS_EACH_GROUP];
FDFSStorageInfo *p;
FDFSStorageInfo *pStorage;
FDFSStorageInfo *pStorageEnd;
FDFSStorageStat *pStorageStat;
char szJoinTime[32];
char szUpTime[32];
char szLastHeartBeatTime[32];
char szSrcUpdTime[32];
char szSyncUpdTime[32];
char szSyncedTimestamp[32];
char szSyncedDelaySeconds[128];
char szHostname[128];
char szHostnamePrompt[128+8];
int k;
int max_last_source_update;
printf( "group name = %s\n" \
"disk total space = %"PRId64" MB\n" \
"disk free space = %"PRId64" MB\n" \
"trunk free space = %"PRId64" MB\n" \
"storage server count = %d\n" \
"active server count = %d\n" \
"storage server port = %d\n" \
"storage HTTP port = %d\n" \
"store path count = %d\n" \
"subdir count per path = %d\n" \
"current write server index = %d\n" \
"current trunk file id = %d\n\n", \
pGroupStat->group_name, \
pGroupStat->total_mb, \
pGroupStat->free_mb, \
pGroupStat->trunk_free_mb, \
pGroupStat->count, \
pGroupStat->active_count, \
pGroupStat->storage_port, \
pGroupStat->storage_http_port, \
pGroupStat->store_path_count, \
pGroupStat->subdir_count_per_path, \
pGroupStat->current_write_server, \
pGroupStat->current_trunk_file_id
);
result = tracker_list_servers(pTrackerServer, \
pGroupStat->group_name, NULL, \
storage_infos, FDFS_MAX_SERVERS_EACH_GROUP, \
&storage_count);
if (result != 0)
{
return result;
}
k = 0;
pStorageEnd = storage_infos + storage_count;
for (pStorage=storage_infos; pStorage<pStorageEnd; \
pStorage++)
{
max_last_source_update = 0;
for (p=storage_infos; p<pStorageEnd; p++)
{
if (p != pStorage && p->stat.last_source_update
> max_last_source_update)
{
max_last_source_update = \
p->stat.last_source_update;
}
}
pStorageStat = &(pStorage->stat);
if (max_last_source_update == 0)
{
*szSyncedDelaySeconds = '\0';
}
else
{
if (pStorageStat->last_synced_timestamp == 0)
{
strcpy(szSyncedDelaySeconds, "(never synced)");
}
else
{
int delay_seconds;
int remain_seconds;
int day;
int hour;
int minute;
int second;
char szDelayTime[64];
delay_seconds = (int)(max_last_source_update - \
pStorageStat->last_synced_timestamp);
day = delay_seconds / (24 * 3600);
remain_seconds = delay_seconds % (24 * 3600);
hour = remain_seconds / 3600;
remain_seconds %= 3600;
minute = remain_seconds / 60;
second = remain_seconds % 60;
if (day != 0)
{
sprintf(szDelayTime, "%d days " \
"%02dh:%02dm:%02ds", \
day, hour, minute, second);
}
else if (hour != 0)
{
sprintf(szDelayTime, "%02dh:%02dm:%02ds", \
hour, minute, second);
}
else if (minute != 0)
{
sprintf(szDelayTime, "%02dm:%02ds", minute, second);
}
else
{
sprintf(szDelayTime, "%ds", second);
}
sprintf(szSyncedDelaySeconds, "(%s delay)", szDelayTime);
}
}
getHostnameByIp(pStorage->ip_addr, szHostname, sizeof(szHostname));
if (*szHostname != '\0')
{
sprintf(szHostnamePrompt, " (%s)", szHostname);
}
else
{
*szHostnamePrompt = '\0';
}
if (pStorage->up_time != 0)
{
formatDatetime(pStorage->up_time, \
"%Y-%m-%d %H:%M:%S", \
szUpTime, sizeof(szUpTime));
}
else
{
*szUpTime = '\0';
}
printf( "\tStorage %d:\n" \
"\t\tid = %s\n" \
"\t\tip_addr = %s%s %s\n" \
"\t\thttp domain = %s\n" \
"\t\tversion = %s\n" \
"\t\tjoin time = %s\n" \
"\t\tup time = %s\n" \
"\t\ttotal storage = %d MB\n" \
"\t\tfree storage = %d MB\n" \
"\t\tupload priority = %d\n" \
"\t\tstore_path_count = %d\n" \
"\t\tsubdir_count_per_path = %d\n" \
"\t\tstorage_port = %d\n" \
"\t\tstorage_http_port = %d\n" \
"\t\tcurrent_write_path = %d\n" \
"\t\tsource storage id = %s\n" \
"\t\tif_trunk_server = %d\n" \
"\t\tconnection.alloc_count = %d\n" \
"\t\tconnection.current_count = %d\n" \
"\t\tconnection.max_count = %d\n" \
"\t\ttotal_upload_count = %"PRId64"\n" \
"\t\tsuccess_upload_count = %"PRId64"\n" \
"\t\ttotal_append_count = %"PRId64"\n" \
"\t\tsuccess_append_count = %"PRId64"\n" \
"\t\ttotal_modify_count = %"PRId64"\n" \
"\t\tsuccess_modify_count = %"PRId64"\n" \
"\t\ttotal_truncate_count = %"PRId64"\n" \
"\t\tsuccess_truncate_count = %"PRId64"\n" \
"\t\ttotal_set_meta_count = %"PRId64"\n" \
"\t\tsuccess_set_meta_count = %"PRId64"\n" \
"\t\ttotal_delete_count = %"PRId64"\n" \
"\t\tsuccess_delete_count = %"PRId64"\n" \
"\t\ttotal_download_count = %"PRId64"\n" \
"\t\tsuccess_download_count = %"PRId64"\n" \
"\t\ttotal_get_meta_count = %"PRId64"\n" \
"\t\tsuccess_get_meta_count = %"PRId64"\n" \
"\t\ttotal_create_link_count = %"PRId64"\n" \
"\t\tsuccess_create_link_count = %"PRId64"\n"\
"\t\ttotal_delete_link_count = %"PRId64"\n" \
"\t\tsuccess_delete_link_count = %"PRId64"\n" \
"\t\ttotal_upload_bytes = %"PRId64"\n" \
"\t\tsuccess_upload_bytes = %"PRId64"\n" \
"\t\ttotal_append_bytes = %"PRId64"\n" \
"\t\tsuccess_append_bytes = %"PRId64"\n" \
"\t\ttotal_modify_bytes = %"PRId64"\n" \
"\t\tsuccess_modify_bytes = %"PRId64"\n" \
"\t\ttotal_download_bytes = %"PRId64"\n" \
"\t\tsuccess_download_bytes = %"PRId64"\n" \
"\t\ttotal_sync_in_bytes = %"PRId64"\n" \
"\t\tsuccess_sync_in_bytes = %"PRId64"\n" \
"\t\ttotal_sync_out_bytes = %"PRId64"\n" \
"\t\tsuccess_sync_out_bytes = %"PRId64"\n" \
"\t\ttotal_file_open_count = %"PRId64"\n" \
"\t\tsuccess_file_open_count = %"PRId64"\n" \
"\t\ttotal_file_read_count = %"PRId64"\n" \
"\t\tsuccess_file_read_count = %"PRId64"\n" \
"\t\ttotal_file_write_count = %"PRId64"\n" \
"\t\tsuccess_file_write_count = %"PRId64"\n" \
"\t\tlast_heart_beat_time = %s\n" \
"\t\tlast_source_update = %s\n" \
"\t\tlast_sync_update = %s\n" \
"\t\tlast_synced_timestamp = %s %s\n", \
++k, pStorage->id, pStorage->ip_addr, \
szHostnamePrompt, get_storage_status_caption( \
pStorage->status), pStorage->domain_name, \
pStorage->version, \
formatDatetime(pStorage->join_time, \
"%Y-%m-%d %H:%M:%S", \
szJoinTime, sizeof(szJoinTime)), \
szUpTime, pStorage->total_mb, \
pStorage->free_mb, \
pStorage->upload_priority, \
pStorage->store_path_count, \
pStorage->subdir_count_per_path, \
pStorage->storage_port, \
pStorage->storage_http_port, \
pStorage->current_write_path, \
pStorage->src_id, \
pStorage->if_trunk_server, \
pStorageStat->connection.alloc_count, \
pStorageStat->connection.current_count, \
pStorageStat->connection.max_count, \
pStorageStat->total_upload_count, \
pStorageStat->success_upload_count, \
pStorageStat->total_append_count, \
pStorageStat->success_append_count, \
pStorageStat->total_modify_count, \
pStorageStat->success_modify_count, \
pStorageStat->total_truncate_count, \
pStorageStat->success_truncate_count, \
pStorageStat->total_set_meta_count, \
pStorageStat->success_set_meta_count, \
pStorageStat->total_delete_count, \
pStorageStat->success_delete_count, \
pStorageStat->total_download_count, \
pStorageStat->success_download_count, \
pStorageStat->total_get_meta_count, \
pStorageStat->success_get_meta_count, \
pStorageStat->total_create_link_count, \
pStorageStat->success_create_link_count, \
pStorageStat->total_delete_link_count, \
pStorageStat->success_delete_link_count, \
pStorageStat->total_upload_bytes, \
pStorageStat->success_upload_bytes, \
pStorageStat->total_append_bytes, \
pStorageStat->success_append_bytes, \
pStorageStat->total_modify_bytes, \
pStorageStat->success_modify_bytes, \
pStorageStat->total_download_bytes, \
pStorageStat->success_download_bytes, \
pStorageStat->total_sync_in_bytes, \
pStorageStat->success_sync_in_bytes, \
pStorageStat->total_sync_out_bytes, \
pStorageStat->success_sync_out_bytes, \
pStorageStat->total_file_open_count, \
pStorageStat->success_file_open_count, \
pStorageStat->total_file_read_count, \
pStorageStat->success_file_read_count, \
pStorageStat->total_file_write_count, \
pStorageStat->success_file_write_count, \
formatDatetime(pStorageStat->last_heart_beat_time, \
"%Y-%m-%d %H:%M:%S", \
szLastHeartBeatTime, sizeof(szLastHeartBeatTime)), \
formatDatetime(pStorageStat->last_source_update, \
"%Y-%m-%d %H:%M:%S", \
szSrcUpdTime, sizeof(szSrcUpdTime)), \
formatDatetime(pStorageStat->last_sync_update, \
"%Y-%m-%d %H:%M:%S", \
szSyncUpdTime, sizeof(szSyncUpdTime)), \
formatDatetime(pStorageStat->last_synced_timestamp, \
"%Y-%m-%d %H:%M:%S", \
szSyncedTimestamp, sizeof(szSyncedTimestamp)),\
szSyncedDelaySeconds);
}
return 0;
}
static int list_all_groups(const char *group_name)
{
int result;
int group_count;
FDFSGroupStat group_stats[FDFS_MAX_GROUPS];
FDFSGroupStat *pGroupStat;
FDFSGroupStat *pGroupEnd;
int i;
result = tracker_list_groups(pTrackerServer, \
group_stats, FDFS_MAX_GROUPS, \
&group_count);
if (result != 0)
{
tracker_close_all_connections();
fdfs_client_destroy();
return result;
}
pGroupEnd = group_stats + group_count;
if (group_name == NULL)
{
printf("group count: %d\n", group_count);
i = 0;
for (pGroupStat=group_stats; pGroupStat<pGroupEnd; \
pGroupStat++)
{
printf( "\nGroup %d:\n", ++i);
list_storages(pGroupStat);
}
}
else
{
for (pGroupStat=group_stats; pGroupStat<pGroupEnd; \
pGroupStat++)
{
if (strcmp(pGroupStat->group_name, group_name) == 0)
{
list_storages(pGroupStat);
break;
}
}
}
return 0;
}

client/test/fdfs_monitor.c Symbolic link
View File

@ -0,0 +1 @@
../fdfs_monitor.c

View File

@ -1,691 +0,0 @@
/**
* Copyright (C) 2008 Happy Fish / YuQing
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
**/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>
#include "fdfs_client.h"
#include "fdfs_global.h"
#include "fastcommon/base64.h"
#include "fastcommon/sockopt.h"
#include "fastcommon/logger.h"
#include "fdfs_http_shared.h"
int writeToFileCallback(void *arg, const int64_t file_size, const char *data, \
const int current_size)
{
if (arg == NULL)
{
return EINVAL;
}
if (fwrite(data, current_size, 1, (FILE *)arg) != 1)
{
return errno != 0 ? errno : EIO;
}
return 0;
}
int uploadFileCallback(void *arg, const int64_t file_size, int sock)
{
int64_t total_send_bytes;
char *filename;
if (arg == NULL)
{
return EINVAL;
}
filename = (char *)arg;
return tcpsendfile(sock, filename, file_size, \
g_fdfs_network_timeout, &total_send_bytes);
}
int main(int argc, char *argv[])
{
char *conf_filename;
char *local_filename;
ConnectionInfo *pTrackerServer;
ConnectionInfo *pStorageServer;
int result;
ConnectionInfo storageServer;
char group_name[FDFS_GROUP_NAME_MAX_LEN + 1];
char remote_filename[256];
char master_filename[256];
FDFSMetaData meta_list[32];
int meta_count;
int i;
FDFSMetaData *pMetaList;
char token[32 + 1];
char file_id[128];
char file_url[256];
char szDatetime[20];
char szPortPart[16];
int url_len;
time_t ts;
char *file_buff;
int64_t file_size;
char *operation;
char *meta_buff;
int store_path_index;
FDFSFileInfo file_info;
printf("This is FastDFS client test program v%d.%02d\n" \
"\nCopyright (C) 2008, Happy Fish / YuQing\n" \
"\nFastDFS may be copied only under the terms of the GNU General\n" \
"Public License V3, which may be found in the FastDFS source kit.\n" \
"Please visit the FastDFS Home Page http://www.csource.org/ \n" \
"for more detail.\n\n" \
, g_fdfs_version.major, g_fdfs_version.minor);
if (argc < 3)
{
printf("Usage: %s <config_file> <operation>\n" \
"\toperation: upload, download, getmeta, setmeta, " \
"delete and query_servers\n", argv[0]);
return 1;
}
log_init();
g_log_context.log_level = LOG_DEBUG;
conf_filename = argv[1];
operation = argv[2];
if ((result=fdfs_client_init(conf_filename)) != 0)
{
return result;
}
pTrackerServer = tracker_get_connection();
if (pTrackerServer == NULL)
{
fdfs_client_destroy();
return errno != 0 ? errno : ECONNREFUSED;
}
pStorageServer = NULL;
*group_name = '\0';
local_filename = NULL;
if (strcmp(operation, "upload") == 0)
{
int upload_type;
char *prefix_name;
const char *file_ext_name;
char slave_filename[256];
int slave_filename_len;
if (argc < 4)
{
printf("Usage: %s <config_file> upload " \
"<local_filename> [FILE | BUFF | CALLBACK] \n",\
argv[0]);
fdfs_client_destroy();
return EINVAL;
}
local_filename = argv[3];
if (argc == 4)
{
upload_type = FDFS_UPLOAD_BY_FILE;
}
else
{
if (strcmp(argv[4], "BUFF") == 0)
{
upload_type = FDFS_UPLOAD_BY_BUFF;
}
else if (strcmp(argv[4], "CALLBACK") == 0)
{
upload_type = FDFS_UPLOAD_BY_CALLBACK;
}
else
{
upload_type = FDFS_UPLOAD_BY_FILE;
}
}
store_path_index = 0;
{
ConnectionInfo storageServers[FDFS_MAX_SERVERS_EACH_GROUP];
ConnectionInfo *pServer;
ConnectionInfo *pServerEnd;
int storage_count;
if ((result=tracker_query_storage_store_list_without_group( \
pTrackerServer, storageServers, \
FDFS_MAX_SERVERS_EACH_GROUP, &storage_count, \
group_name, &store_path_index)) == 0)
{
printf("tracker_query_storage_store_list_without_group: \n");
pServerEnd = storageServers + storage_count;
for (pServer=storageServers; pServer<pServerEnd; pServer++)
{
printf("\tserver %d. group_name=%s, " \
"ip_addr=%s, port=%d\n", \
(int)(pServer - storageServers) + 1, \
group_name, pServer->ip_addr, pServer->port);
}
printf("\n");
}
}
if ((result=tracker_query_storage_store(pTrackerServer, \
&storageServer, group_name, &store_path_index)) != 0)
{
fdfs_client_destroy();
printf("tracker_query_storage fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
return result;
}
printf("group_name=%s, ip_addr=%s, port=%d\n", \
group_name, storageServer.ip_addr, \
storageServer.port);
if ((pStorageServer=tracker_connect_server(&storageServer, \
&result)) == NULL)
{
fdfs_client_destroy();
return result;
}
memset(&meta_list, 0, sizeof(meta_list));
meta_count = 0;
strcpy(meta_list[meta_count].name, "ext_name");
strcpy(meta_list[meta_count].value, "jpg");
meta_count++;
strcpy(meta_list[meta_count].name, "width");
strcpy(meta_list[meta_count].value, "160");
meta_count++;
strcpy(meta_list[meta_count].name, "height");
strcpy(meta_list[meta_count].value, "80");
meta_count++;
strcpy(meta_list[meta_count].name, "file_size");
strcpy(meta_list[meta_count].value, "115120");
meta_count++;
file_ext_name = fdfs_get_file_ext_name(local_filename);
*group_name = '\0';
if (upload_type == FDFS_UPLOAD_BY_FILE)
{
result = storage_upload_by_filename(pTrackerServer, \
pStorageServer, store_path_index, \
local_filename, file_ext_name, \
meta_list, meta_count, \
group_name, remote_filename);
printf("storage_upload_by_filename\n");
}
else if (upload_type == FDFS_UPLOAD_BY_BUFF)
{
char *file_content;
if ((result=getFileContent(local_filename, \
&file_content, &file_size)) == 0)
{
result = storage_upload_by_filebuff(pTrackerServer, \
pStorageServer, store_path_index, \
file_content, file_size, file_ext_name, \
meta_list, meta_count, \
group_name, remote_filename);
free(file_content);
}
printf("storage_upload_by_filebuff\n");
}
else
{
struct stat stat_buf;
if (stat(local_filename, &stat_buf) == 0 && \
S_ISREG(stat_buf.st_mode))
{
file_size = stat_buf.st_size;
result = storage_upload_by_callback(pTrackerServer, \
pStorageServer, store_path_index, \
uploadFileCallback, local_filename, \
file_size, file_ext_name, \
meta_list, meta_count, \
group_name, remote_filename);
}
printf("storage_upload_by_callback\n");
}
if (result != 0)
{
printf("upload file fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
tracker_close_connection_ex(pStorageServer, true);
fdfs_client_destroy();
return result;
}
if (g_tracker_server_http_port == 80)
{
*szPortPart = '\0';
}
else
{
sprintf(szPortPart, ":%d", g_tracker_server_http_port);
}
sprintf(file_id, "%s/%s", group_name, remote_filename);
url_len = sprintf(file_url, "http://%s%s/%s", \
pStorageServer->ip_addr, szPortPart, file_id);
if (g_anti_steal_token)
{
ts = time(NULL);
fdfs_http_gen_token(&g_anti_steal_secret_key, file_id, \
ts, token);
sprintf(file_url + url_len, "?token=%s&ts=%d", \
token, (int)ts);
}
printf("group_name=%s, remote_filename=%s\n", \
group_name, remote_filename);
fdfs_get_file_info(group_name, remote_filename, &file_info);
printf("source ip address: %s\n", file_info.source_ip_addr);
printf("file timestamp=%s\n", formatDatetime(
file_info.create_timestamp, "%Y-%m-%d %H:%M:%S", \
szDatetime, sizeof(szDatetime)));
printf("file size=%"PRId64"\n", file_info.file_size);
printf("file crc32=%u\n", file_info.crc32);
printf("example file url: %s\n", file_url);
strcpy(master_filename, remote_filename);
*remote_filename = '\0';
if (upload_type == FDFS_UPLOAD_BY_FILE)
{
prefix_name = "_big";
result = storage_upload_slave_by_filename(pTrackerServer,
NULL, local_filename, master_filename, \
prefix_name, file_ext_name, \
meta_list, meta_count, \
group_name, remote_filename);
printf("storage_upload_slave_by_filename\n");
}
else if (upload_type == FDFS_UPLOAD_BY_BUFF)
{
char *file_content;
prefix_name = "1024x1024";
if ((result=getFileContent(local_filename, \
&file_content, &file_size)) == 0)
{
result = storage_upload_slave_by_filebuff(pTrackerServer, \
NULL, file_content, file_size, master_filename,
prefix_name, file_ext_name, \
meta_list, meta_count, \
group_name, remote_filename);
free(file_content);
}
printf("storage_upload_slave_by_filebuff\n");
}
else
{
struct stat stat_buf;
prefix_name = "-small";
if (stat(local_filename, &stat_buf) == 0 && \
S_ISREG(stat_buf.st_mode))
{
file_size = stat_buf.st_size;
result = storage_upload_slave_by_callback(pTrackerServer, \
NULL, uploadFileCallback, local_filename, \
file_size, master_filename, prefix_name, \
file_ext_name, meta_list, meta_count, \
group_name, remote_filename);
}
printf("storage_upload_slave_by_callback\n");
}
if (result != 0)
{
printf("upload slave file fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
tracker_close_connection_ex(pStorageServer, true);
fdfs_client_destroy();
return result;
}
if (g_tracker_server_http_port == 80)
{
*szPortPart = '\0';
}
else
{
sprintf(szPortPart, ":%d", g_tracker_server_http_port);
}
sprintf(file_id, "%s/%s", group_name, remote_filename);
url_len = sprintf(file_url, "http://%s%s/%s", \
pStorageServer->ip_addr, szPortPart, file_id);
if (g_anti_steal_token)
{
ts = time(NULL);
fdfs_http_gen_token(&g_anti_steal_secret_key, file_id, \
ts, token);
sprintf(file_url + url_len, "?token=%s&ts=%d", \
token, (int)ts);
}
printf("group_name=%s, remote_filename=%s\n", \
group_name, remote_filename);
fdfs_get_file_info(group_name, remote_filename, &file_info);
printf("source ip address: %s\n", file_info.source_ip_addr);
printf("file timestamp=%s\n", formatDatetime(
file_info.create_timestamp, "%Y-%m-%d %H:%M:%S", \
szDatetime, sizeof(szDatetime)));
printf("file size=%"PRId64"\n", file_info.file_size);
printf("file crc32=%u\n", file_info.crc32);
printf("example file url: %s\n", file_url);
if (fdfs_gen_slave_filename(master_filename, \
prefix_name, file_ext_name, \
slave_filename, &slave_filename_len) == 0)
{
if (strcmp(remote_filename, slave_filename) != 0)
{
printf("slave_filename=%s\n" \
"remote_filename=%s\n" \
"not equal!\n", \
slave_filename, remote_filename);
}
}
}
else if (strcmp(operation, "download") == 0 ||
strcmp(operation, "getmeta") == 0 ||
strcmp(operation, "setmeta") == 0 ||
strcmp(operation, "query_servers") == 0 ||
strcmp(operation, "delete") == 0)
{
if (argc < 5)
{
printf("Usage: %s <config_file> %s " \
"<group_name> <remote_filename>\n", \
argv[0], operation);
fdfs_client_destroy();
return EINVAL;
}
snprintf(group_name, sizeof(group_name), "%s", argv[3]);
snprintf(remote_filename, sizeof(remote_filename), \
"%s", argv[4]);
if (strcmp(operation, "setmeta") == 0 ||
strcmp(operation, "delete") == 0)
{
result = tracker_query_storage_update(pTrackerServer, \
&storageServer, group_name, remote_filename);
}
else if (strcmp(operation, "query_servers") == 0)
{
ConnectionInfo storageServers[FDFS_MAX_SERVERS_EACH_GROUP];
int server_count;
result = tracker_query_storage_list(pTrackerServer, \
storageServers, FDFS_MAX_SERVERS_EACH_GROUP, \
&server_count, group_name, remote_filename);
if (result != 0)
{
printf("tracker_query_storage_list fail, "\
"group_name=%s, filename=%s, " \
"error no: %d, error info: %s\n", \
group_name, remote_filename, \
result, STRERROR(result));
}
else
{
printf("server list (%d):\n", server_count);
for (i=0; i<server_count; i++)
{
printf("\t%s:%d\n", \
storageServers[i].ip_addr, \
storageServers[i].port);
}
printf("\n");
}
tracker_close_connection_ex(pTrackerServer, result != 0);
fdfs_client_destroy();
return result;
}
else
{
result = tracker_query_storage_fetch(pTrackerServer, \
&storageServer, group_name, remote_filename);
}
if (result != 0)
{
fdfs_client_destroy();
printf("tracker_query_storage_fetch fail, " \
"group_name=%s, filename=%s, " \
"error no: %d, error info: %s\n", \
group_name, remote_filename, \
result, STRERROR(result));
return result;
}
printf("storage=%s:%d\n", storageServer.ip_addr, \
storageServer.port);
if ((pStorageServer=tracker_connect_server(&storageServer, \
&result)) == NULL)
{
fdfs_client_destroy();
return result;
}
if (strcmp(operation, "download") == 0)
{
if (argc >= 6)
{
local_filename = argv[5];
if (strcmp(local_filename, "CALLBACK") == 0)
{
FILE *fp;
fp = fopen(local_filename, "wb");
if (fp == NULL)
{
result = errno != 0 ? errno : EPERM;
printf("open file \"%s\" fail, " \
"errno: %d, error info: %s", \
local_filename, result, \
STRERROR(result));
}
else
{
result = storage_download_file_ex( \
pTrackerServer, pStorageServer, \
group_name, remote_filename, 0, 0, \
writeToFileCallback, fp, &file_size);
fclose(fp);
}
}
else
{
result = storage_download_file_to_file( \
pTrackerServer, pStorageServer, \
group_name, remote_filename, \
local_filename, &file_size);
}
}
else
{
file_buff = NULL;
if ((result=storage_download_file_to_buff( \
pTrackerServer, pStorageServer, \
group_name, remote_filename, \
&file_buff, &file_size)) == 0)
{
local_filename = strrchr( \
remote_filename, '/');
if (local_filename != NULL)
{
local_filename++; //skip /
}
else
{
local_filename=remote_filename;
}
result = writeToFile(local_filename, \
file_buff, file_size);
free(file_buff);
}
}
if (result == 0)
{
printf("download file success, " \
"file size=%"PRId64", file save to %s\n", \
file_size, local_filename);
}
else
{
printf("download file fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
}
}
else if (strcmp(operation, "getmeta") == 0)
{
if ((result=storage_get_metadata(pTrackerServer, \
pStorageServer, group_name, remote_filename, \
&pMetaList, &meta_count)) == 0)
{
printf("get meta data success, " \
"meta count=%d\n", meta_count);
for (i=0; i<meta_count; i++)
{
printf("%s=%s\n", \
pMetaList[i].name, \
pMetaList[i].value);
}
free(pMetaList);
}
else
{
printf("getmeta fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
}
}
else if (strcmp(operation, "setmeta") == 0)
{
if (argc < 7)
{
printf("Usage: %s <config_file> %s " \
"<group_name> <remote_filename> " \
"<op_flag> <metadata_list>\n" \
"\top_flag: %c for overwrite, " \
"%c for merge\n" \
"\tmetadata_list: name1=value1," \
"name2=value2,...\n", \
argv[0], operation, \
STORAGE_SET_METADATA_FLAG_OVERWRITE, \
STORAGE_SET_METADATA_FLAG_MERGE);
fdfs_client_destroy();
return EINVAL;
}
meta_buff = strdup(argv[6]);
if (meta_buff == NULL)
{
printf("Out of memory!\n");
fdfs_client_destroy();
return ENOMEM;
}
pMetaList = fdfs_split_metadata_ex(meta_buff, \
',', '=', &meta_count, &result);
if (pMetaList == NULL)
{
printf("Out of memory!\n");
free(meta_buff);
fdfs_client_destroy();
return ENOMEM;
}
if ((result=storage_set_metadata(pTrackerServer, \
NULL, group_name, remote_filename, \
pMetaList, meta_count, *argv[5])) == 0)
{
printf("set meta data success\n");
}
else
{
printf("setmeta fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
}
free(meta_buff);
free(pMetaList);
}
else if(strcmp(operation, "delete") == 0)
{
if ((result=storage_delete_file(pTrackerServer, \
NULL, group_name, remote_filename)) == 0)
{
printf("delete file success\n");
}
else
{
printf("delete file fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
}
}
}
else
{
fdfs_client_destroy();
printf("invalid operation: %s\n", operation);
return EINVAL;
}
/* for test only */
if ((result=fdfs_active_test(pTrackerServer)) != 0)
{
printf("active_test to tracker server %s:%d fail, errno: %d\n", \
pTrackerServer->ip_addr, pTrackerServer->port, result);
}
/* for test only */
if ((result=fdfs_active_test(pStorageServer)) != 0)
{
printf("active_test to storage server %s:%d fail, errno: %d\n", \
pStorageServer->ip_addr, pStorageServer->port, result);
}
tracker_close_connection_ex(pStorageServer, true);
tracker_close_connection_ex(pTrackerServer, true);
fdfs_client_destroy();
return result;
}

client/test/fdfs_test.c Symbolic link

@@ -0,0 +1 @@
../fdfs_test.c


@@ -1,658 +0,0 @@
/**
* Copyright (C) 2008 Happy Fish / YuQing
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
**/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>
#include "fdfs_client.h"
#include "fdfs_global.h"
#include "fastcommon/base64.h"
#include "fdfs_http_shared.h"
#include "fastcommon/sockopt.h"
#include "fastcommon/logger.h"
int writeToFileCallback(void *arg, const int64_t file_size, const char *data, \
const int current_size)
{
if (arg == NULL)
{
return EINVAL;
}
if (fwrite(data, current_size, 1, (FILE *)arg) != 1)
{
return errno != 0 ? errno : EIO;
}
return 0;
}
int uploadFileCallback(void *arg, const int64_t file_size, int sock)
{
int64_t total_send_bytes;
char *filename;
if (arg == NULL)
{
return EINVAL;
}
filename = (char *)arg;
return tcpsendfile(sock, filename, file_size, \
g_fdfs_network_timeout, &total_send_bytes);
}
int main(int argc, char *argv[])
{
char *conf_filename;
char *local_filename;
ConnectionInfo *pTrackerServer;
ConnectionInfo *pStorageServer;
int result;
ConnectionInfo storageServer;
char group_name[FDFS_GROUP_NAME_MAX_LEN + 1];
FDFSMetaData meta_list[32];
int meta_count;
int i;
FDFSMetaData *pMetaList;
char token[32 + 1];
char file_id[128];
char master_file_id[128];
char file_url[256];
char szDatetime[20];
char szPortPart[16];
int url_len;
time_t ts;
char *file_buff;
int64_t file_size;
char *operation;
char *meta_buff;
int store_path_index;
FDFSFileInfo file_info;
printf("This is FastDFS client test program v%d.%02d\n" \
"\nCopyright (C) 2008, Happy Fish / YuQing\n" \
"\nFastDFS may be copied only under the terms of the GNU General\n" \
"Public License V3, which may be found in the FastDFS source kit.\n" \
"Please visit the FastDFS Home Page http://www.csource.org/ \n" \
"for more detail.\n\n" \
, g_fdfs_version.major, g_fdfs_version.minor);
if (argc < 3)
{
printf("Usage: %s <config_file> <operation>\n" \
"\toperation: upload, download, getmeta, setmeta, " \
"delete and query_servers\n", argv[0]);
return 1;
}
log_init();
g_log_context.log_level = LOG_DEBUG;
conf_filename = argv[1];
operation = argv[2];
if ((result=fdfs_client_init(conf_filename)) != 0)
{
return result;
}
pTrackerServer = tracker_get_connection();
if (pTrackerServer == NULL)
{
fdfs_client_destroy();
return errno != 0 ? errno : ECONNREFUSED;
}
local_filename = NULL;
if (strcmp(operation, "upload") == 0)
{
int upload_type;
char *prefix_name;
const char *file_ext_name;
char slave_file_id[256];
int slave_file_id_len;
if (argc < 4)
{
printf("Usage: %s <config_file> upload " \
"<local_filename> [FILE | BUFF | CALLBACK] \n",\
argv[0]);
fdfs_client_destroy();
return EINVAL;
}
local_filename = argv[3];
if (argc == 4)
{
upload_type = FDFS_UPLOAD_BY_FILE;
}
else
{
if (strcmp(argv[4], "BUFF") == 0)
{
upload_type = FDFS_UPLOAD_BY_BUFF;
}
else if (strcmp(argv[4], "CALLBACK") == 0)
{
upload_type = FDFS_UPLOAD_BY_CALLBACK;
}
else
{
upload_type = FDFS_UPLOAD_BY_FILE;
}
}
{
ConnectionInfo storageServers[FDFS_MAX_SERVERS_EACH_GROUP];
ConnectionInfo *pServer;
ConnectionInfo *pServerEnd;
int storage_count;
strcpy(group_name, "group1");
if ((result=tracker_query_storage_store_list_with_group( \
pTrackerServer, group_name, storageServers, \
FDFS_MAX_SERVERS_EACH_GROUP, &storage_count, \
&store_path_index)) == 0)
{
printf("tracker_query_storage_store_list_with_group: \n");
pServerEnd = storageServers + storage_count;
for (pServer=storageServers; pServer<pServerEnd; pServer++)
{
printf("\tserver %d. group_name=%s, " \
"ip_addr=%s, port=%d\n", \
(int)(pServer - storageServers) + 1, \
group_name, pServer->ip_addr, \
pServer->port);
}
printf("\n");
}
}
*group_name = '\0';
if ((result=tracker_query_storage_store(pTrackerServer, \
&storageServer, group_name, &store_path_index)) != 0)
{
fdfs_client_destroy();
printf("tracker_query_storage fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
return result;
}
printf("group_name=%s, ip_addr=%s, port=%d\n", \
group_name, storageServer.ip_addr, \
storageServer.port);
if ((pStorageServer=tracker_connect_server(&storageServer, \
&result)) == NULL)
{
fdfs_client_destroy();
return result;
}
memset(&meta_list, 0, sizeof(meta_list));
meta_count = 0;
strcpy(meta_list[meta_count].name, "ext_name");
strcpy(meta_list[meta_count].value, "jpg");
meta_count++;
strcpy(meta_list[meta_count].name, "width");
strcpy(meta_list[meta_count].value, "160");
meta_count++;
strcpy(meta_list[meta_count].name, "height");
strcpy(meta_list[meta_count].value, "80");
meta_count++;
strcpy(meta_list[meta_count].name, "file_size");
strcpy(meta_list[meta_count].value, "115120");
meta_count++;
file_ext_name = fdfs_get_file_ext_name(local_filename);
strcpy(group_name, "");
if (upload_type == FDFS_UPLOAD_BY_FILE)
{
printf("storage_upload_by_filename\n");
result = storage_upload_by_filename1(pTrackerServer, \
pStorageServer, store_path_index, \
local_filename, file_ext_name, \
meta_list, meta_count, \
group_name, file_id);
}
else if (upload_type == FDFS_UPLOAD_BY_BUFF)
{
char *file_content;
printf("storage_upload_by_filebuff\n");
if ((result=getFileContent(local_filename, \
&file_content, &file_size)) == 0)
{
result = storage_upload_by_filebuff1(pTrackerServer, \
pStorageServer, store_path_index, \
file_content, file_size, file_ext_name, \
meta_list, meta_count, \
group_name, file_id);
free(file_content);
}
}
else
{
struct stat stat_buf;
printf("storage_upload_by_callback\n");
if (stat(local_filename, &stat_buf) == 0 && \
S_ISREG(stat_buf.st_mode))
{
file_size = stat_buf.st_size;
result = storage_upload_by_callback1(pTrackerServer, \
pStorageServer, store_path_index, \
uploadFileCallback, local_filename, \
file_size, file_ext_name, \
meta_list, meta_count, \
group_name, file_id);
}
}
if (result != 0)
{
printf("upload file fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
tracker_close_connection_ex(pStorageServer, true);
fdfs_client_destroy();
return result;
}
if (g_tracker_server_http_port == 80)
{
*szPortPart = '\0';
}
else
{
sprintf(szPortPart, ":%d", g_tracker_server_http_port);
}
url_len = sprintf(file_url, "http://%s%s/%s", \
pStorageServer->ip_addr, szPortPart, file_id);
if (g_anti_steal_token)
{
ts = time(NULL);
fdfs_http_gen_token(&g_anti_steal_secret_key, \
file_id, ts, token);
sprintf(file_url + url_len, "?token=%s&ts=%d", \
token, (int)ts);
}
fdfs_get_file_info1(file_id, &file_info);
printf("source ip address: %s\n", file_info.source_ip_addr);
printf("file timestamp=%s\n", formatDatetime(
file_info.create_timestamp, "%Y-%m-%d %H:%M:%S", \
szDatetime, sizeof(szDatetime)));
printf("file size=%"PRId64"\n", file_info.file_size);
printf("file crc32=%u\n", file_info.crc32);
printf("example file url: %s\n", file_url);
strcpy(master_file_id, file_id);
*file_id = '\0';
if (upload_type == FDFS_UPLOAD_BY_FILE)
{
prefix_name = "_big";
printf("storage_upload_slave_by_filename\n");
result = storage_upload_slave_by_filename1( \
pTrackerServer, NULL, \
local_filename, master_file_id, \
prefix_name, file_ext_name, \
meta_list, meta_count, file_id);
}
else if (upload_type == FDFS_UPLOAD_BY_BUFF)
{
char *file_content;
prefix_name = "1024x1024";
printf("storage_upload_slave_by_filebuff\n");
if ((result=getFileContent(local_filename, \
&file_content, &file_size)) == 0)
{
result = storage_upload_slave_by_filebuff1( \
pTrackerServer, NULL, file_content, file_size, \
master_file_id, prefix_name, file_ext_name, \
meta_list, meta_count, file_id);
free(file_content);
}
}
else
{
struct stat stat_buf;
prefix_name = "_small";
printf("storage_upload_slave_by_callback\n");
if (stat(local_filename, &stat_buf) == 0 && \
S_ISREG(stat_buf.st_mode))
{
file_size = stat_buf.st_size;
result = storage_upload_slave_by_callback1( \
pTrackerServer, NULL, \
uploadFileCallback, local_filename, \
file_size, master_file_id, \
prefix_name, file_ext_name, \
meta_list, meta_count, file_id);
}
}
if (result != 0)
{
printf("upload slave file fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
tracker_close_connection_ex(pStorageServer, true);
fdfs_client_destroy();
return result;
}
if (g_tracker_server_http_port == 80)
{
*szPortPart = '\0';
}
else
{
sprintf(szPortPart, ":%d", g_tracker_server_http_port);
}
url_len = sprintf(file_url, "http://%s%s/%s", \
pStorageServer->ip_addr, szPortPart, file_id);
if (g_anti_steal_token)
{
ts = time(NULL);
fdfs_http_gen_token(&g_anti_steal_secret_key, \
file_id, ts, token);
sprintf(file_url + url_len, "?token=%s&ts=%d", \
token, (int)ts);
}
fdfs_get_file_info1(file_id, &file_info);
printf("source ip address: %s\n", file_info.source_ip_addr);
printf("file timestamp=%s\n", formatDatetime(
file_info.create_timestamp, "%Y-%m-%d %H:%M:%S", \
szDatetime, sizeof(szDatetime)));
printf("file size=%"PRId64"\n", file_info.file_size);
printf("file crc32=%u\n", file_info.crc32);
printf("example file url: %s\n", file_url);
if (fdfs_gen_slave_filename(master_file_id, \
prefix_name, file_ext_name, \
slave_file_id, &slave_file_id_len) == 0)
{
if (strcmp(file_id, slave_file_id) != 0)
{
printf("slave_file_id=%s\n" \
"file_id=%s\n" \
"not equal!\n", \
slave_file_id, file_id);
}
}
}
else if (strcmp(operation, "download") == 0 ||
strcmp(operation, "getmeta") == 0 ||
strcmp(operation, "setmeta") == 0 ||
strcmp(operation, "query_servers") == 0 ||
strcmp(operation, "delete") == 0)
{
if (argc < 4)
{
printf("Usage: %s <config_file> %s " \
"<file_id>\n", \
argv[0], operation);
fdfs_client_destroy();
return EINVAL;
}
snprintf(file_id, sizeof(file_id), "%s", argv[3]);
if (strcmp(operation, "query_servers") == 0)
{
ConnectionInfo storageServers[FDFS_MAX_SERVERS_EACH_GROUP];
int server_count;
result = tracker_query_storage_list1(pTrackerServer, \
storageServers, FDFS_MAX_SERVERS_EACH_GROUP, \
&server_count, file_id);
if (result != 0)
{
printf("tracker_query_storage_list1 fail, "\
"file_id=%s, " \
"error no: %d, error info: %s\n", \
file_id, result, STRERROR(result));
}
else
{
printf("server list (%d):\n", server_count);
for (i=0; i<server_count; i++)
{
printf("\t%s:%d\n", \
storageServers[i].ip_addr, \
storageServers[i].port);
}
printf("\n");
}
tracker_close_connection_ex(pTrackerServer, true);
fdfs_client_destroy();
return result;
}
if ((result=tracker_query_storage_fetch1(pTrackerServer, \
&storageServer, file_id)) != 0)
{
fdfs_client_destroy();
printf("tracker_query_storage_fetch fail, " \
"file_id=%s, " \
"error no: %d, error info: %s\n", \
file_id, result, STRERROR(result));
return result;
}
printf("storage=%s:%d\n", storageServer.ip_addr, \
storageServer.port);
if ((pStorageServer=tracker_connect_server(&storageServer, \
&result)) == NULL)
{
fdfs_client_destroy();
return result;
}
if (strcmp(operation, "download") == 0)
{
if (argc >= 5)
{
local_filename = argv[4];
if (strcmp(local_filename, "CALLBACK") == 0)
{
FILE *fp;
fp = fopen(local_filename, "wb");
if (fp == NULL)
{
result = errno != 0 ? errno : EPERM;
printf("open file \"%s\" fail, " \
"errno: %d, error info: %s", \
local_filename, result, \
STRERROR(result));
}
else
{
result = storage_download_file_ex1( \
pTrackerServer, pStorageServer, \
file_id, 0, 0, \
writeToFileCallback, fp, &file_size);
fclose(fp);
}
}
else
{
result = storage_download_file_to_file1( \
pTrackerServer, pStorageServer, \
file_id, \
local_filename, &file_size);
}
}
else
{
file_buff = NULL;
if ((result=storage_download_file_to_buff1( \
pTrackerServer, pStorageServer, \
file_id, \
&file_buff, &file_size)) == 0)
{
local_filename = strrchr( \
file_id, '/');
if (local_filename != NULL)
{
local_filename++; //skip /
}
else
{
local_filename=file_id;
}
result = writeToFile(local_filename, \
file_buff, file_size);
free(file_buff);
}
}
if (result == 0)
{
printf("download file success, " \
"file size=%"PRId64", file save to %s\n", \
file_size, local_filename);
}
else
{
printf("download file fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
}
}
else if (strcmp(operation, "getmeta") == 0)
{
if ((result=storage_get_metadata1(pTrackerServer, \
NULL, file_id, \
&pMetaList, &meta_count)) == 0)
{
printf("get meta data success, " \
"meta count=%d\n", meta_count);
for (i=0; i<meta_count; i++)
{
printf("%s=%s\n", \
pMetaList[i].name, \
pMetaList[i].value);
}
free(pMetaList);
}
else
{
printf("getmeta fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
}
}
else if (strcmp(operation, "setmeta") == 0)
{
if (argc < 6)
{
printf("Usage: %s <config_file> %s " \
"<file_id> " \
"<op_flag> <metadata_list>\n" \
"\top_flag: %c for overwrite, " \
"%c for merge\n" \
"\tmetadata_list: name1=value1," \
"name2=value2,...\n", \
argv[0], operation, \
STORAGE_SET_METADATA_FLAG_OVERWRITE, \
STORAGE_SET_METADATA_FLAG_MERGE);
fdfs_client_destroy();
return EINVAL;
}
meta_buff = strdup(argv[5]);
if (meta_buff == NULL)
{
printf("Out of memory!\n");
fdfs_client_destroy();
return ENOMEM;
}
pMetaList = fdfs_split_metadata_ex(meta_buff, \
',', '=', &meta_count, &result);
if (pMetaList == NULL)
{
printf("Out of memory!\n");
free(meta_buff);
fdfs_client_destroy();
return ENOMEM;
}
if ((result=storage_set_metadata1(pTrackerServer, \
NULL, file_id, \
pMetaList, meta_count, *argv[4])) == 0)
{
printf("set meta data success\n");
}
else
{
printf("setmeta fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
}
free(meta_buff);
free(pMetaList);
}
else if(strcmp(operation, "delete") == 0)
{
if ((result=storage_delete_file1(pTrackerServer, \
NULL, file_id)) == 0)
{
printf("delete file success\n");
}
else
{
printf("delete file fail, " \
"error no: %d, error info: %s\n", \
result, STRERROR(result));
}
}
}
else
{
fdfs_client_destroy();
printf("invalid operation: %s\n", operation);
return EINVAL;
}
/* for test only */
if ((result=fdfs_active_test(pTrackerServer)) != 0)
{
printf("active_test to tracker server %s:%d fail, errno: %d\n", \
pTrackerServer->ip_addr, pTrackerServer->port, result);
}
/* for test only */
if ((result=fdfs_active_test(pStorageServer)) != 0)
{
printf("active_test to storage server %s:%d fail, errno: %d\n", \
pStorageServer->ip_addr, pStorageServer->port, result);
}
tracker_close_connection_ex(pStorageServer, true);
tracker_close_connection_ex(pTrackerServer, true);
fdfs_client_destroy();
return result;
}

client/test/fdfs_test1.c Symbolic link

@@ -0,0 +1 @@
../fdfs_test1.c


@@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
@@ -249,6 +249,7 @@ int tracker_list_servers(ConnectionInfo *pTrackerServer, \
{
char out_buff[sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN + \
IP_ADDRESS_SIZE];
char formatted_ip[FORMATTED_IP_SIZE];
bool new_connection;
TrackerHeader *pHeader;
ConnectionInfo *conn;
@@ -293,17 +294,16 @@ int tracker_list_servers(ConnectionInfo *pTrackerServer, \
long2buff(FDFS_GROUP_NAME_MAX_LEN + id_len, pHeader->pkg_len);
pHeader->cmd = TRACKER_PROTO_CMD_SERVER_LIST_STORAGE;
if ((result=tcpsenddata_nb(conn->sock, out_buff, \
sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN + id_len, \
g_fdfs_network_timeout)) != 0)
{
logError("file: "__FILE__", line: %d, " \
"send data to tracker server %s:%d fail, " \
"errno: %d, error info: %s", __LINE__, \
pTrackerServer->ip_addr, \
pTrackerServer->port, \
result, STRERROR(result));
}
if ((result=tcpsenddata_nb(conn->sock, out_buff,
sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN + id_len,
SF_G_NETWORK_TIMEOUT)) != 0)
{
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"send data to tracker server %s:%u fail, errno: %d, "
"error info: %s", __LINE__, formatted_ip,
pTrackerServer->port, result, STRERROR(result));
}
else
{
pInBuff = (char *)stats;
@@ -330,10 +330,10 @@ int tracker_list_servers(ConnectionInfo *pTrackerServer, \
if (in_bytes % sizeof(TrackerStorageStat) != 0)
{
logError("file: "__FILE__", line: %d, " \
"tracker server %s:%d response data " \
"length: %"PRId64" is invalid", \
__LINE__, pTrackerServer->ip_addr, \
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u response data length: %"PRId64
" is invalid", __LINE__, formatted_ip,
pTrackerServer->port, in_bytes);
*storage_count = 0;
return EINVAL;
@@ -342,11 +342,12 @@ int tracker_list_servers(ConnectionInfo *pTrackerServer, \
*storage_count = in_bytes / sizeof(TrackerStorageStat);
if (*storage_count > max_storages)
{
logError("file: "__FILE__", line: %d, " \
"tracker server %s:%d insufficent space, " \
"max storage count: %d, expect count: %d", \
__LINE__, pTrackerServer->ip_addr, \
pTrackerServer->port, max_storages, *storage_count);
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u insufficient space, "
"max storage count: %d, expect count: %d",
__LINE__, formatted_ip, pTrackerServer->port,
max_storages, *storage_count);
*storage_count = 0;
return ENOSPC;
}
@@ -484,6 +485,7 @@ int tracker_list_one_group(ConnectionInfo *pTrackerServer, \
ConnectionInfo *conn;
bool new_connection;
char out_buff[sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN];
char formatted_ip[FORMATTED_IP_SIZE];
TrackerGroupStat src;
char *pInBuff;
int result;
@@ -497,15 +499,14 @@ int tracker_list_one_group(ConnectionInfo *pTrackerServer, \
sizeof(TrackerHeader), "%s", group_name);
pHeader->cmd = TRACKER_PROTO_CMD_SERVER_LIST_ONE_GROUP;
long2buff(FDFS_GROUP_NAME_MAX_LEN, pHeader->pkg_len);
if ((result=tcpsenddata_nb(conn->sock, out_buff, \
sizeof(out_buff), g_fdfs_network_timeout)) != 0)
if ((result=tcpsenddata_nb(conn->sock, out_buff,
sizeof(out_buff), SF_G_NETWORK_TIMEOUT)) != 0)
{
logError("file: "__FILE__", line: %d, " \
"send data to tracker server %s:%d fail, " \
"errno: %d, error info: %s", __LINE__, \
pTrackerServer->ip_addr, \
pTrackerServer->port, \
result, STRERROR(result));
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"send data to tracker server %s:%u fail, errno: %d, "
"error info: %s", __LINE__, formatted_ip,
pTrackerServer->port, result, STRERROR(result));
}
else
{
@@ -532,10 +533,10 @@ int tracker_list_one_group(ConnectionInfo *pTrackerServer, \
if (in_bytes != sizeof(TrackerGroupStat))
{
logError("file: "__FILE__", line: %d, " \
"tracker server %s:%d response data " \
"length: %"PRId64" is invalid", \
__LINE__, pTrackerServer->ip_addr, \
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u response data length: %"PRId64" "
"is invalid", __LINE__, formatted_ip,
pTrackerServer->port, in_bytes);
return EINVAL;
}
@@ -569,6 +570,7 @@ int tracker_list_groups(ConnectionInfo *pTrackerServer, \
TrackerGroupStat *pSrc;
TrackerGroupStat *pEnd;
FDFSGroupStat *pDest;
char formatted_ip[FORMATTED_IP_SIZE];
int result;
int64_t in_bytes;
@@ -577,15 +579,14 @@ int tracker_list_groups(ConnectionInfo *pTrackerServer, \
memset(&header, 0, sizeof(header));
header.cmd = TRACKER_PROTO_CMD_SERVER_LIST_ALL_GROUPS;
header.status = 0;
if ((result=tcpsenddata_nb(conn->sock, &header, \
sizeof(header), g_fdfs_network_timeout)) != 0)
if ((result=tcpsenddata_nb(conn->sock, &header,
sizeof(header), SF_G_NETWORK_TIMEOUT)) != 0)
{
logError("file: "__FILE__", line: %d, " \
"send data to tracker server %s:%d fail, " \
"errno: %d, error info: %s", __LINE__, \
pTrackerServer->ip_addr, \
pTrackerServer->port, \
result, STRERROR(result));
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"send data to tracker server %s:%u fail, errno: %d, "
"error info: %s", __LINE__, formatted_ip,
pTrackerServer->port, result, STRERROR(result));
}
else
{
@@ -613,10 +614,10 @@ int tracker_list_groups(ConnectionInfo *pTrackerServer, \
if (in_bytes % sizeof(TrackerGroupStat) != 0)
{
logError("file: "__FILE__", line: %d, " \
"tracker server %s:%d response data " \
"length: %"PRId64" is invalid", \
__LINE__, pTrackerServer->ip_addr, \
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u response data length: %"PRId64" "
"is invalid", __LINE__, formatted_ip,
pTrackerServer->port, in_bytes);
*group_count = 0;
return EINVAL;
@@ -625,11 +626,12 @@ int tracker_list_groups(ConnectionInfo *pTrackerServer, \
*group_count = in_bytes / sizeof(TrackerGroupStat);
if (*group_count > max_groups)
{
logError("file: "__FILE__", line: %d, " \
"tracker server %s:%d insufficent space, " \
"max group count: %d, expect count: %d", \
__LINE__, pTrackerServer->ip_addr, \
pTrackerServer->port, max_groups, *group_count);
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u insufficient space, "
"max group count: %d, expect count: %d",
__LINE__, formatted_ip, pTrackerServer->port,
max_groups, *group_count);
*group_count = 0;
return ENOSPC;
}
@@ -672,6 +674,7 @@ int tracker_do_query_storage(ConnectionInfo *pTrackerServer, \
bool new_connection;
char out_buff[sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN + 128];
char in_buff[sizeof(TrackerHeader) + TRACKER_QUERY_STORAGE_FETCH_BODY_LEN];
char formatted_ip[FORMATTED_IP_SIZE];
char *pInBuff;
int64_t in_bytes;
int result;
@@ -693,16 +696,15 @@ int tracker_do_query_storage(ConnectionInfo *pTrackerServer, \
long2buff(FDFS_GROUP_NAME_MAX_LEN + filename_len, pHeader->pkg_len);
pHeader->cmd = cmd;
if ((result=tcpsenddata_nb(conn->sock, out_buff, \
if ((result=tcpsenddata_nb(conn->sock, out_buff,
sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN +
filename_len, g_fdfs_network_timeout)) != 0)
filename_len, SF_G_NETWORK_TIMEOUT)) != 0)
{
logError("file: "__FILE__", line: %d, " \
"send data to tracker server %s:%d fail, " \
"errno: %d, error info: %s", __LINE__, \
pTrackerServer->ip_addr, \
pTrackerServer->port, \
result, STRERROR(result));
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"send data to tracker server %s:%u fail, errno: %d, "
"error info: %s", __LINE__, formatted_ip,
pTrackerServer->port, result, STRERROR(result));
}
else
{
@@ -729,12 +731,11 @@ int tracker_do_query_storage(ConnectionInfo *pTrackerServer, \
if (in_bytes != TRACKER_QUERY_STORAGE_FETCH_BODY_LEN)
{
logError("file: "__FILE__", line: %d, " \
"tracker server %s:%d response data " \
"length: %"PRId64" is invalid, " \
"expect length: %d", __LINE__, \
pTrackerServer->ip_addr, \
pTrackerServer->port, in_bytes, \
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u response data length: %"PRId64" "
"is invalid, expect length: %d", __LINE__,
formatted_ip, pTrackerServer->port, in_bytes,
TRACKER_QUERY_STORAGE_FETCH_BODY_LEN);
return EINVAL;
}
@@ -759,6 +760,7 @@ int tracker_query_storage_list(ConnectionInfo *pTrackerServer, \
char in_buff[sizeof(TrackerHeader) + \
TRACKER_QUERY_STORAGE_FETCH_BODY_LEN + \
FDFS_MAX_SERVERS_EACH_GROUP * IP_ADDRESS_SIZE];
char formatted_ip[FORMATTED_IP_SIZE];
char *pInBuff;
int64_t in_bytes;
int result;
@@ -779,14 +781,13 @@ int tracker_query_storage_list(ConnectionInfo *pTrackerServer, \
pHeader->cmd = TRACKER_PROTO_CMD_SERVICE_QUERY_FETCH_ALL;
if ((result=tcpsenddata_nb(conn->sock, out_buff, \
sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN +
filename_len, g_fdfs_network_timeout)) != 0)
filename_len, SF_G_NETWORK_TIMEOUT)) != 0)
{
logError("file: "__FILE__", line: %d, " \
"send data to tracker server %s:%d fail, " \
"errno: %d, error info: %s", __LINE__, \
pTrackerServer->ip_addr, \
pTrackerServer->port, \
result, STRERROR(result));
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"send data to tracker server %s:%u fail, "
"errno: %d, error info: %s", __LINE__, formatted_ip,
pTrackerServer->port, result, STRERROR(result));
}
else
{
@@ -814,10 +815,10 @@ int tracker_query_storage_list(ConnectionInfo *pTrackerServer, \
if ((in_bytes - TRACKER_QUERY_STORAGE_FETCH_BODY_LEN) % \
(IP_ADDRESS_SIZE - 1) != 0)
{
logError("file: "__FILE__", line: %d, " \
"tracker server %s:%d response data " \
"length: %"PRId64" is invalid", \
__LINE__, pTrackerServer->ip_addr, \
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u response data length: %"PRId64" "
"is invalid", __LINE__, formatted_ip,
pTrackerServer->port, in_bytes);
return EINVAL;
}
@@ -826,10 +827,11 @@ int tracker_query_storage_list(ConnectionInfo *pTrackerServer, \
(IP_ADDRESS_SIZE - 1);
if (nMaxServerCount < *server_count)
{
logError("file: "__FILE__", line: %d, " \
"tracker server %s:%d response storage server " \
"count: %d, exceeds max server count: %d!", __LINE__, \
pTrackerServer->ip_addr, pTrackerServer->port, \
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u response storage server "
"count: %d, exceeds max server count: %d!", __LINE__,
formatted_ip, pTrackerServer->port,
*server_count, nMaxServerCount);
return ENOSPC;
}
@@ -864,6 +866,7 @@ int tracker_query_storage_store_without_group(ConnectionInfo *pTrackerServer,
TrackerHeader header;
char in_buff[sizeof(TrackerHeader) + \
TRACKER_QUERY_STORAGE_STORE_BODY_LEN];
char formatted_ip[FORMATTED_IP_SIZE];
bool new_connection;
ConnectionInfo *conn;
char *pInBuff;
@ -878,14 +881,13 @@ int tracker_query_storage_store_without_group(ConnectionInfo *pTrackerServer,
memset(&header, 0, sizeof(header));
header.cmd = TRACKER_PROTO_CMD_SERVICE_QUERY_STORE_WITHOUT_GROUP_ONE;
if ((result=tcpsenddata_nb(conn->sock, &header,
sizeof(header), SF_G_NETWORK_TIMEOUT)) != 0)
{
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"send data to tracker server %s:%u fail, "
"errno: %d, error info: %s", __LINE__, formatted_ip,
pTrackerServer->port, result, STRERROR(result));
}
else
{
@ -912,12 +914,12 @@ int tracker_query_storage_store_without_group(ConnectionInfo *pTrackerServer,
if (in_bytes != TRACKER_QUERY_STORAGE_STORE_BODY_LEN)
{
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u response data length: %"PRId64" "
"is invalid, expect length: %d", __LINE__,
formatted_ip, pTrackerServer->port, in_bytes,
TRACKER_QUERY_STORAGE_STORE_BODY_LEN);
return EINVAL;
}
@ -943,6 +945,7 @@ int tracker_query_storage_store_with_group(ConnectionInfo *pTrackerServer, \
char out_buff[sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN];
char in_buff[sizeof(TrackerHeader) + \
TRACKER_QUERY_STORAGE_STORE_BODY_LEN];
char formatted_ip[FORMATTED_IP_SIZE];
char *pInBuff;
int64_t in_bytes;
int result;
@ -959,16 +962,15 @@ int tracker_query_storage_store_with_group(ConnectionInfo *pTrackerServer, \
long2buff(FDFS_GROUP_NAME_MAX_LEN, pHeader->pkg_len);
pHeader->cmd = TRACKER_PROTO_CMD_SERVICE_QUERY_STORE_WITH_GROUP_ONE;
if ((result=tcpsenddata_nb(conn->sock, out_buff,
sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN,
SF_G_NETWORK_TIMEOUT)) != 0)
{
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"send data to tracker server %s:%u fail, "
"errno: %d, error info: %s", __LINE__, formatted_ip,
pTrackerServer->port, result, STRERROR(result));
}
else
{
@ -995,11 +997,11 @@ int tracker_query_storage_store_with_group(ConnectionInfo *pTrackerServer, \
if (in_bytes != TRACKER_QUERY_STORAGE_STORE_BODY_LEN)
{
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u response data "
"length: %"PRId64" is invalid, expect length: %d",
__LINE__, formatted_ip, pTrackerServer->port,
in_bytes, TRACKER_QUERY_STORAGE_STORE_BODY_LEN);
return EINVAL;
}
@ -1027,6 +1029,7 @@ int tracker_query_storage_store_list_with_group( \
char out_buff[sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN];
char in_buff[sizeof(TrackerHeader) + FDFS_MAX_SERVERS_EACH_GROUP * \
TRACKER_QUERY_STORAGE_STORE_BODY_LEN];
char formatted_ip[FORMATTED_IP_SIZE];
char returned_group_name[FDFS_GROUP_NAME_MAX_LEN + 1];
char *pInBuff;
char *p;
@ -1055,15 +1058,14 @@ int tracker_query_storage_store_list_with_group( \
}
long2buff(out_len, pHeader->pkg_len);
if ((result=tcpsenddata_nb(conn->sock, out_buff,
sizeof(TrackerHeader) + out_len, SF_G_NETWORK_TIMEOUT)) != 0)
{
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"send data to tracker server %s:%u fail, "
"errno: %d, error info: %s", __LINE__, formatted_ip,
pTrackerServer->port, result, STRERROR(result));
}
else
{
@ -1090,11 +1092,11 @@ int tracker_query_storage_store_list_with_group( \
if (in_bytes < TRACKER_QUERY_STORAGE_STORE_BODY_LEN)
{
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u response data "
"length: %"PRId64" is invalid, expect length >= %d",
__LINE__, formatted_ip, pTrackerServer->port,
in_bytes, TRACKER_QUERY_STORAGE_STORE_BODY_LEN);
return EINVAL;
}
@ -1104,22 +1106,23 @@ int tracker_query_storage_store_list_with_group( \
ipPortsLen = in_bytes - (FDFS_GROUP_NAME_MAX_LEN + 1);
if (ipPortsLen % RECORD_LENGTH != 0)
{
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u response data "
"length: %"PRId64" is invalid", __LINE__,
formatted_ip, pTrackerServer->port, in_bytes);
return EINVAL;
}
*storage_count = ipPortsLen / RECORD_LENGTH;
if (nMaxServerCount < *storage_count)
{
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u response storage server "
"count: %d, exceeds max server count: %d!",
__LINE__, formatted_ip, pTrackerServer->port,
*storage_count, nMaxServerCount);
return ENOSPC;
}
@ -1157,6 +1160,7 @@ int tracker_delete_storage(TrackerServerGroup *pTrackerGroup, \
FDFSStorageInfo storage_infos[1];
char out_buff[sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN + \
FDFS_STORAGE_ID_MAX_SIZE];
char formatted_ip[FORMATTED_IP_SIZE];
char in_buff[1];
char *pInBuff;
int64_t in_bytes;
@ -1226,13 +1230,13 @@ int tracker_delete_storage(TrackerServerGroup *pTrackerGroup, \
if ((result=tcpsenddata_nb(conn->sock, out_buff,
sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN +
storage_id_len, SF_G_NETWORK_TIMEOUT)) != 0)
{
format_ip_address(conn->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"send data to tracker server %s:%u fail, "
"errno: %d, error info: %s", __LINE__, formatted_ip,
conn->port, result, STRERROR(result));
}
else
{
@ -1280,6 +1284,7 @@ int tracker_delete_group(TrackerServerGroup *pTrackerGroup, \
TrackerServerInfo *pServer;
TrackerServerInfo *pEnd;
char out_buff[sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN];
char formatted_ip[FORMATTED_IP_SIZE];
char in_buff[1];
char *pInBuff;
int64_t in_bytes;
@ -1306,13 +1311,13 @@ int tracker_delete_group(TrackerServerGroup *pTrackerGroup, \
if ((result=tcpsenddata_nb(conn->sock, out_buff,
sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN,
SF_G_NETWORK_TIMEOUT)) != 0)
{
format_ip_address(conn->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"send data to tracker server %s:%u fail, "
"errno: %d, error info: %s", __LINE__, formatted_ip,
conn->port, result, STRERROR(result));
break;
}
@ -1343,6 +1348,7 @@ int tracker_set_trunk_server(TrackerServerGroup *pTrackerGroup, \
char out_buff[sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN + \
FDFS_STORAGE_ID_MAX_SIZE];
char in_buff[FDFS_STORAGE_ID_MAX_SIZE];
char formatted_ip[FORMATTED_IP_SIZE];
char *pInBuff;
int64_t in_bytes;
int result;
@ -1380,15 +1386,15 @@ int tracker_set_trunk_server(TrackerServerGroup *pTrackerGroup, \
continue;
}
if ((result=tcpsenddata_nb(conn->sock, out_buff,
sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN +
storage_id_len, SF_G_NETWORK_TIMEOUT)) != 0)
{
format_ip_address(conn->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"send data to tracker server %s:%u fail, "
"errno: %d, error info: %s", __LINE__, formatted_ip,
conn->port, result, STRERROR(result));
tracker_close_connection_ex(conn, true);
continue;
@ -1429,8 +1435,8 @@ int tracker_set_trunk_server(TrackerServerGroup *pTrackerGroup, \
return result;
}
int tracker_get_storage_status(ConnectionInfo *pTrackerServer,
const char *group_name, const char *ip_addr,
FDFSStorageBrief *pDestBuff)
{
TrackerHeader *pHeader;
@ -1438,6 +1444,7 @@ int tracker_get_storage_status(ConnectionInfo *pTrackerServer, \
bool new_connection;
char out_buff[sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN + \
IP_ADDRESS_SIZE];
char formatted_ip[FORMATTED_IP_SIZE];
char *pInBuff;
char *p;
int result;
@ -1467,15 +1474,14 @@ int tracker_get_storage_status(ConnectionInfo *pTrackerServer, \
}
pHeader->cmd = TRACKER_PROTO_CMD_STORAGE_GET_STATUS;
long2buff(FDFS_GROUP_NAME_MAX_LEN + ip_len, pHeader->pkg_len);
if ((result=tcpsenddata_nb(conn->sock, out_buff,
p - out_buff, SF_G_NETWORK_TIMEOUT)) != 0)
{
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"send data to tracker server %s:%u fail, "
"errno: %d, error info: %s", __LINE__, formatted_ip,
pTrackerServer->port, result, STRERROR(result));
}
else
{
@ -1502,11 +1508,11 @@ int tracker_get_storage_status(ConnectionInfo *pTrackerServer, \
if (in_bytes != sizeof(FDFSStorageBrief))
{
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u response data "
"length: %"PRId64" is invalid", __LINE__,
formatted_ip, pTrackerServer->port, in_bytes);
return EINVAL;
}
@ -1522,6 +1528,7 @@ int tracker_get_storage_id(ConnectionInfo *pTrackerServer, \
bool new_connection;
char out_buff[sizeof(TrackerHeader) + FDFS_GROUP_NAME_MAX_LEN + \
IP_ADDRESS_SIZE];
char formatted_ip[FORMATTED_IP_SIZE];
char *p;
int result;
int ip_len;
@ -1555,15 +1562,14 @@ int tracker_get_storage_id(ConnectionInfo *pTrackerServer, \
}
pHeader->cmd = TRACKER_PROTO_CMD_STORAGE_GET_SERVER_ID;
long2buff(FDFS_GROUP_NAME_MAX_LEN + ip_len, pHeader->pkg_len);
if ((result=tcpsenddata_nb(conn->sock, out_buff,
p - out_buff, SF_G_NETWORK_TIMEOUT)) != 0)
{
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"send data to tracker server %s:%u fail, "
"errno: %d, error info: %s", __LINE__, formatted_ip,
pTrackerServer->port, result, STRERROR(result));
}
else
{
@ -1589,10 +1595,10 @@ int tracker_get_storage_id(ConnectionInfo *pTrackerServer, \
if (in_bytes == 0 || in_bytes >= FDFS_STORAGE_ID_MAX_SIZE)
{
format_ip_address(pTrackerServer->ip_addr, formatted_ip);
logError("file: "__FILE__", line: %d, "
"tracker server %s:%u response data length: %"PRId64" "
"is invalid", __LINE__, formatted_ip,
pTrackerServer->port, in_bytes);
return EINVAL;
}
@ -1654,4 +1660,3 @@ int tracker_get_storage_max_status(TrackerServerGroup *pTrackerGroup, \
return 0;
}


@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#ifndef TRACKER_CLIENT_H
@ -26,8 +26,8 @@ typedef struct
char src_id[FDFS_STORAGE_ID_MAX_SIZE]; //src storage id
char domain_name[FDFS_DOMAIN_NAME_MAX_SIZE]; //http domain name
char version[FDFS_VERSION_SIZE];
int64_t total_mb; //total disk storage in MB
int64_t free_mb; //free disk storage in MB
int upload_priority; //upload priority
time_t join_time; //storage join timestamp (create timestamp)
time_t up_time; //storage service started timestamp


@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
//fdfs_define.h
@ -14,7 +14,7 @@
#include <pthread.h>
#include "fastcommon/common_define.h"
#define FDFS_TRACKER_SERVER_DEF_PORT 22122
#define FDFS_STORAGE_SERVER_DEF_PORT 23000
#define FDFS_DEF_STORAGE_RESERVED_MB 1024
#define TRACKER_ERROR_LOG_FILENAME "trackerd"


@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <time.h>
@ -20,13 +20,11 @@
#include "fastcommon/logger.h"
#include "fdfs_global.h"
char g_fdfs_base_path[MAX_PATH_SIZE] = {'/', 't', 'm', 'p', '\0'};
Version g_fdfs_version = {6, 12, 2};
bool g_use_connection_pool = false;
ConnectionPool g_connection_pool;
int g_connection_pool_max_idle_time = 3600;
struct base64_context g_fdfs_base64_context;
/*
data filename format:


@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
//fdfs_global.h
@ -12,8 +12,10 @@
#define _FDFS_GLOBAL_H
#include "fastcommon/common_define.h"
#include "fastcommon/base64.h"
#include "fastcommon/connection_pool.h"
#include "sf/sf_global.h"
#include "fdfs_define.h"
#define FDFS_FILE_EXT_NAME_MAX_LEN 6
@ -21,13 +23,11 @@
extern "C" {
#endif
extern char g_fdfs_base_path[MAX_PATH_SIZE];
extern Version g_fdfs_version;
extern bool g_use_connection_pool;
extern ConnectionPool g_connection_pool;
extern int g_connection_pool_max_idle_time;
extern struct base64_context g_fdfs_base64_context;
int fdfs_check_data_filename(const char *filename, const int len);
int fdfs_gen_slave_filename(const char *master_filename, \


@ -4,7 +4,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <time.h>
@ -72,7 +72,7 @@ int fdfs_http_get_content_type_by_extname(FDFSHTTPParams *pParams, \
return 0;
}
pHashData = fc_hash_find_ex(&pParams->content_type_hash, \
ext_name, ext_len + 1);
if (pHashData == NULL)
{
@ -282,7 +282,7 @@ int fdfs_http_params_load(IniContext *pIniContext, \
if (!(pParams->need_find_content_type || pParams->support_multi_range))
{
fc_hash_destroy(&pParams->content_type_hash);
}
if ((result=getFileContent(token_check_fail_filename, \
@ -301,7 +301,7 @@ void fdfs_http_params_destroy(FDFSHTTPParams *pParams)
{
if (!(pParams->need_find_content_type || pParams->support_multi_range))
{
fc_hash_destroy(&pParams->content_type_hash);
}
}


@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#ifndef _FDFS_HTTP_SHARED_H


@ -4,7 +4,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#include <time.h>
@ -69,11 +69,11 @@ int load_mime_types_from_file(HashArray *pHash, const char *mime_filename)
}
}
if ((result=fc_hash_init_ex(pHash, PJWHash, 2 * 1024, 0.75, 0, true)) != 0)
{
free(content);
logError("file: "__FILE__", line: %d, " \
"fc_hash_init_ex fail, errno: %d, error info: %s", \
__LINE__, result, STRERROR(result));
return result;
}
@ -108,14 +108,14 @@ int load_mime_types_from_file(HashArray *pHash, const char *mime_filename)
continue;
}
if ((result=fc_hash_insert_ex(pHash, ext_name, \
strlen(ext_name)+1, content_type, \
strlen(content_type)+1, true)) < 0)
{
free(content);
result *= -1;
logError("file: "__FILE__", line: %d, " \
"fc_hash_insert_ex fail, errno: %d, " \
"error info: %s", __LINE__, \
result, STRERROR(result));
return result;
@ -125,7 +125,7 @@ int load_mime_types_from_file(HashArray *pHash, const char *mime_filename)
free(content);
//fc_hash_stat_print(pHash);
return 0;
}


@ -3,7 +3,7 @@
*
* FastDFS may be copied only under the terms of the GNU General
* Public License V3, which may be found in the FastDFS source kit.
* Please visit the FastDFS Home Page http://www.fastken.com/ for more detail.
**/
#ifndef _MINE_FILE_PARSER_H


@ -1,23 +1,30 @@
# connect timeout in seconds
# default value is 30s
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5
# network timeout in seconds
# default value is 30s
network_timeout = 60
# the base path to store log files
base_path = /opt/fastdfs
# tracker_server can occur more than once for multi tracker servers.
# the value format of tracker_server is "HOST:PORT",
# the HOST can be hostname or ip address,
# and the HOST can be dual IPs or hostnames separated by comma,
# the dual IPs must be an inner (intranet) IP and an outer (extranet) IP,
# or two different types of inner (intranet) IPs.
# IPv4:
# for example: 192.168.2.100,122.244.141.46:22122
# another eg.: 192.168.1.10,172.17.4.21:22122
#
# IPv6:
# for example: [2409:8a20:42d:2f40:587a:4c47:72c0:ad8e,fe80::1ee9:90a8:1351:436c]:22122
#
tracker_server = 192.168.0.196:22122
tracker_server = 192.168.0.197:22122
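The three `tracker_server` value forms described above (plain `HOST:PORT`, dual-IPv4 `ip1,ip2:port`, and bracketed dual-IPv6 `[addr1,addr2]:port`) can be sketched as a small parser. This is an illustrative Python sketch, not part of the FastDFS client; the helper name is hypothetical.

```python
def parse_tracker_server(value):
    """Parse a tracker_server config value into ([host, ...], port).

    Handles "host:port", dual-IPv4 "ip1,ip2:port", and the
    bracketed IPv6 form "[addr1,addr2]:port" shown above.
    """
    value = value.strip()
    if value.startswith("["):          # bracketed IPv6 form
        inner, _, rest = value[1:].partition("]")
        hosts = [h.strip() for h in inner.split(",")]
        port = int(rest.lstrip(":"))
    else:                              # IPv4 / hostname form
        hosts_part, _, port_s = value.rpartition(":")
        hosts = [h.strip() for h in hosts_part.split(",")]
        port = int(port_s)
    return hosts, port
```

For example, `parse_tracker_server("192.168.2.100,122.244.141.46:22122")` yields both candidate IPs with port 22122, which is the failover order a client could try.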
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
@ -28,7 +35,14 @@ tracker_server=192.168.0.197:22122
### notice
### info
### debug
log_level = info
# connect which ip address first for multi IPs of a storage server, value list:
## tracker: connect to the ip address return by tracker server first
## last-connected: connect to the ip address last connected first
# default value is tracker
# since V6.11
connect_first_by = tracker
# if use connection pool
# default value is false
@ -44,7 +58,7 @@ connection_pool_max_idle_time = 3600
# if load FastDFS parameters from tracker server
# since V4.05
# default value is false
load_fdfs_parameters_from_tracker = false
# if use storage ID instead of IP address
# same as tracker.conf
@ -61,7 +75,7 @@ storage_ids_filename = storage_ids.conf
#HTTP settings
http.tracker_server_port = 80
#use "#include" directive to include other HTTP settings
##include http.conf


@ -5,24 +5,24 @@ http.default_content_type = application/octet-stream
# MIME types file format: MIME_type extensions
# such as: image/jpeg jpeg jpg jpe
# you can use apache's MIME file: mime.types
http.mime_types_filename = mime.types
# if use token to anti-steal
# default value is false (0)
http.anti_steal.check_token = false
# token TTL (time to live), seconds
# default value is 600
http.anti_steal.token_ttl = 900
# secret key to generate anti-steal token
# this parameter must be set when http.anti_steal.check_token set to true
# the length of the secret key should not exceed 128 bytes
http.anti_steal.secret_key = FastDFS1234567890
# return the content of the file when check token fail
# default value is empty (no file specified)
http.anti_steal.token_check_fail = /home/yuqing/fastdfs/conf/anti-steal.jpg
# if support multi regions for HTTP Range
# default value is true


@ -1,67 +1,105 @@
# is this config file disabled
# false for enabled
# true for disabled
disabled = false
# the name of the group this storage server belongs to
#
# comment or remove this item for fetching from tracker server,
# in this case, use_storage_id must be set to true in tracker.conf,
# and storage_ids.conf must be configured correctly.
group_name = group1
# bind an address of this host
# empty for bind all addresses of this host
#
# bind IPv4 example: 192.168.2.100
#
# bind IPv6 example: 2409:8a20:42d:2f40:587a:4c47:72c0:ad8e
#
# bind IPv4 and IPv6 example: 192.168.2.100,2409:8a20:42d:2f40:587a:4c47:72c0:ad8e
#
# as any/all addresses, IPv4 is 0.0.0.0, IPv6 is ::
#
bind_addr =
# if bind an address of this host when connect to other servers
# (this storage server as a client)
# true for binding the address configured by the above parameter: "bind_addr"
# false for binding any address of this host
client_bind = true
# the storage server port
port = 23000
# the address family of service, value list:
## IPv4: IPv4 stack
## IPv6: IPv6 stack
## auto: auto detect by bind_addr, IPv4 first then IPv6 when bind_addr is empty
## both: IPv4 and IPv6 dual stacks
# default value is auto
# since V6.11
address_family = auto
# specify the storage server ID for NAT network
# NOT set or commented for auto set by the local ip addresses
# since V6.11
#
# NOTE:
## * this parameter is valid only when use_storage_id and trust_storage_server_id
## in tracker.conf set to true
## * the storage server id must exist in storage_ids.conf
#server_id =
# connect timeout in seconds
# default value is 30
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5
# network timeout in seconds for send and recv
# default value is 30
network_timeout = 60
# the heart beat interval in seconds
# the storage server send heartbeat to tracker server periodically
# default value is 30
heart_beat_interval = 30
# the storage server send disk usage report to tracker server periodically
# default value is 300
stat_report_interval = 60
# the base path to store data and log files
# NOTE: the binlog files may be large, make sure
# the base path has enough disk space,
# eg. the disk free space should be > 50GB
base_path = /opt/fastdfs
# max concurrent connections the server supported,
# you should set this parameter larger, eg. 10240
# default value is 256
max_connections = 1024
# the buff size to recv / send data from/to network
# this parameter must be more than 8KB
# 256KB or 512KB is recommended
# default value is 64KB
# since V2.00
buff_size = 256KB
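Size values such as `buff_size = 256KB` use a number plus a unit suffix. A minimal Python sketch of that conversion (the helper name is hypothetical, not part of FastDFS):

```python
def parse_byte_size(value):
    """Convert a config size string such as "64KB" or "256KB" to bytes."""
    value = value.strip().upper()
    # check two-letter suffixes before single-letter ones
    for suffix, factor in (("GB", 1 << 30), ("MB", 1 << 20), ("KB", 1 << 10),
                           ("G", 1 << 30), ("M", 1 << 20), ("K", 1 << 10)):
        if value.endswith(suffix):
            return int(value[:-len(suffix)]) * factor
    return int(value)  # bare number means bytes
```

So the recommended `256KB` works out to 262144 bytes, comfortably above the 8KB minimum noted above.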
# accept thread count
# default value is 1 which is recommended
# since V4.07
accept_threads = 1
# work thread count
# work threads to deal network io
# default value is 4
# since V2.00
work_threads = 4
# if disk read / write separated
## false for mixed read and write
@ -70,13 +108,13 @@ work_threads=4
# since V2.00
disk_rw_separated = true
# disk reader thread count per store path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1
# disk writer thread count per store path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
@ -84,45 +122,62 @@ disk_writer_threads = 1
# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
sync_wait_msec = 50
# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval = 0
# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time = 00:00
# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time = 23:59
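The sync window above is a pair of "Hour:Minute" bounds; a storage server only replicates within that daily window. A minimal sketch of the containment check (illustrative only; it assumes the window does not wrap past midnight, and the helper name is hypothetical):

```python
def in_sync_window(now_hm, start_hm="00:00", end_hm="23:59"):
    """Return True if now_hm ("HH:MM") lies within [start_hm, end_hm]."""
    def minutes(hm):
        h, m = hm.split(":")
        return int(h) * 60 + int(m)  # minutes since midnight
    return minutes(start_hm) <= minutes(now_hm) <= minutes(end_hm)
```

With the defaults `00:00` to `23:59` every time of day qualifies, so syncing is effectively always on.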
# write to the mark file after sync N files
# default value is 500
write_mark_file_freq = 500
# disk recovery thread count
# default value is 1
# since V6.04
disk_recovery_threads = 3
# store path (disk or mount point) count, default value is 1
store_path_count = 1
# store_path#, based on 0, to configure the store paths to store files
# if store_path0 does not exist, its value is base_path (NOT recommended)
# the paths must exist.
#
# IMPORTANT NOTE:
# the store paths' order is very important, don't mess up!!!
# the base_path should be independent (different) of the store paths
store_path0 = /opt/fastdfs
#store_path1 = /opt/fastdfs2
# subdir_count * subdir_count directories will be auto created under each
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path = 256
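With `subdir_count_per_path = 256`, each store path gets a two-level directory tree of 256 * 256 subdirectories (from 00/00 up to FF/FF, matching the rotation comment elsewhere in this file). A hedged Python sketch of mapping a flat directory index onto that layout (the function name is illustrative, not the FastDFS implementation):

```python
def subdir_for(path_index, subdir_count=256):
    """Map a flat directory index onto the two-level "XX/XX" layout.

    subdir_count * subdir_count directories exist per store path,
    so indexes wrap around after the last one.
    """
    total = subdir_count * subdir_count
    high, low = divmod(path_index % total, subdir_count)
    return "%02X/%02X" % (high, low)
```

For instance the directory after 00/FF is 01/00, and the index after FF/FF wraps back to 00/00.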
# tracker_server can occur more than once for multi tracker servers.
# the value format of tracker_server is "HOST:PORT",
# the HOST can be hostname or ip address,
# and the HOST can be dual IPs or hostnames separated by comma,
# the dual IPs must be an inner (intranet) IP and an outer (extranet) IP,
# or two different types of inner (intranet) IPs.
# IPv4:
# for example: 192.168.2.100,122.244.141.46:22122
# another eg.: 192.168.1.10,172.17.4.21:22122
#
# IPv6:
# for example: [2409:8a20:42d:2f40:587a:4c47:72c0:ad8e,fe80::1ee9:90a8:1351:436c]:22122
#
tracker_server = 192.168.209.121:22122
tracker_server = 192.168.209.122:22122
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
@ -133,15 +188,15 @@ tracker_server=192.168.209.122:22122
### notice
### info
### debug
log_level = info
#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group =
#unix username to run this program,
#not set (empty) means run by current user
run_by_user =
# allow_hosts can occur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
@ -151,70 +206,71 @@ run_by_user=
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts = *
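The allow_hosts entries above accept "*", plain addresses, and CIDR blocks like 192.168.5.64/26. A minimal Python sketch of the matching logic using the standard `ipaddress` module (illustrative only; it covers these three forms, not the bracketed host/range patterns):

```python
import ipaddress

def host_allowed(ip, allow_hosts):
    """Check an IP address against allow_hosts entries:
    "*" (match all), a plain address, or a CIDR block."""
    for entry in allow_hosts:
        if entry == "*":
            return True
        if "/" in entry:  # CIDR block such as 192.168.5.64/26
            if ipaddress.ip_address(ip) in ipaddress.ip_network(entry, strict=False):
                return True
        elif entry == ip:
            return True
    return False
```

For example, 192.168.5.64/26 spans 192.168.5.64 through 192.168.5.127, so 192.168.5.70 is allowed while 192.168.6.1 is not.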
# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributed by hash code
file_distribute_path_mode = 0
# valid when file_distribute_to_path is set to 0 (round robin).
# when the written file count reaches this number, then rotate to next path.
# rotate to the first path (00/00) after the last path (such as FF/FF).
# default value is 100
file_distribute_rotate_count = 100
# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes = 0
# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
sync_log_buff_interval = 1
# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
sync_binlog_buff_interval = 1
# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval = 300
# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size = 512KB
# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority=10
upload_priority = 10
# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default value is empty
if_alias_prefix=
if_alias_prefix =
# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate=0
check_file_duplicate = 0
# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method=hash
file_signature_method = hash
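When `file_signature_method` is `md5`, the duplicate-check index is derived from an MD5 digest of the file content. A minimal sketch of such a signature, shown on an in-memory buffer (the exact key layout FastDFS/FastDHT uses, and the "four 32-bit hash codes" variant, are not reproduced here):

```python
import hashlib

# Sketch: MD5-based file signature for duplicate checking.  Illustrative
# only; FastDFS's internal key format is an assumption not shown here.

def md5_signature(data: bytes) -> str:
    """Return the hex MD5 digest of the file content."""
    return hashlib.md5(data).hexdigest()
```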
# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace=FastDFS
key_namespace = FastDFS
# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive=0
keep_alive = 0
# you can use "#include filename" (not include double quotes) directive to
# load FastDHT server list, when the filename is a relative path such as
@@ -237,7 +293,17 @@ rotate_access_log = false
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time=00:00
access_log_rotate_time = 00:00
# if compress the old access log by gzip
# default value is false
# since V6.04
compress_old_access_log = false
# compress the access log days before
# default value is 1
# since V6.04
compress_access_log_days_before = 7
# if rotate the error log every day
# default value is false
@@ -248,7 +314,17 @@ rotate_error_log = false
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00
error_log_rotate_time = 00:00
# if compress the old error log by gzip
# default value is false
# since V6.04
compress_old_error_log = false
# compress the error log days before
# default value is 1
# since V6.04
compress_error_log_days_before = 7
# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
@@ -270,12 +346,12 @@ log_file_keep_days = 0
# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record=false
file_sync_skip_invalid_record = false
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false
use_connection_pool = true
# connections whose idle time exceeds this value will be closed
# unit: second
@@ -283,10 +359,29 @@ use_connection_pool = false
# since V4.05
connection_pool_max_idle_time = 3600
# if compress the binlog files by gzip
# default value is false
# since V6.01
compress_binlog = true
# try to compress binlog time, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 01:30
# since V6.01
compress_binlog_time = 01:30
# if check the mark of store path to prevent confusion
# recommend to set this parameter to true
# if two storage servers (instances) MUST use the same store path for
# some specific purpose, you should set this parameter to false
# default value is true
# since V6.03
check_store_path_mark = true
# use the ip address of this storage server if domain_name is empty,
# else this domain name will occur in the url redirected by the tracker server
http.domain_name=
http.domain_name =
# the port of the web server on this storage server
http.server_port=8888
http.server_port = 8888


@@ -1,10 +1,22 @@
# <id> <group_name> <ip_or_hostname[:port]>
#
# id is a natural number (1, 2, 3 etc.),
# a 6-digit id is enough, such as 100001
#
# storage ip or hostname can be dual IPs separated by comma,
# one is an intranet IP and another is an extranet IP.
# one is an inner (intranet) IP and the other is an outer (extranet) IP,
# or two different types of inner (intranet) IPs
# IPv4:
# for example: 192.168.2.100,122.244.141.46
# another eg.: 192.168.1.10,172.17.4.21
#
# IPv6:
# for example: [2409:8a20:42d:2f40:587a:4c47:72c0:ad8e,fe80::1ee9:90a8:1351:436c]
# another eg.: [2409:8a20:42d:2f40:587a:4c47:72c0:ad8e,fe80::1ee9:90a8:1351:436c]:100002
#
# the port is optional. if you run more than one storaged instance
# on a server, you must specify the port to distinguish different instances.
# 100001 group1 192.168.0.196
# 100002 group1 192.168.0.116
100001 group1 192.168.0.196
100002 group1 192.168.0.197
100003 group1 [2409:8a20:42d:2f40:587a:4c47:72c0:ad8e]:100002
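A minimal sketch of parsing one line of this `<id> <group_name> <ip_or_hostname[:port]>` format, covering the dual-IP and bracketed-IPv6 forms described above. This is an illustrative parser, not the one FastDFS itself uses:

```python
# Sketch: parse one storage_ids.conf line into (id, group, [ips], port).
# Mirrors the format documented above; assumed helper, not FastDFS code.

def parse_storage_id(line: str):
    """Parse '<id> <group_name> <ip_or_hostname[:port]>', dual IPs allowed."""
    server_id, group, addr = line.split()
    port = None
    if addr.startswith('['):
        # bracketed IPv6 form, possibly dual: [ip1,ip2] or [ip1,ip2]:port
        host_part, _, rest = addr[1:].partition(']')
        if rest.startswith(':'):
            port = int(rest[1:])
    else:
        # IPv4 / hostname form: the first ':' (if any) starts the port
        host_part, sep, rest = addr.partition(':')
        if sep:
            port = int(rest)
    ips = host_part.split(',')
    return server_id, group, ips, port
```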


@@ -1,86 +1,115 @@
# is this config file disabled
# false for enabled
# true for disabled
disabled=false
disabled = false
# bind an address of this host
# empty for bind all addresses of this host
bind_addr=
#
# bind IPv4 example: 192.168.2.100
#
# bind IPv6 example: 2409:8a20:42d:2f40:587a:4c47:72c0:ad8e
#
# bind IPv4 and IPv6 example: 192.168.2.100,2409:8a20:42d:2f40:587a:4c47:72c0:ad8e
#
# as any/all addresses, IPv4 is 0.0.0.0, IPv6 is ::
#
bind_addr =
# the tracker server port
port=22122
port = 22122
# the address family of service, value list:
## IPv4: IPv4 stack
## IPv6: IPv6 stack
## auto: auto detect by bind_addr, IPv4 first then IPv6 when bind_addr is empty
## both: IPv4 and IPv6 dual stacks
#
# the following parameter use_storage_id MUST be set to true and
# id_type_in_filename MUST be set to id when IPv6 is enabled
#
# default value is auto
# since V6.11
address_family = auto
# connect timeout in seconds
# default value is 30s
connect_timeout=10
# default value is 30
# Note: on an intranet (LAN), 2 seconds is enough.
connect_timeout = 5
# network timeout in seconds
# default value is 30s
network_timeout=60
# network timeout in seconds for send and recv
# default value is 30
network_timeout = 60
# the base path to store data and log files
base_path=/home/yuqing/fastdfs
base_path = /opt/fastdfs
# max concurrent connections this server supported
# you should set this parameter larger, eg. 102400
max_connections=1024
# max concurrent connections this server supports
# you should set this parameter larger, eg. 10240
# default value is 256
max_connections = 1024
# accept thread count
# default value is 1
# default value is 1 which is recommended
# since V4.07
accept_threads=1
accept_threads = 1
# work thread count, should <= max_connections
# work thread count
# work threads to deal with network I/O
# default value is 4
# since V2.00
work_threads=4
work_threads = 4
# min buff size
# the min network buff size
# default value 8KB
min_buff_size = 8KB
# max buff size
# the max network buff size
# default value 128KB
max_buff_size = 128KB
# the method of selecting group to upload files
# the method for selecting group to upload files
# 0: round robin
# 1: specify group
# 2: load balance, select the max free space group to upload file
store_lookup=2
store_lookup = 2
# which group to upload file
# when store_lookup set to 1, must set store_group to the group name
store_group=group2
store_group = group2
# which storage server to upload file
# 0: round robin (default)
# 1: the first server order by ip address
# 2: the first server order by priority (the minimal)
# Note: if use_trunk_file set to true, must set store_server to 1 or 2
store_server=0
store_server = 0
# which path(means disk or mount point) of the storage server to upload file
# which path (means disk or mount point) of the storage server to upload file
# 0: round robin
# 2: load balance, select the max free space path to upload file
store_path=0
store_path = 0
# which storage server to download file
# 0: round robin (default)
# 1: the source storage server which the current file uploaded to
download_server=0
download_server = 0
# reserved storage space for system or other applications.
# if the free (available) space of any storage server in
# a group <= reserved_storage_space,
# no file can be uploaded to this group.
# a group <= reserved_storage_space, no file can be uploaded to this group.
# bytes unit can be one of follows:
### G or g for gigabyte(GB)
### M or m for megabyte(MB)
### K or k for kilobyte(KB)
### no unit for byte(B)
### XX.XX% as ratio such as reserved_storage_space = 10%
reserved_storage_space = 10%
#
### XX.XX% as ratio such as: reserved_storage_space = 10%
#
# NOTE:
## an absolute reserved space applies to the sum of all store paths on the storage server
## a reserved space ratio applies to each store path
reserved_storage_space = 20%
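The gating rule in the NOTE above (a ratio applies to each store path, an absolute value applies to the server's total free space) can be sketched like this. The function name and the byte-count inputs are illustrative assumptions:

```python
# Sketch: decide whether a group still accepts uploads under
# reserved_storage_space.  A "%" value is a per-store-path free-space
# ratio; an absolute value (G/M/K or plain bytes) is compared against the
# sum over all store paths, per the NOTE above.  Illustrative only.

def group_writable(free_bytes_per_path, total_bytes_per_path,
                   reserved: str) -> bool:
    """True if uploads are still allowed under reserved_storage_space."""
    if reserved.endswith('%'):
        ratio = float(reserved[:-1]) / 100.0
        # a ratio applies to each store path individually
        return all(free > total * ratio
                   for free, total in zip(free_bytes_per_path,
                                          total_bytes_per_path))
    units = {'G': 1 << 30, 'M': 1 << 20, 'K': 1 << 10}
    if reserved[-1].upper() in units:
        threshold = float(reserved[:-1]) * units[reserved[-1].upper()]
    else:
        threshold = float(reserved)  # no unit means plain bytes
    # an absolute value applies to the sum over all store paths
    return sum(free_bytes_per_path) > threshold
```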
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
@@ -91,7 +120,7 @@ reserved_storage_space = 10%
### notice
### info
### debug
log_level=info
log_level = info
#unix group name to run this program,
#not set (empty) means run by the group of current user
@@ -99,7 +128,7 @@ run_by_group=
#unix username to run this program,
#not set (empty) means run by current user
run_by_user=
run_by_user =
# allow_hosts can occur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
@@ -109,11 +138,11 @@ run_by_user=
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts=*
allow_hosts = *
# sync log buff to disk every interval seconds
# default value is 10 seconds
sync_log_buff_interval = 10
sync_log_buff_interval = 1
# check storage server alive interval seconds
check_active_interval = 120
@@ -150,7 +179,24 @@ slot_min_size = 256
# store the upload file to trunk file when its size <= this value
# default value is 16MB
# since V3.00
slot_max_size = 16MB
slot_max_size = 1MB
# the alignment size to allocate the trunk space
# default value is 0 (never align)
# since V6.05
# NOTE: the larger the alignment size, the less likely disk
# fragmentation is, but the more space is wasted.
trunk_alloc_alignment_size = 256
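The alignment trade-off above amounts to rounding each trunk allocation up to a multiple of `trunk_alloc_alignment_size`; a minimal sketch of that rounding (illustrative, not the FastDFS allocator):

```python
# Sketch: round an allocation size up to trunk_alloc_alignment_size.
# Larger alignment wastes more bytes per allocation but leaves fewer
# odd-sized free fragments, as the comment above notes.

def align_up(size: int, alignment: int) -> int:
    """Round size up to the next multiple of alignment (0 = never align)."""
    if alignment <= 0:
        return size
    return (size + alignment - 1) // alignment * alignment
```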
# if merge contiguous free spaces of trunk file
# default value is false
# since V6.05
trunk_free_space_merge = true
# if delete / reclaim the unused trunk files
# default value is false
# since V6.05
delete_unused_trunk_files = false
# the trunk file size, should >= 4MB
# default value is 64MB
@@ -174,8 +220,8 @@ trunk_create_file_time_base = 02:00
trunk_create_file_interval = 86400
# the threshold to create trunk file
# when the free trunk file size less than the threshold, will create
# the trunk files
# when the free trunk file size is less than the threshold,
# the trunk files will be created
# default value is 0
# since V3.06
trunk_create_file_space_threshold = 20G
@@ -195,19 +241,41 @@ trunk_init_check_occupying = false
trunk_init_reload_from_binlog = false
# the min interval for compressing the trunk binlog file
# unit: second
# default value is 0, 0 means never compress
# unit: second, 0 means never compress
# FastDFS compress the trunk binlog when trunk init and trunk destroy
# recommend to set this parameter to 86400 (one day)
# default value is 0
# since V5.01
trunk_compress_binlog_min_interval = 0
trunk_compress_binlog_min_interval = 86400
# if use storage ID instead of IP address
# the interval for compressing the trunk binlog file
# unit: second, 0 means never compress
# recommend to set this parameter to 86400 (one day)
# default value is 0
# since V6.05
trunk_compress_binlog_interval = 86400
# compress the trunk binlog time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 03:00
# since V6.05
trunk_compress_binlog_time_base = 03:00
# max backups for the trunk binlog file
# default value is 0 (never backup)
# since V6.05
trunk_binlog_max_backups = 7
# if use storage server ID instead of IP address
# if you want to use dual IPs for storage server, you MUST set
# this parameter to true, and configure the dual IPs in the file
# configured by following item "storage_ids_filename", such as storage_ids.conf
# default value is false
# since V4.00
use_storage_id = false
# specify storage ids filename, can use relative or absolute path
# this parameter is valid only when use_storage_id set to true
# since V4.00
storage_ids_filename = storage_ids.conf
@@ -217,7 +285,14 @@ storage_ids_filename = storage_ids.conf
# this parameter is valid only when use_storage_id set to true
# default value is ip
# since V4.03
id_type_in_filename = ip
id_type_in_filename = id
# if trust the storage server ID sent by the storage server
# this parameter is valid only when use_storage_id set to true
# default value is true
# since V6.11
trust_storage_server_id = true
# if store slave file use symbol link
# default value is false
@@ -233,7 +308,17 @@ rotate_error_log = false
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00
error_log_rotate_time = 00:00
# if compress the old error log by gzip
# default value is false
# since V6.04
compress_old_error_log = false
# compress the error log days before
# default value is 1
# since V6.04
compress_error_log_days_before = 7
# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
@@ -249,7 +334,7 @@ log_file_keep_days = 0
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false
use_connection_pool = true
# connections whose idle time exceeds this value will be closed
# unit: second
@@ -258,21 +343,21 @@ use_connection_pool = false
connection_pool_max_idle_time = 3600
# HTTP port on this tracker server
http.server_port=8080
http.server_port = 8080
# check storage HTTP server alive interval seconds
# <= 0 for never check
# default value is 30
http.check_alive_interval=30
http.check_alive_interval = 30
# check storage HTTP server alive type, values are:
# tcp : connect to the storage server with HTTP port only,
# do not request and get response
# http: storage check alive url must return http status 200
# default value is tcp
http.check_alive_type=tcp
http.check_alive_type = tcp
# check storage HTTP server alive uri/url
# NOTE: storage embed HTTP server support uri: /status.html
http.check_alive_uri=/status.html
http.check_alive_uri = /status.html
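The `tcp` alive-check type described above only verifies that the storage server's HTTP port accepts a connection, without sending a request. A minimal sketch of such a probe (hypothetical helper, not FastDFS code):

```python
import socket

# Sketch: "tcp"-style alive check -- connect to the storage server's HTTP
# port and report success, without issuing an HTTP request.  The "http"
# check type would instead fetch the URI and require status 200.

def tcp_alive(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if (host, port) accepts a TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```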

debian/README.Debian

@@ -0,0 +1,5 @@
FastDFS is an open source high performance distributed file system. Its major
functions include file storing, file syncing and file accessing (file uploading
and file downloading), and it addresses high-capacity storage and load
balancing. FastDFS is well suited to websites whose services are based on
files, such as photo sharing and video sharing sites.

debian/changelog

@@ -0,0 +1,7 @@
fastdfs (6.12.1-1) stable; urgency=medium

  * adapt to libserverframe 1.2.3
  * bugfixed: notify_leader_changed support IPv6 correctly
  * log square quoted IPv6 address

 -- YuQing <384681@qq.com>  Wed, 6 Mar 2024 15:14:27 +0000

debian/compat

@@ -0,0 +1 @@
11

debian/control

@@ -0,0 +1,56 @@
Source: fastdfs
Section: admin
Priority: optional
Maintainer: YuQing <384681@qq.com>
Build-Depends: debhelper (>=11~),
libfastcommon-dev (>= 1.0.73),
libserverframe-dev (>= 1.2.3)
Standards-Version: 4.1.4
Homepage: http://github.com/happyfish100/fastdfs/
Package: fastdfs
Architecture: linux-any
Multi-Arch: foreign
Depends: fastdfs-server (= ${binary:Version}),
fastdfs-tool (= ${binary:Version}),
${misc:Depends}
Description: FastDFS server and client
Package: fastdfs-server
Architecture: linux-any
Multi-Arch: foreign
Depends: libfastcommon (>= ${libfastcommon:Version}),
libserverframe (>= ${libserverframe:Version}),
fastdfs-config (>= ${fastdfs-config:Version}),
${misc:Depends}, ${shlibs:Depends}
Description: FastDFS server
Package: libfdfsclient
Architecture: linux-any
Multi-Arch: foreign
Depends: libfastcommon (>= ${libfastcommon:Version}),
libserverframe (>= ${libserverframe:Version}),
${misc:Depends}, ${shlibs:Depends}
Description: FastDFS client tools
Package: libfdfsclient-dev
Architecture: linux-any
Multi-Arch: foreign
Depends: libfdfsclient (= ${binary:Version}),
${misc:Depends}
Description: header files of FastDFS client library
This package provides the header files of libfdfsclient
Package: fastdfs-tool
Architecture: linux-any
Multi-Arch: foreign
Depends: libfdfsclient (= ${binary:Version}),
fastdfs-config (>= ${fastdfs-config:Version}),
${misc:Depends}, ${shlibs:Depends}
Description: FastDFS client tools
Package: fastdfs-config
Architecture: linux-any
Multi-Arch: foreign
Description: FastDFS config files for sample
FastDFS config files for sample including server and client

debian/copyright

@@ -0,0 +1,675 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.

debian/fastdfs-config.install vendored Normal file

@@ -0,0 +1 @@
etc/fdfs/*.conf

debian/fastdfs-server.dirs vendored Normal file

@@ -0,0 +1 @@
opt/fastdfs

debian/fastdfs-server.install vendored Normal file

@@ -0,0 +1,2 @@
usr/bin/fdfs_trackerd
usr/bin/fdfs_storaged

debian/fastdfs-tool.dirs vendored Normal file

@@ -0,0 +1 @@
opt/fastdfs

debian/fastdfs-tool.install vendored Normal file

@@ -0,0 +1,13 @@
usr/bin/fdfs_monitor
usr/bin/fdfs_test
usr/bin/fdfs_test1
usr/bin/fdfs_crc32
usr/bin/fdfs_upload_file
usr/bin/fdfs_download_file
usr/bin/fdfs_delete_file
usr/bin/fdfs_file_info
usr/bin/fdfs_appender_test
usr/bin/fdfs_appender_test1
usr/bin/fdfs_append_file
usr/bin/fdfs_upload_appender
usr/bin/fdfs_regenerate_filename

debian/libfdfsclient-dev.install vendored Normal file

@@ -0,0 +1 @@
usr/include/fastdfs/*

debian/libfdfsclient.install vendored Normal file

@@ -0,0 +1 @@
usr/lib/libfdfsclient*

debian/rules vendored Executable file

@@ -0,0 +1,30 @@
#!/usr/bin/make -f
export DH_VERBOSE=1
export DESTDIR=$(CURDIR)/debian/tmp
export CONFDIR=$(DESTDIR)/etc/fdfs/

%:
	dh $@

override_dh_auto_build:
	./make.sh clean && DESTDIR=$(DESTDIR) ./make.sh

override_dh_auto_install:
	DESTDIR=$(DESTDIR) ./make.sh install
	mkdir -p $(CONFDIR)
	cp conf/*.conf $(CONFDIR)
	cp systemd/fdfs_storaged.service debian/fastdfs-server.fdfs_storaged.service
	cp systemd/fdfs_trackerd.service debian/fastdfs-server.fdfs_trackerd.service
	dh_auto_install

override_dh_installsystemd:
	dh_installsystemd --package=fastdfs-server --name=fdfs_storaged --no-start --no-restart-on-upgrade
	dh_installsystemd --package=fastdfs-server --name=fdfs_trackerd --no-start --no-restart-on-upgrade

.PHONY: override_dh_gencontrol
override_dh_gencontrol:
	dh_gencontrol -- -Tdebian/substvars

debian/source/format vendored Normal file

@@ -0,0 +1 @@
3.0 (quilt)

debian/substvars vendored Normal file

@@ -0,0 +1,3 @@
libfastcommon:Version=1.0.73
libserverframe:Version=1.2.3
fastdfs-config:Version=1.0.0

debian/watch vendored Normal file

@@ -0,0 +1,3 @@
version=3
opts="mode=git" https://github.com/happyfish100/fastdfs.git \
refs/tags/v([\d\.]+) debian uupdate

@@ -0,0 +1,22 @@
# FastDFS Dockerfile local (building from local packages)
Thanks to YuQing for his great work!
This directory contains a Docker image build and cluster installation guide.
1. Directory layout
./build_image-v6.0.9    builds the Docker image for fastdfs v6.0.9
./fastdfs-conf          configuration files; these are identical to the files under build_image_v.x
|--setting_conf.sh      script that generates the configuration files
./自定义镜像和安装手册.txt    custom image and installation manual
./qa.txt                Q&A collected from the BBS forum: http://bbs.chinaunix.net/forum-240-1.html
2. Installation changes between fastdfs versions
+ v6.0.9 depends on libevent, libfastcommon and libserverframe; v6.0.8 and earlier depend only on libevent and libfastcommon (libfastcommon is provided by FastDFS itself)
+ v6.0.9 works with fastdfs-nginx-module 1.23 and later; v6.0.8 and earlier use fastdfs-nginx-module 1.22

@@ -0,0 +1,74 @@
# choose the base system image; the tiny Alpine Linux image can be used
#FROM centos:7
FROM alpine:3.12
LABEL MAINTAINER liyanjing 284223249@qq.com
# 0. package install location, fdfs base directory and storage directory
ENV INSTALL_PATH=/usr/local/src \
LIBFASTCOMMON_VERSION="1.0.57" \
FASTDFS_VERSION="6.08" \
FASTDFS_NGINX_MODULE_VERSION="1.22" \
NGINX_VERSION="1.22.0" \
TENGINE_VERSION="2.3.3"
# 0.change the system source for installing libs
RUN echo "http://mirrors.aliyun.com/alpine/v3.12/main" > /etc/apk/repositories \
&& echo "http://mirrors.aliyun.com/alpine/v3.12/community" >> /etc/apk/repositories
# 1. copy the installation packages
ADD soft ${INSTALL_PATH}
# 2. environment setup
# - create the fdfs storage directories
# - install dependencies
# - install libfastcommon
# - install fastdfs
# - install nginx, wire nginx up with fastdfs and configure nginx
#Run yum -y install -y gcc gcc-c++ libevent libevent-devel make automake autoconf libtool perl pcre pcre-devel zlib zlib-devel openssl openssl-devel zip unzip net-tools wget vim lsof \
RUN apk update && apk add --no-cache --virtual .build-deps bash autoconf gcc libc-dev make pcre-dev zlib-dev linux-headers gnupg libxslt-dev gd-dev geoip-dev wget \
&& cd ${INSTALL_PATH} \
&& tar -zxf libfastcommon-${LIBFASTCOMMON_VERSION}.tar.gz \
&& tar -zxf fastdfs-${FASTDFS_VERSION}.tar.gz \
&& tar -zxf fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}.tar.gz \
&& tar -zxf nginx-${NGINX_VERSION}.tar.gz \
\
&& cd ${INSTALL_PATH}/libfastcommon-${LIBFASTCOMMON_VERSION}/ \
&& ./make.sh \
&& ./make.sh install \
&& cd ${INSTALL_PATH}/fastdfs-${FASTDFS_VERSION}/ \
&& ./make.sh \
&& ./make.sh install \
\
&& cd ${INSTALL_PATH}/nginx-${NGINX_VERSION}/ \
&& ./configure --prefix=/usr/local/nginx --pid-path=/var/run/nginx/nginx.pid --with-http_stub_status_module --with-http_gzip_static_module --with-http_realip_module --with-http_sub_module --with-stream=dynamic \
--add-module=${INSTALL_PATH}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/ \
&& make \
&& make install \
\
&& rm -rf ${INSTALL_PATH}/* \
&& apk del .build-deps gcc libc-dev make linux-headers gnupg libxslt-dev gd-dev geoip-dev wget
# 3. add configuration files; a destination path ending with / is treated by Docker as a directory and created automatically if it does not exist
COPY conf/*.* /etc/fdfs/
COPY nginx_conf/nginx.conf /usr/local/nginx/conf/
COPY nginx_conf.d/*.conf /usr/local/nginx/conf.d/
COPY start.sh /
ENV TZ=Asia/Shanghai
# 4. make the startup script executable and set the timezone to China time
RUN chmod u+x /start.sh \
&& apk add --no-cache bash pcre-dev zlib-dev \
\
&& apk add -U tzdata \
&& ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone \
&& apk del tzdata && rm -rf /var/cache/apk/*
EXPOSE 22122 23000 9088
WORKDIR /
# container entrypoint
ENTRYPOINT ["/bin/bash","/start.sh"]

@@ -0,0 +1,71 @@
# connect timeout in seconds
# default value is 30s
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5
# network timeout in seconds
# default value is 30s
network_timeout = 60
# the base path to store log files
base_path = /data/fastdfs_data
# tracker_server can occur more than once for multiple tracker servers.
# the value format of tracker_server is "HOST:PORT",
# the HOST can be a hostname or an ip address,
# and the HOST can be dual IPs or hostnames separated by a comma,
# the dual IPs must be an inner (intranet) IP and an outer (extranet) IP,
# or two different types of inner (intranet) IPs.
# for example: 192.168.2.100,122.244.141.46:22122
# another eg.: 192.168.1.10,172.17.4.21:22122
tracker_server = 192.168.0.196:22122
tracker_server = 192.168.0.197:22122
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false
# connections whose idle time exceeds this value will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600
# if load FastDFS parameters from tracker server
# since V4.05
# default value is false
load_fdfs_parameters_from_tracker = false
# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V4.05
use_storage_id = false
# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V4.05
storage_ids_filename = storage_ids.conf
#HTTP settings
http.tracker_server_port = 80
# use the "#include" directive to include other HTTP settings
##include http.conf
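The dual-IP tracker_server format described in the comments above ("HOST[,HOST2]:PORT", with the two hosts separated by a comma) can be parsed with a short helper. This is a minimal illustrative sketch, not FastDFS client code, and it only handles the IPv4/hostname forms shown in the examples:

```python
def parse_tracker_server(value: str):
    """Split a 'host[,host2]:port' tracker_server value into (hosts, port)."""
    hosts_part, sep, port_part = value.rpartition(":")
    if not sep:
        raise ValueError("missing port in tracker_server: %r" % value)
    # dual hosts (e.g. inner and outer IP) are separated by a comma
    hosts = [h.strip() for h in hosts_part.split(",") if h.strip()]
    return hosts, int(port_part)

print(parse_tracker_server("192.168.2.100,122.244.141.46:22122"))
# (['192.168.2.100', '122.244.141.46'], 22122)
```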

@@ -0,0 +1,29 @@
# HTTP default content type
http.default_content_type = application/octet-stream
# MIME types mapping filename
# MIME types file format: MIME_type extensions
# such as: image/jpeg jpeg jpg jpe
# you can use apache's MIME file: mime.types
http.mime_types_filename = mime.types
# if use token to anti-steal
# default value is false (0)
http.anti_steal.check_token = false
# token TTL (time to live), seconds
# default value is 600
http.anti_steal.token_ttl = 900
# secret key to generate anti-steal token
# this parameter must be set when http.anti_steal.check_token set to true
# the length of the secret key should not exceed 128 bytes
http.anti_steal.secret_key = FastDFS1234567890
# return the content of the file when check token fail
# default value is empty (no file specified)
http.anti_steal.token_check_fail = /home/yuqing/fastdfs/conf/anti-steal.jpg
# if support multiple ranges for HTTP Range requests
# default value is true
http.multi_range.enabed = true
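The anti-steal settings above pair a shared secret key with a token TTL. As an illustrative sketch only (this shows the general shape of such time-limited tokens, an md5 over the file id, secret key and a timestamp; it is not guaranteed to match FastDFS's exact byte layout):

```python
import hashlib
import time

def make_token(file_id: str, secret_key: str, ts: int) -> str:
    # hash the file id, the shared secret and the timestamp together;
    # the server recomputes this and rejects mismatched or expired tokens
    raw = (file_id + secret_key + str(ts)).encode("utf-8")
    return hashlib.md5(raw).hexdigest()

ts = int(time.time())
token = make_token("M00/00/00/example.jpg", "FastDFS1234567890", ts)
# a client would then append ?token=<token>&ts=<ts> to the download URL
```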

File diff suppressed because it is too large

@@ -0,0 +1,137 @@
# connect timeout in seconds
# default value is 30s
connect_timeout=15
# network recv and send timeout in seconds
# default value is 30s
network_timeout=30
# the base path to store log files
base_path=/data/fastdfs_data
# if load FastDFS parameters from tracker server
# since V1.12
# default value is false
load_fdfs_parameters_from_tracker=true
# storage sync file max delay seconds
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.12
# default value is 86400 seconds (one day)
storage_sync_file_max_delay = 86400
# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V1.13
use_storage_id = false
# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.13
storage_ids_filename = storage_ids.conf
# FastDFS tracker_server can occur more than once, and tracker_server format is
# "host:port", host can be a hostname or an ip address
# valid only when load_fdfs_parameters_from_tracker is true
tracker_server = 192.168.209.121:22122
tracker_server = 192.168.209.122:22122
# the port of the local storage server
# the default value is 23000
storage_server_port=23000
# the group name of the local storage server
group_name=group1
# whether the url / uri includes the group name
# set to false when uri like /M00/00/00/xxx
# set to true when uri like ${group_name}/M00/00/00/xxx, such as group1/M00/xxx
# default value is false
url_have_group_name = true
# path(disk or mount point) count, default value is 1
# must be the same as storage.conf
store_path_count=1
# store_path#, based on 0; if store_path0 does not exist, its value is base_path
# the paths must exist
# must be the same as storage.conf
store_path0=/data/fastdfs/upload/path0
#store_path1=/home/yuqing/fastdfs1
# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info
# set the log filename, such as /usr/local/apache2/logs/mod_fastdfs.log
# empty for output to stderr (apache and nginx error_log file)
log_filename=
# response mode when the file does not exist in the local file system
## proxy: get the content from other storage server, then send to client
## redirect: redirect to the original storage server (HTTP Header is Location)
response_mode=proxy
# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# this parameter is used to get all ip addresses of the local host
# default value is empty
if_alias_prefix=
# use "#include" directive to include HTTP config file
# NOTE: #include is an include directive, do NOT remove the # before include
#include http.conf
# if support flv
# default value is false
# since v1.15
flv_support = true
# flv file extension name
# default value is flv
# since v1.15
flv_extension = flv
## if this storage server supports multiple groups, set this to the number of groups; 0 for a single group.
## there is no need to run storage for multiple groups on one server, because storage itself supports multiple store paths
# set the group count
# set to non-zero to support multi-group on this storage server
# set to 0 for single group only
# groups settings section as [group1], [group2], ..., [groupN]
# default value is 0
# since v1.14
group_count = 0
## if this storage server supports multiple groups, set as many groups as there are
# group settings for group #1
# since v1.14
# when support multi-group on this storage server, uncomment following section
#[group1]
#group_name=group1
#storage_server_port=23000
#store_path_count=2
#store_path0=/home/yuqing/fastdfs
#store_path1=/home/yuqing/fastdfs1
# group settings for group #2
# since v1.14
# when support multi-group, uncomment following section as necessary
#[group2]
#group_name=group2
#storage_server_port=23000
#store_path_count=1
#store_path0=/home/yuqing/fastdfs

@@ -0,0 +1,353 @@
# is this config file disabled
# false for enabled
# true for disabled
disabled = false
# the name of the group this storage server belongs to
#
# comment or remove this item for fetching from tracker server,
# in this case, use_storage_id must be set to true in tracker.conf,
# and storage_ids.conf must be configured correctly.
group_name = group1
# bind an address of this host
# empty for bind all addresses of this host
bind_addr =
# if bind an address of this host when connect to other servers
# (this storage server as a client)
# true for binding the address configured by the above parameter: "bind_addr"
# false for binding any address of this host
client_bind = true
# the storage server port
port = 23000
# connect timeout in seconds
# default value is 30
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5
# network timeout in seconds for send and recv
# default value is 30
network_timeout = 60
# the heart beat interval in seconds
# the storage server send heartbeat to tracker server periodically
# default value is 30
heart_beat_interval = 30
# disk usage report interval in seconds
# the storage server send disk usage report to tracker server periodically
# default value is 300
stat_report_interval = 60
# the base path to store data and log files
# NOTE: the binlog files may be large, make sure
# the base path has enough disk space,
# e.g. the disk free space should be > 50GB
base_path = /data/fastdfs_data
# max concurrent connections the server supported,
# you should set this parameter larger, eg. 10240
# default value is 256
max_connections = 1024
# the buff size to recv / send data from/to network
# this parameter must be more than 8KB
# 256KB or 512KB is recommended
# default value is 64KB
# since V2.00
buff_size = 256KB
# accept thread count
# default value is 1 which is recommended
# since V4.07
accept_threads = 1
# work thread count
# work threads to deal network io
# default value is 4
# since V2.00
work_threads = 4
# if disk read / write separated
## false for mixed read and write
## true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true
# disk reader thread count per store path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1
# disk writer thread count per store path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1
# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
sync_wait_msec = 50
# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval = 0
# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time = 00:00
# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time = 23:59
# write to the mark file after sync N files
# default value is 500
write_mark_file_freq = 500
# disk recovery thread count
# default value is 1
# since V6.04
disk_recovery_threads = 3
# store path (disk or mount point) count, default value is 1
store_path_count = 1
# store_path#, based on 0, to configure the store paths to store files
# if store_path0 not exists, its value is base_path (NOT recommended)
# the paths must exist.
#
# IMPORTANT NOTE:
# the store paths' order is very important, don't mess up!!!
# the base_path should be independent (different) of the store paths
store_path0 = /data/fastdfs/upload/path0
#store_path1 = /home/yuqing/fastdfs2
# subdir_count * subdir_count directories will be auto created under each
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path = 256
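The directory layout this implies can be sketched with a few lines of shell (an illustration only, not a FastDFS tool): with the default of 256, each store path is populated with 256 × 256 two-level hex-named subdirectories, data/00/00 through data/FF/FF.

```shell
# Illustration: the two-level subdirectory naming implied by
# subdir_count_per_path = 256 (not part of FastDFS itself).
subdir_count=256
total=$((subdir_count * subdir_count))
first=$(printf '%02X/%02X' 0 0)
last=$(printf '%02X/%02X' $((subdir_count - 1)) $((subdir_count - 1)))
echo "$total subdirectories per store path: $first .. $last"
# → 65536 subdirectories per store path: 00/00 .. FF/FF
```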
# tracker_server can occur more than once for multi tracker servers.
# the value format of tracker_server is "HOST:PORT",
# the HOST can be hostname or ip address,
# and the HOST can be dual IPs or hostnames separated by comma,
# the dual IPs must be an inner (intranet) IP and an outer (extranet) IP,
# or two different types of inner (intranet) IPs.
# for example: 192.168.2.100,122.244.141.46:22122
# another eg.: 192.168.1.10,172.17.4.21:22122
tracker_server = 192.168.209.121:22122
tracker_server = 192.168.209.122:22122
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info
#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group =
#unix username to run this program,
#not set (empty) means run by current user
run_by_user =
# allow_hosts can occur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts = *
# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributed by hash code
file_distribute_path_mode = 0
# valid when file_distribute_path_mode is set to 0 (round robin).
# when the written file count reaches this number, then rotate to next path.
# rotate to the first path (00/00) after the last path (such as FF/FF).
# default value is 100
file_distribute_rotate_count = 100
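The rotation described above amounts to a modular counter over the subdirectories (an illustrative sketch, not FastDFS code): once file_distribute_rotate_count files have been written to the current path, the index advances by one and wraps from the last slot back to the first.

```shell
# Sketch: advance the linear subdirectory index after each rotation,
# wrapping from the last slot (FF/FF) back to the first (00/00).
# Assumes subdir_count_per_path = 256, i.e. 256*256 slots.
next_path_index() {
  echo $(( ($1 + 1) % (256 * 256) ))
}
next_path_index 0       # → 1
next_path_index 65535   # → 0 (wrap back to 00/00)
```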
# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes = 0
# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
sync_log_buff_interval = 1
# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
sync_binlog_buff_interval = 1
# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval = 300
# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size = 512KB
# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority = 10
# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default value is empty
if_alias_prefix =
# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate = 0
# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method = hash
# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace = FastDFS
# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive = 0
# you can use "#include filename" (not include double quotes) directive to
# load FastDHT server list, when the filename is a relative path such as
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf
# if log to access log
# default value is false
# since V4.00
use_access_log = false
# if rotate the access log every day
# default value is false
# since V4.00
rotate_access_log = false
# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time = 00:00
# if compress the old access log by gzip
# default value is false
# since V6.04
compress_old_access_log = false
# compress the access log days before
# default value is 1
# since V6.04
compress_access_log_days_before = 7
# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false
# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time = 00:00
# if compress the old error log by gzip
# default value is false
# since V6.04
compress_old_error_log = false
# compress the error log days before
# default value is 1
# since V6.04
compress_error_log_days_before = 7
# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0
# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0
# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0
# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record = false
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = true
# connections whose idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600
# if compress the binlog files by gzip
# default value is false
# since V6.01
compress_binlog = true
# try to compress binlog time, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 01:30
# since V6.01
compress_binlog_time = 01:30
# if check the mark of store path to prevent confusion
# recommend to set this parameter to true
# if two storage servers (instances) MUST use a same store path for
# some specific purposes, you should set this parameter to false
# default value is true
# since V6.03
check_store_path_mark = true
# use the ip address of this storage server if domain_name is empty,
# else this domain name will occur in the url redirected by the tracker server
http.domain_name =
# the port of the web server on this storage server
http.server_port = 8888


@ -0,0 +1,16 @@
# <id> <group_name> <ip_or_hostname[:port]>
#
# id is a natural number (1, 2, 3 etc.),
# 6 digits are enough for the id, such as 100001
#
# storage ip or hostname can be dual IPs separated by comma,
# one is an inner (intranet) IP and another is an outer (extranet) IP,
# or two different types of inner (intranet) IPs
# for example: 192.168.2.100,122.244.141.46
# another eg.: 192.168.1.10,172.17.4.21
#
# the port is optional. if you run more than one storaged instance
# in a server, you must specify the port to distinguish different instances.
#100001 group1 192.168.0.196
#100002 group1 192.168.0.197
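Following the format described above, a dual-IP entry with an explicit port might look like this (the id, group, addresses, and port are hypothetical):

```
#100003 group2 192.168.0.198,122.244.141.46:23000
```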


@ -0,0 +1,329 @@
# is this config file disabled
# false for enabled
# true for disabled
disabled = false
# bind an address of this host
# empty for bind all addresses of this host
bind_addr =
# the tracker server port
port = 22122
# connect timeout in seconds
# default value is 30
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5
# network timeout in seconds for send and recv
# default value is 30
network_timeout = 60
# the base path to store data and log files
base_path = /data/fastdfs_data
# max concurrent connections this server support
# you should set this parameter larger, eg. 10240
# default value is 256
max_connections = 1024
# accept thread count
# default value is 1 which is recommended
# since V4.07
accept_threads = 1
# work thread count
# work threads to deal network io
# default value is 4
# since V2.00
work_threads = 4
# the min network buff size
# default value 8KB
min_buff_size = 8KB
# the max network buff size
# default value 128KB
max_buff_size = 128KB
# the method for selecting group to upload files
# 0: round robin
# 1: specify group
# 2: load balance, select the max free space group to upload file
store_lookup = 2
# which group to upload file
# when store_lookup set to 1, must set store_group to the group name
store_group = group2
# which storage server to upload file
# 0: round robin (default)
# 1: the first server order by ip address
# 2: the first server order by priority (the minimal)
# Note: if use_trunk_file set to true, must set store_server to 1 or 2
store_server = 0
# which path (means disk or mount point) of the storage server to upload file
# 0: round robin
# 2: load balance, select the max free space path to upload file
store_path = 0
# which storage server to download file
# 0: round robin (default)
# 1: the source storage server which the current file uploaded to
download_server = 0
# reserved storage space for system or other applications.
# if the free(available) space of any storage server in
# a group <= reserved_storage_space, no file can be uploaded to this group.
# bytes unit can be one of follows:
### G or g for gigabyte(GB)
### M or m for megabyte(MB)
### K or k for kilobyte(KB)
### no unit for byte(B)
### XX.XX% as ratio such as: reserved_storage_space = 10%
reserved_storage_space = 20%
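As a worked example of the ratio form (the 500 GiB volume size below is an assumption for illustration): with reserved_storage_space = 20%, uploads to the group stop once any storage server's free space falls to 20% of the volume.

```shell
# Worked example: bytes reserved by reserved_storage_space = 20%
# on an assumed 500 GiB volume (not a FastDFS tool).
disk_total_bytes=$((500 * 1024 * 1024 * 1024))
reserved_percent=20
reserved_bytes=$((disk_total_bytes * reserved_percent / 100))
echo "$reserved_bytes"
# → 107374182400 (100 GiB)
```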
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info
#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group=
#unix username to run this program,
#not set (empty) means run by current user
run_by_user =
# allow_hosts can occur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts = *
# sync log buff to disk every interval seconds
# default value is 10 seconds
sync_log_buff_interval = 1
# check storage server alive interval seconds
check_active_interval = 120
# thread stack size, should >= 64KB
# default value is 256KB
thread_stack_size = 256KB
# auto adjust when the ip address of the storage server changed
# default value is true
storage_ip_changed_auto_adjust = true
# storage sync file max delay seconds
# default value is 86400 seconds (one day)
# since V2.00
storage_sync_file_max_delay = 86400
# the max time of storage sync a file
# default value is 300 seconds
# since V2.00
storage_sync_file_max_time = 300
# if use a trunk file to store several small files
# default value is false
# since V3.00
use_trunk_file = false
# the min slot size, should <= 4KB
# default value is 256 bytes
# since V3.00
slot_min_size = 256
# the max slot size, should > slot_min_size
# store the upload file to trunk file when its size <= this value
# default value is 16MB
# since V3.00
slot_max_size = 1MB
# the alignment size to allocate the trunk space
# default value is 0 (never align)
# since V6.05
# NOTE: the larger the alignment size, the less likely of disk
# fragmentation, but the more space is wasted.
trunk_alloc_alignment_size = 256
# if merge contiguous free spaces of trunk file
# default value is false
# since V6.05
trunk_free_space_merge = true
# if delete / reclaim the unused trunk files
# default value is false
# since V6.05
delete_unused_trunk_files = false
# the trunk file size, should >= 4MB
# default value is 64MB
# since V3.00
trunk_file_size = 64MB
# if create trunk files in advance
# default value is false
# since V3.06
trunk_create_file_advance = false
# the time base to create trunk file
# the time format: HH:MM
# default value is 02:00
# since V3.06
trunk_create_file_time_base = 02:00
# the interval of create trunk file, unit: second
# default value is 86400 (one day)
# since V3.06
trunk_create_file_interval = 86400
# the threshold to create trunk file
# when the free trunk space is less than this threshold,
# trunk files will be created in advance
# default value is 0
# since V3.06
trunk_create_file_space_threshold = 20G
# if check trunk space occupying when loading trunk free spaces
# the occupied spaces will be ignored
# default value is false
# since V3.09
# NOTICE: setting this parameter to true will slow the loading of trunk spaces
# at startup. you should set this parameter to true only when necessary.
trunk_init_check_occupying = false
# if ignore storage_trunk.dat, reload from trunk binlog
# default value is false
# since V3.10
# set to true once for version upgrade when your version less than V3.10
trunk_init_reload_from_binlog = false
# the min interval for compressing the trunk binlog file
# unit: second, 0 means never compress
# FastDFS compress the trunk binlog when trunk init and trunk destroy
# recommend to set this parameter to 86400 (one day)
# default value is 0
# since V5.01
trunk_compress_binlog_min_interval = 86400
# the interval for compressing the trunk binlog file
# unit: second, 0 means never compress
# recommend to set this parameter to 86400 (one day)
# default value is 0
# since V6.05
trunk_compress_binlog_interval = 86400
# compress the trunk binlog time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 03:00
# since V6.05
trunk_compress_binlog_time_base = 03:00
# max backups for the trunk binlog file
# default value is 0 (never backup)
# since V6.05
trunk_binlog_max_backups = 7
# if use storage server ID instead of IP address
# if you want to use dual IPs for storage server, you MUST set
# this parameter to true, and configure the dual IPs in the file
# configured by following item "storage_ids_filename", such as storage_ids.conf
# default value is false
# since V4.00
use_storage_id = false
# specify storage ids filename, can use relative or absolute path
# this parameter is valid only when use_storage_id set to true
# since V4.00
storage_ids_filename = storage_ids.conf
# id type of the storage server in the filename, values are:
## ip: the ip address of the storage server
## id: the server id of the storage server
# this parameter is valid only when use_storage_id set to true
# default value is ip
# since V4.03
id_type_in_filename = id
# if store slave file use symbol link
# default value is false
# since V4.01
store_slave_file_use_link = false
# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false
# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time = 00:00
# if compress the old error log by gzip
# default value is false
# since V6.04
compress_old_error_log = false
# compress the error log days before
# default value is 1
# since V6.04
compress_error_log_days_before = 7
# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0
# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = true
# connections whose idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600
# HTTP port on this tracker server
http.server_port = 8080
# check storage HTTP server alive interval seconds
# <= 0 for never check
# default value is 30
http.check_alive_interval = 30
# check storage HTTP server alive type, values are:
# tcp : connect to the storage server with HTTP port only,
# do not request and get response
# http: storage check alive url must return http status 200
# default value is tcp
http.check_alive_type = tcp
# check storage HTTP server alive uri/url
# NOTE: storage embed HTTP server support uri: /status.html
http.check_alive_uri = /status.html


@ -0,0 +1,36 @@
#http server
#
server {
listen 9088;
server_name localhost;
# disable favicon logging to avoid open() "/usr/local/nginx/html/favicon.ico" failed (2: No such file or directory) errors
location = /favicon.ico {
log_not_found off;
access_log off;
}
# proxy HTTP file access requests to the fastdfs nginx module and do not log these requests
location ~/group[0-9]/ {
ngx_fastdfs_module;
log_not_found off;
access_log off;
}
# location ~ /group1/M00 {
# alias /data/fastdfs/upload/path0;
# ngx_fastdfs_module;
# }
# location ~ /group1/M01 {
# alias /data/fastdfs/upload/path1;
# ngx_fastdfs_module;
# }
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}


@ -0,0 +1,33 @@
worker_processes 1;
worker_rlimit_nofile 65535; # be sure to raise the server's max open files limit first
error_log /data/fastdfs_data/logs/nginx-error.log;
events {
use epoll; # use epoll on Linux 2.6+
worker_connections 65535;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /data/fastdfs_data/logs/nginx-access.log main;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_min_length 2k;
gzip_buffers 8 32k;
gzip_http_version 1.1;
gzip_comp_level 2;
gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
gzip_vary on;
include /usr/local/nginx/conf.d/*.conf;
}


@ -0,0 +1,55 @@
#!/bin/sh
# FastDFS paths used by the config files; create them in advance
FASTDFS_BASE_PATH=/data/fastdfs_data
FASTDFS_STORE_PATH=/data/fastdfs/upload
# startup parameter:
# - tracker : start the tracker_server service
# - storage : start the storage service
start_parameter=$1
if [ ! -d "$FASTDFS_BASE_PATH" ]; then
mkdir -p ${FASTDFS_BASE_PATH};
fi
start_tracker() {
/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf
tail -f /data/fastdfs_data/logs/trackerd.log
}
start_storage() {
if [ ! -d "$FASTDFS_STORE_PATH" ]; then
mkdir -p ${FASTDFS_STORE_PATH}/{path0,path1,path2,path3};
fi
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf;
sleep 5
# the nginx log directory is /data/fastdfs_data/logs/; create it manually in case storage starts slowly and has not created the logs directory yet
if [ ! -d "$FASTDFS_BASE_PATH/logs" ]; then
mkdir -p ${FASTDFS_BASE_PATH}/logs;
fi
/usr/local/nginx/sbin/nginx;
tail -f /data/fastdfs_data/logs/storaged.log;
}
run() {
case ${start_parameter} in
tracker)
echo "starting tracker"
start_tracker
;;
storage)
echo "starting storage"
start_storage
;;
*)
echo "please specify which service to start: pass tracker or storage as the argument"
esac
}
run


@ -0,0 +1,83 @@
# choose a base system image; the tiny alpine Linux image can be used
#FROM centos:7
FROM alpine:3.16
LABEL MAINTAINER liyanjing 284223249@qq.com
# NOTE:
# v6.0.9 depends on libfastcommon and libserverframe; v6.0.8 and below depend on libevent and libfastcommon (libfastcommon is provided by the FastDFS project)
# v6.0.9 pairs with fastdfs-nginx-module-1.23; v6.0.8 and below use fastdfs-nginx-module-1.22
# 0. install package location, fdfs base directory and store directory
ENV INSTALL_PATH=/usr/local/src \
LIBFASTCOMMON_VERSION="1.0.60" \
LIBSERVERFRAME_VERSION="1.1.19" \
FASTDFS_VERSION="V6.09" \
FASTDFS_NGINX_MODULE_VERSION="1.23" \
NGINX_VERSION="1.22.0" \
TENGINE_VERSION="2.3.3"
# 0.change the system source for installing libs
RUN echo "http://mirrors.aliyun.com/alpine/v3.16/main" > /etc/apk/repositories \
&& echo "http://mirrors.aliyun.com/alpine/v3.16/community" >> /etc/apk/repositories
# 1. copy the install packages
ADD soft ${INSTALL_PATH}
# 2. environment setup
# - create the fdfs store directory
# - install dependencies
# - install libfastcommon
# - install fastdfs
# - install nginx and wire nginx up with fastdfs
#Run yum -y install -y gcc gcc-c++ libevent libevent-devel make automake autoconf libtool perl pcre pcre-devel zlib zlib-devel openssl openssl-devel zip unzip net-tools wget vim lsof \
RUN apk update && apk add --no-cache --virtual .build-deps bash autoconf gcc libc-dev make pcre-dev zlib-dev linux-headers gnupg libxslt-dev gd-dev geoip-dev wget \
&& cd ${INSTALL_PATH} \
&& tar -zxf libfastcommon-${LIBFASTCOMMON_VERSION}.tar.gz \
&& tar -zxf libserverframe-${LIBSERVERFRAME_VERSION}.tar.gz \
&& tar -zxf fastdfs-${FASTDFS_VERSION}.tar.gz \
&& tar -zxf fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}.tar.gz \
&& tar -zxf nginx-${NGINX_VERSION}.tar.gz \
\
&& cd ${INSTALL_PATH}/libfastcommon-${LIBFASTCOMMON_VERSION}/ \
&& ./make.sh \
&& ./make.sh install \
&& cd ${INSTALL_PATH}/libserverframe-${LIBSERVERFRAME_VERSION}/ \
&& ./make.sh \
&& ./make.sh install \
&& cd ${INSTALL_PATH}/fastdfs-${FASTDFS_VERSION}/ \
&& ./make.sh \
&& ./make.sh install \
\
&& cd ${INSTALL_PATH}/nginx-${NGINX_VERSION}/ \
&& ./configure --prefix=/usr/local/nginx --pid-path=/var/run/nginx/nginx.pid --with-http_stub_status_module --with-http_gzip_static_module --with-http_realip_module --with-http_sub_module --with-stream=dynamic \
--add-module=${INSTALL_PATH}/fastdfs-nginx-module-${FASTDFS_NGINX_MODULE_VERSION}/src/ \
&& make \
&& make install \
\
&& rm -rf ${INSTALL_PATH}/* \
&& apk del .build-deps gcc libc-dev make linux-headers gnupg libxslt-dev gd-dev geoip-dev wget
# 3. copy the config files; a destination path ending with / is treated by docker as a directory and auto-created when missing
COPY conf/*.* /etc/fdfs/
COPY nginx_conf/nginx.conf /usr/local/nginx/conf/
COPY nginx_conf.d/*.conf /usr/local/nginx/conf.d/
COPY start.sh /
ENV TZ=Asia/Shanghai
# 4. make the start script executable and set the timezone to China
RUN chmod u+x /start.sh \
&& apk add --no-cache bash pcre-dev zlib-dev \
\
&& apk add -U tzdata \
&& ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone \
&& apk del tzdata && rm -rf /var/cache/apk/*
EXPOSE 22122 23000 9088
WORKDIR /
# image entrypoint
ENTRYPOINT ["/bin/bash","/start.sh"]

Binary file not shown.



@ -0,0 +1,71 @@
# connect timeout in seconds
# default value is 30s
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5
# network timeout in seconds
# default value is 30s
network_timeout = 60
# the base path to store log files
base_path = /data/fastdfs_data
# tracker_server can occur more than once for multi tracker servers.
# the value format of tracker_server is "HOST:PORT",
# the HOST can be hostname or ip address,
# and the HOST can be dual IPs or hostnames separated by comma,
# the dual IPs must be an inner (intranet) IP and an outer (extranet) IP,
# or two different types of inner (intranet) IPs.
# for example: 192.168.2.100,122.244.141.46:22122
# another eg.: 192.168.1.10,172.17.4.21:22122
tracker_server = 192.168.0.196:22122
tracker_server = 192.168.0.197:22122
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false
# connections whose idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600
# if load FastDFS parameters from tracker server
# since V4.05
# default value is false
load_fdfs_parameters_from_tracker = false
# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V4.05
use_storage_id = false
# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V4.05
storage_ids_filename = storage_ids.conf
#HTTP settings
http.tracker_server_port = 80
# use the "#include" directive to include other HTTP settings
##include http.conf


@ -0,0 +1,29 @@
# HTTP default content type
http.default_content_type = application/octet-stream
# MIME types mapping filename
# MIME types file format: MIME_type extensions
# such as: image/jpeg jpeg jpg jpe
# you can use apache's MIME file: mime.types
http.mime_types_filename = mime.types
# if use token to anti-steal
# default value is false (0)
http.anti_steal.check_token = false
# token TTL (time to live), seconds
# default value is 600
http.anti_steal.token_ttl = 900
# secret key to generate anti-steal token
# this parameter must be set when http.anti_steal.check_token set to true
# the length of the secret key should not exceed 128 bytes
http.anti_steal.secret_key = FastDFS1234567890
# return the content of the file when check token fail
# default value is empty (no file specified)
http.anti_steal.token_check_fail = /home/yuqing/fastdfs/conf/anti-steal.jpg
# if support multi regions for HTTP Range
# default value is true
http.multi_range.enabed = true
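When http.anti_steal.check_token is enabled, clients typically append token and ts query parameters, with the token computed as an md5 over the file id, the secret key, and a unix timestamp. The sketch below assumes that commonly documented concatenation order; verify it against fdfs_http_gen_token in your FastDFS version before relying on it. The file id and host are placeholders.

```shell
# Sketch of anti-steal token generation (assumed scheme:
# md5(file_id + secret_key + ts); verify against fdfs_http_gen_token).
# file_id and storage-host are placeholders.
file_id="M00/00/00/example.jpg"
secret_key="FastDFS1234567890"
ts=$(date +%s)
token=$(printf '%s%s%s' "$file_id" "$secret_key" "$ts" | md5sum | awk '{print $1}')
echo "http://storage-host/group1/${file_id}?token=${token}&ts=${ts}"
```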

File diff suppressed because it is too large


@ -0,0 +1,137 @@
# connect timeout in seconds
# default value is 30s
connect_timeout=15
# network recv and send timeout in seconds
# default value is 30s
network_timeout=30
# the base path to store log files
base_path=/data/fastdfs_data
# if load FastDFS parameters from tracker server
# since V1.12
# default value is false
load_fdfs_parameters_from_tracker=true
# storage sync file max delay seconds
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.12
# default value is 86400 seconds (one day)
storage_sync_file_max_delay = 86400
# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V1.13
use_storage_id = false
# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.13
storage_ids_filename = storage_ids.conf
# FastDFS tracker_server can occur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
# valid only when load_fdfs_parameters_from_tracker is true
tracker_server = 192.168.209.121:22122
tracker_server = 192.168.209.122:22122
# the port of the local storage server
# the default value is 23000
storage_server_port=23000
# the group name of the local storage server
group_name=group1
# if the url / uri including the group name
# set to false when uri like /M00/00/00/xxx
# set to true when uri like ${group_name}/M00/00/00/xxx, such as group1/M00/xxx
# default value is false
url_have_group_name = true
# path(disk or mount point) count, default value is 1
# must same as storage.conf
store_path_count=1
# store_path#, based on 0, if store_path0 not exists, its value is base_path
# the paths must exist
# must same as storage.conf
store_path0=/data/fastdfs/upload/path0
#store_path1=/home/yuqing/fastdfs1
# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info
# set the log filename, such as /usr/local/apache2/logs/mod_fastdfs.log
# empty for output to stderr (apache and nginx error_log file)
log_filename=
# response mode when the file not exist in the local file system
## proxy: get the content from other storage server, then send to client
## redirect: redirect to the original storage server (HTTP Header is Location)
response_mode=proxy
# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# this parameter is used to get all ip addresses of the local host
# default value is empty
if_alias_prefix=
# use "#include" directive to include HTTP config file
# NOTE: #include is an include directive, do NOT remove the # before include
#include http.conf
# if support flv
# default value is false
# since v1.15
flv_support = true
# flv file extension name
# default value is flv
# since v1.15
flv_extension = flv
## When this storage server hosts multiple groups, set this to the number of groups; use 0 for a single group.
## There is no need to run multiple storage instances for multiple groups on one server, since a single storage instance supports multiple store paths.
# set the group count
# set to a non-zero value to support multi-group on this storage server
# set to 0 for single group only
# groups settings section as [group1], [group2], ..., [groupN]
# default value is 0
# since v1.14
group_count = 0
## When supporting multiple groups on this storage server, set this to the number of groups
# group settings for group #1
# since v1.14
# when support multi-group on this storage server, uncomment following section
#[group1]
#group_name=group1
#storage_server_port=23000
#store_path_count=2
#store_path0=/home/yuqing/fastdfs
#store_path1=/home/yuqing/fastdfs1
# group settings for group #2
# since v1.14
# when support multi-group, uncomment following section as necessary
#[group2]
#group_name=group2
#storage_server_port=23000
#store_path_count=1
#store_path0=/home/yuqing/fastdfs


@ -0,0 +1,353 @@
# is this config file disabled
# false for enabled
# true for disabled
disabled = false
# the name of the group this storage server belongs to
#
# comment or remove this item for fetching from tracker server,
# in this case, use_storage_id must be set to true in tracker.conf,
# and storage_ids.conf must be configured correctly.
group_name = group1
# bind an address of this host
# empty for bind all addresses of this host
bind_addr =
# if bind an address of this host when connect to other servers
# (this storage server as a client)
# true for binding the address configured by the above parameter: "bind_addr"
# false for binding any address of this host
client_bind = true
# the storage server port
port = 23000
# connect timeout in seconds
# default value is 30
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5
# network timeout in seconds for send and recv
# default value is 30
network_timeout = 60
# the heart beat interval in seconds
# the storage server send heartbeat to tracker server periodically
# default value is 30
heart_beat_interval = 30
# disk usage report interval in seconds
# the storage server send disk usage report to tracker server periodically
# default value is 300
stat_report_interval = 60
# the base path to store data and log files
# NOTE: the binlog files may be large, make sure
# the base path has enough disk space,
# eg. the disk free space should > 50GB
base_path = /data/fastdfs_data
# max concurrent connections the server supports,
# you should set this parameter larger, eg. 10240
# default value is 256
max_connections = 1024
# the buff size to recv / send data from/to network
# this parameter must be more than 8KB
# 256KB or 512KB is recommended
# default value is 64KB
# since V2.00
buff_size = 256KB
# accept thread count
# default value is 1 which is recommended
# since V4.07
accept_threads = 1
# work thread count
# work threads to deal network io
# default value is 4
# since V2.00
work_threads = 4
# if disk read / write separated
## false for mixed read and write
## true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true
# disk reader thread count per store path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1
# disk writer thread count per store path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1
# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
sync_wait_msec = 50
# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval = 0
# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time = 00:00
# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time = 23:59
# write to the mark file after sync N files
# default value is 500
write_mark_file_freq = 500
# disk recovery thread count
# default value is 1
# since V6.04
disk_recovery_threads = 3
# store path (disk or mount point) count, default value is 1
store_path_count = 1
# store_path#, based on 0, to configure the store paths to store files
# if store_path0 does not exist, its value is base_path (NOT recommended)
# the paths must exist.
#
# IMPORTANT NOTE:
# the store paths' order is very important, don't mess up!!!
# the base_path should be independent (different) of the store paths
store_path0 = /data/fastdfs/upload/path0
#store_path1 = /home/yuqing/fastdfs2
# subdir_count * subdir_count directories will be auto created under each
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path = 256
# tracker_server can occur more than once for multiple tracker servers.
# the value format of tracker_server is "HOST:PORT",
# the HOST can be a hostname or an ip address,
# and the HOST can be dual IPs or hostnames separated by a comma;
# the dual IPs must be an inner (intranet) IP and an outer (extranet) IP,
# or two different types of inner (intranet) IPs.
# for example: 192.168.2.100,122.244.141.46:22122
# another eg.: 192.168.1.10,172.17.4.21:22122
tracker_server = 192.168.209.121:22122
tracker_server = 192.168.209.122:22122
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info
#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group =
#unix username to run this program,
#not set (empty) means run by current user
run_by_user =
# allow_hosts can occur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts = *
# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributed by hash code
file_distribute_path_mode = 0
# valid when file_distribute_path_mode is set to 0 (round robin).
# when the written file count reaches this number, then rotate to next path.
# rotate to the first path (00/00) after the last path (such as FF/FF).
# default value is 100
file_distribute_rotate_count = 100
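The round-robin rotation described above can be sketched in Python. This is an illustration of the documented behavior only, not FastDFS's actual C code; the helper name `next_subdir` is made up:

```python
# Sketch of round-robin data-path rotation (illustration only, not FastDFS source).
# With subdir_count_per_path = 256, subdirectories run from 00/00 to FF/FF.
# Every `rotate_count` written files, the writer advances to the next
# subdirectory, wrapping from the last path (FF/FF) back to the first (00/00).

def next_subdir(written_files: int, rotate_count: int = 100,
                subdir_count: int = 256) -> str:
    """Return the two-level hex subdir for the Nth written file."""
    step = written_files // rotate_count          # rotations completed so far
    index = step % (subdir_count * subdir_count)  # wrap after the last path
    high, low = divmod(index, subdir_count)
    return f"{high:02X}/{low:02X}"

print(next_subdir(0))      # first 100 files land in 00/00
print(next_subdir(100))    # files 100..199 land in 00/01
```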
# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes = 0
# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
sync_log_buff_interval = 1
# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
sync_binlog_buff_interval = 1
# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval = 300
# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size = 512KB
# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority = 10
# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multiple aliases are separated by comma. empty value means auto set by OS type
# default value is empty
if_alias_prefix =
# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate = 0
# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method = hash
# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace = FastDFS
# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive = 0
# you can use "#include filename" (not include double quotes) directive to
# load FastDHT server list, when the filename is a relative path such as
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf
# if log to access log
# default value is false
# since V4.00
use_access_log = false
# if rotate the access log every day
# default value is false
# since V4.00
rotate_access_log = false
# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time = 00:00
# if compress the old access log by gzip
# default value is false
# since V6.04
compress_old_access_log = false
# compress the access log days before
# default value is 1
# since V6.04
compress_access_log_days_before = 7
# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false
# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time = 00:00
# if compress the old error log by gzip
# default value is false
# since V6.04
compress_old_error_log = false
# compress the error log days before
# default value is 1
# since V6.04
compress_error_log_days_before = 7
# rotate access log when the log file exceeds this size
# 0 means never rotate the log file by size
# default value is 0
# since V4.02
rotate_access_log_size = 0
# rotate error log when the log file exceeds this size
# 0 means never rotate the log file by size
# default value is 0
# since V4.02
rotate_error_log_size = 0
# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0
# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record = false
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = true
# connections whose idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600
# if compress the binlog files by gzip
# default value is false
# since V6.01
compress_binlog = true
# try to compress binlog time, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 01:30
# since V6.01
compress_binlog_time = 01:30
# if check the mark of store path to prevent confusion
# recommend to set this parameter to true
# if two storage servers (instances) MUST use a same store path for
# some specific purposes, you should set this parameter to false
# default value is true
# since V6.03
check_store_path_mark = true
# use the ip address of this storage server if domain_name is empty,
# else this domain name will occur in the url redirected by the tracker server
http.domain_name =
# the port of the web server on this storage server
http.server_port = 8888


@ -0,0 +1,16 @@
# <id> <group_name> <ip_or_hostname[:port]>
#
# id is a natural number (1, 2, 3 etc.),
# an id length of 6 digits is enough, such as 100001
#
# storage ip or hostname can be dual IPs separated by a comma,
# one is an inner (intranet) IP and another is an outer (extranet) IP,
# or two different types of inner (intranet) IPs
# for example: 192.168.2.100,122.244.141.46
# another eg.: 192.168.1.10,172.17.4.21
#
# the port is optional. if you run more than one storaged instance
# on a server, you must specify the port to distinguish different instances.
#100001 group1 192.168.0.196
#100002 group1 192.168.0.197
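The line format above can be parsed with a short sketch like the following (a hypothetical helper, not part of FastDFS; IPv6 addresses are not handled):

```python
# Hypothetical parser for "<id> <group_name> <ip_or_hostname[:port]>" lines.
# Comma-separated dual IPs and the optional :port are split out as described above.

def parse_storage_ids(text: str):
    servers = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        server_id, group, addr = line.split()
        host_part, _, port = addr.partition(":")
        servers.append({
            "id": server_id,
            "group": group,
            "hosts": host_part.split(","),     # dual IPs separated by comma
            "port": int(port) if port else None,
        })
    return servers

conf = """
100001 group1 192.168.0.196
100002 group1 192.168.2.100,122.244.141.46:23000
"""
print(parse_storage_ids(conf))
```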


@ -0,0 +1,329 @@
# is this config file disabled
# false for enabled
# true for disabled
disabled = false
# bind an address of this host
# empty for bind all addresses of this host
bind_addr =
# the tracker server port
port = 22122
# connect timeout in seconds
# default value is 30
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5
# network timeout in seconds for send and recv
# default value is 30
network_timeout = 60
# the base path to store data and log files
base_path = /data/fastdfs_data
# max concurrent connections this server supports
# you should set this parameter larger, eg. 10240
# default value is 256
max_connections = 1024
# accept thread count
# default value is 1 which is recommended
# since V4.07
accept_threads = 1
# work thread count
# work threads to deal network io
# default value is 4
# since V2.00
work_threads = 4
# the min network buff size
# default value 8KB
min_buff_size = 8KB
# the max network buff size
# default value 128KB
max_buff_size = 128KB
# the method for selecting group to upload files
# 0: round robin
# 1: specify group
# 2: load balance, select the max free space group to upload file
store_lookup = 2
# which group to upload file
# when store_lookup is set to 1, store_group must be set to the group name
store_group = group2
# which storage server to upload file
# 0: round robin (default)
# 1: the first server order by ip address
# 2: the first server order by priority (the minimal)
# Note: if use_trunk_file is set to true, store_server must be set to 1 or 2
store_server = 0
# which path (means disk or mount point) of the storage server to upload file
# 0: round robin
# 2: load balance, select the max free space path to upload file
store_path = 0
# which storage server to download file
# 0: round robin (default)
# 1: the source storage server which the current file uploaded to
download_server = 0
# reserved storage space for system or other applications.
# if the free (available) space of any storage server in
# a group <= reserved_storage_space, no file can be uploaded to this group.
# bytes unit can be one of follows:
### G or g for gigabyte(GB)
### M or m for megabyte(MB)
### K or k for kilobyte(KB)
### no unit for byte(B)
### XX.XX% as ratio such as: reserved_storage_space = 10%
reserved_storage_space = 20%
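How a value such as `20%` or `10G` translates into bytes can be sketched as follows. The unit table and percentage handling mirror the comment above, but the function itself is illustrative only; FastDFS's actual parsing lives in its C code:

```python
# Illustrative parser for reserved_storage_space values: plain bytes, bytes
# with a G/M/K suffix, or a percentage of the total disk capacity.

UNITS = {"G": 1024 ** 3, "M": 1024 ** 2, "K": 1024}

def reserved_bytes(value: str, total_capacity: int) -> int:
    value = value.strip()
    if value.endswith("%"):                       # ratio form, e.g. "20%"
        return int(total_capacity * float(value[:-1]) / 100)
    unit = value[-1].upper()
    if unit in UNITS:                             # e.g. "10G", "512M"
        return int(float(value[:-1]) * UNITS[unit])
    return int(value)                             # no unit: plain bytes

total = 4 * 1024 ** 4                             # a 4 TB disk, for example
print(reserved_bytes("20%", total))               # 20% of capacity
print(reserved_bytes("10G", total))
```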
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info
#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group =
#unix username to run this program,
#not set (empty) means run by current user
run_by_user =
# allow_hosts can occur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts = *
# sync log buff to disk every interval seconds
# default value is 10 seconds
sync_log_buff_interval = 1
# check storage server alive interval seconds
check_active_interval = 120
# thread stack size, should >= 64KB
# default value is 256KB
thread_stack_size = 256KB
# auto adjust when the ip address of the storage server changed
# default value is true
storage_ip_changed_auto_adjust = true
# storage sync file max delay seconds
# default value is 86400 seconds (one day)
# since V2.00
storage_sync_file_max_delay = 86400
# the max time for the storage server to sync a file
# default value is 300 seconds
# since V2.00
storage_sync_file_max_time = 300
# if use a trunk file to store several small files
# default value is false
# since V3.00
use_trunk_file = false
# the min slot size, should <= 4KB
# default value is 256 bytes
# since V3.00
slot_min_size = 256
# the max slot size, should > slot_min_size
# store the upload file to a trunk file when its size <= this value
# default value is 16MB
# since V3.00
slot_max_size = 1MB
# the alignment size to allocate the trunk space
# default value is 0 (never align)
# since V6.05
# NOTE: the larger the alignment size, the less likely disk
# fragmentation is, but the more space is wasted.
trunk_alloc_alignment_size = 256
# if merge contiguous free spaces of trunk file
# default value is false
# since V6.05
trunk_free_space_merge = true
# if delete / reclaim the unused trunk files
# default value is false
# since V6.05
delete_unused_trunk_files = false
# the trunk file size, should >= 4MB
# default value is 64MB
# since V3.00
trunk_file_size = 64MB
# if create trunk files in advance
# default value is false
# since V3.06
trunk_create_file_advance = false
# the time base to create trunk file
# the time format: HH:MM
# default value is 02:00
# since V3.06
trunk_create_file_time_base = 02:00
# the interval of create trunk file, unit: second
# default value is 86400 (one day)
# since V3.06
trunk_create_file_interval = 86400
# the threshold to create trunk file
# when the free trunk file size is less than the threshold,
# the trunk files will be created
# default value is 0
# since V3.06
trunk_create_file_space_threshold = 20G
# if check trunk space occupying when loading trunk free spaces
# the occupied spaces will be ignored
# default value is false
# since V3.09
# NOTICE: setting this parameter to true will slow the loading of trunk spaces
# at startup. you should set this parameter to true only when necessary.
trunk_init_check_occupying = false
# if ignore storage_trunk.dat, reload from trunk binlog
# default value is false
# since V3.10
# set to true once for version upgrade when your version is less than V3.10
trunk_init_reload_from_binlog = false
# the min interval for compressing the trunk binlog file
# unit: second, 0 means never compress
# FastDFS compress the trunk binlog when trunk init and trunk destroy
# recommend setting this parameter to 86400 (one day)
# default value is 0
# since V5.01
trunk_compress_binlog_min_interval = 86400
# the interval for compressing the trunk binlog file
# unit: second, 0 means never compress
# recommend setting this parameter to 86400 (one day)
# default value is 0
# since V6.05
trunk_compress_binlog_interval = 86400
# compress the trunk binlog time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 03:00
# since V6.05
trunk_compress_binlog_time_base = 03:00
# max backups for the trunk binlog file
# default value is 0 (never backup)
# since V6.05
trunk_binlog_max_backups = 7
# if use storage server ID instead of IP address
# if you want to use dual IPs for storage server, you MUST set
# this parameter to true, and configure the dual IPs in the file
# configured by following item "storage_ids_filename", such as storage_ids.conf
# default value is false
# since V4.00
use_storage_id = false
# specify storage ids filename, can use relative or absolute path
# this parameter is valid only when use_storage_id set to true
# since V4.00
storage_ids_filename = storage_ids.conf
# id type of the storage server in the filename, values are:
## ip: the ip address of the storage server
## id: the server id of the storage server
# this parameter is valid only when use_storage_id is set to true
# default value is ip
# since V4.03
id_type_in_filename = id
# if the slave file is stored using a symbolic link
# default value is false
# since V4.01
store_slave_file_use_link = false
# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false
# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time = 00:00
# if compress the old error log by gzip
# default value is false
# since V6.04
compress_old_error_log = false
# compress the error log days before
# default value is 1
# since V6.04
compress_error_log_days_before = 7
# rotate error log when the log file exceeds this size
# 0 means never rotate the log file by size
# default value is 0
# since V4.02
rotate_error_log_size = 0
# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = true
# connections whose idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600
# HTTP port on this tracker server
http.server_port = 8080
# check storage HTTP server alive interval seconds
# <= 0 for never check
# default value is 30
http.check_alive_interval = 30
# check storage HTTP server alive type, values are:
# tcp : connect to the storage server with HTTP port only,
# do not request and get response
# http: storage check alive url must return http status 200
# default value is tcp
http.check_alive_type = tcp
# check storage HTTP server alive uri/url
# NOTE: storage embed HTTP server support uri: /status.html
http.check_alive_uri = /status.html
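The two check-alive modes can be sketched as a small probe. This is a hypothetical helper based on the descriptions above, not tracker code; `storage_alive` and its defaults are made up:

```python
# Illustrative health check matching the two http.check_alive_type modes:
# "tcp" only connects to the HTTP port; "http" requires a 200 from the URI.
import socket
import urllib.request

def storage_alive(host: str, port: int, check_type: str = "tcp",
                  uri: str = "/status.html", timeout: float = 2.0) -> bool:
    if check_type == "tcp":
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True              # connect succeeded; no request sent
        except OSError:
            return False
    try:  # "http" mode: the check-alive URI must return HTTP status 200
        with urllib.request.urlopen(f"http://{host}:{port}{uri}",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except OSError:                      # covers URLError / HTTPError too
        return False
```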


@ -0,0 +1,36 @@
#http server
#
server {
listen 9088;
server_name localhost;
# disable favicon logging to avoid "open() /usr/local/nginx/html/favicon.ico failed (2: No such file or directory)" errors
location = /favicon.ico {
log_not_found off;
access_log off;
}
# reverse proxy HTTP file access requests to the ngx_fastdfs_module extension without logging them
location ~/group[0-9]/ {
ngx_fastdfs_module;
log_not_found off;
access_log off;
}
# location ~ /group1/M00 {
# alias /data/fastdfs/upload/path0;
# ngx_fastdfs_module;
# }
# location ~ /group1/M01 {
# alias /data/fastdfs/upload/path1;
# ngx_fastdfs_module;
# }
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}


@ -0,0 +1,33 @@
worker_processes 1;
worker_rlimit_nofile 65535; # be sure to raise the server's max open files limit first.
error_log /data/fastdfs_data/logs/nginx-error.log;
events {
use epoll; # use epoll if the server runs Linux 2.6+.
worker_connections 65535;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /data/fastdfs_data/logs/nginx-access.log main;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_min_length 2k;
gzip_buffers 8 32k;
gzip_http_version 1.1;
gzip_comp_level 2;
gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
gzip_vary on;
include /usr/local/nginx/conf.d/*.conf;
}


@ -0,0 +1,55 @@
#!/bin/bash
# storage paths used by the FastDFS config files; they must be created in advance
FASTDFS_BASE_PATH=/data/fastdfs_data
FASTDFS_STORE_PATH=/data/fastdfs/upload
# startup argument:
# - tracker : start the tracker_server service
# - storage : start the storage service
start_parameter=$1
if [ ! -d "$FASTDFS_BASE_PATH" ]; then
mkdir -p ${FASTDFS_BASE_PATH};
fi
function start_tracker(){
/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf
tail -f /data/fastdfs_data/logs/trackerd.log
}
function start_storage(){
if [ ! -d "$FASTDFS_STORE_PATH" ]; then
mkdir -p ${FASTDFS_STORE_PATH}/{path0,path1,path2,path3};
fi
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf;
sleep 5
# nginx logs go under /data/fastdfs_data/logs/; create it manually in case storage starts slowly and has not created the logs directory yet
if [ ! -d "$FASTDFS_BASE_PATH/logs" ]; then
mkdir -p ${FASTDFS_BASE_PATH}/logs;
fi
/usr/local/nginx/sbin/nginx;
tail -f /data/fastdfs_data/logs/storaged.log;
}
function run (){
case ${start_parameter} in
tracker)
echo "starting tracker"
start_tracker
;;
storage)
echo "starting storage"
start_storage
;;
*)
echo "please specify which service to start: pass tracker or storage as the argument"
esac
}
run

Binary file not shown (image, 23 KiB).


@ -0,0 +1,71 @@
# connect timeout in seconds
# default value is 30s
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5
# network timeout in seconds
# default value is 30s
network_timeout = 60
# the base path to store log files
base_path = /data/fastdfs_data
# tracker_server can occur more than once for multiple tracker servers.
# the value format of tracker_server is "HOST:PORT",
# the HOST can be a hostname or an ip address,
# and the HOST can be dual IPs or hostnames separated by a comma;
# the dual IPs must be an inner (intranet) IP and an outer (extranet) IP,
# or two different types of inner (intranet) IPs.
# for example: 192.168.2.100,122.244.141.46:22122
# another eg.: 192.168.1.10,172.17.4.21:22122
tracker_server = 192.168.0.196:22122
tracker_server = 192.168.0.197:22122
#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info
# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false
# connections whose idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600
# if load FastDFS parameters from tracker server
# since V4.05
# default value is false
load_fdfs_parameters_from_tracker = false
# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V4.05
use_storage_id = false
# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V4.05
storage_ids_filename = storage_ids.conf
#HTTP settings
http.tracker_server_port = 80
#use "#include" directive to include other HTTP settings
##include http.conf


@ -0,0 +1,29 @@
# HTTP default content type
http.default_content_type = application/octet-stream
# MIME types mapping filename
# MIME types file format: MIME_type extensions
# such as: image/jpeg jpeg jpg jpe
# you can use apache's MIME file: mime.types
http.mime_types_filename = mime.types
# if use token to anti-steal
# default value is false (0)
http.anti_steal.check_token = false
# token TTL (time to live), seconds
# default value is 600
http.anti_steal.token_ttl = 900
# secret key to generate anti-steal token
# this parameter must be set when http.anti_steal.check_token set to true
# the length of the secret key should not exceed 128 bytes
http.anti_steal.secret_key = FastDFS1234567890
# return the content of the file when check token fail
# default value is empty (no file specified)
http.anti_steal.token_check_fail = /home/yuqing/fastdfs/conf/anti-steal.jpg
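For reference, FastDFS-style anti-steal tokens are commonly computed as an MD5 over the file id, the secret key, and a timestamp, with the token and timestamp appended to the download URL. The exact concatenation order below is an assumption; verify it against your client library before relying on it:

```python
# Assumed anti-steal token scheme: md5(file_id + secret_key + timestamp).
# The concatenation order is an assumption, not confirmed by this document.
import hashlib
import time

def gen_token(file_id: str, secret_key: str, ts: int) -> str:
    raw = (file_id + secret_key + str(ts)).encode("utf-8")
    return hashlib.md5(raw).hexdigest()

# file_id here is assumed to be the remote filename without the group name
ts = int(time.time())
token = gen_token("M00/00/00/example.jpg", "FastDFS1234567890", ts)
print(f"?token={token}&ts={ts}")   # query string appended to the download URL
```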
# if support multi regions for HTTP Range
# default value is true
http.multi_range.enabed = true

Some files were not shown because too many files have changed in this diff.