
[Edit] Make sure to check my much more advanced example
The ESP8266 System-on-Chip has recently come out of nowhere and has been taking the DIY IoT world by storm. It is a Wi-Fi capable chip that has obsoleted overnight a number of similar products on the market, and the price factor alone makes it quite attractive.
The most common usage for the chip so far has been to interface it to an external MCU and use the AT command set available in the default firmware, but that is quite an abuse of the power under the hood of the ESP8266. I decided to skip the external MCU and use the chip itself for a proof-of-concept project:
The project shall be able to:
1. Be configurable through a web interface
2. Provide a web UI for switching on/off a LED connected to a GPIO pin
3. Provide a web UI for reading a temperature+humidity sensor (DHT22)
There are a few different versions of the breakout boards available; the most popular one is the ESP-01. However, that version only has GPIO0 and GPIO2 routed to the header, so I also purchased the ESP-03, which has all the pins available. I made a couple of interface strip-board PCBs so that I can connect the modules to a "pure" 3.3 V FTDI cable:
The capacitor provides extra juice during active Wi-Fi operations: the FTDI cable can only supply ~50mA, while Wi-Fi bursts may be in the area of ~200mA. I used a 1000uF capacitor on the PCBs, but a 470uF also worked with no issues.
The SMD version is more interesting, as more of the GPIO pins are available:
To program new firmware on the ESP-03, the following pin connections must be made:
1. CH_PD to VCC
2. GPIO02 to VCC
3. GPIO00 to GND
4. GPIO15 to GND
For a normal start, tie CH_PD to VCC and GPIO15 to GND.
Thanks to the great community around the chip, this project is possible with minimal effort. The project published by Sprite_tm is an incredible piece of art that allows you to run an HTTP server with simple dynamic pages on the chip. One of the challenges with embedded systems is the difficulty of connection and miscellaneous configuration; the project overcomes this by providing a web UI for managing the settings.
If, upon startup, the chip cannot connect to a Wi-Fi hotspot using the saved credentials, it automatically activates Access Point mode and you will see an open network named "ESP_XXXXXX", where XXXXXX are the last 6 digits of the ESP's MAC address:
You can connect to that open AP and navigate to http://192.168.4.1 to scan for wifi networks and enter the connection password:
WiFi settings page:
The password will be saved, and from now on the module will automatically connect to that network. You don't have to do that, though: all the other functions are fully accessible without the module being connected to the Internet, and I can think of at least a dozen use cases where that could be useful. For my particular project, however, I need the module to be reachable over the Internet.
Once connected to a network, you'll probably be wondering what the IP address of the module is. The module uses DHCP, so its address will vary. You can set up a static IP lease if your router allows it, or find the IP address every time you need it.
I use the following linux command to find the IP address of the ESP8266 modules connected to my network:
sudo arp-scan --retry 7 --quiet --localnet --interface=wlan0 | grep -s -i 18:fe:34
Navigating to the IP address of the module opens the same UI we were seeing before, same functionality as well. Below are the LED control and DHT22 sensor reading pages:
I have a LED connected to GPIO13, but that could be a relay for example.
The DHT22 page is a simple HTML page, but it could just as well be a JSON string polled periodically by a http://freeboard.io/ dashboard or a Node-RED flow, for example. The module could also be set to push the readings to a pre-configured URL, with a UI to configure the destination URL, push frequency and other settings: all on my to-do list. DHT22 code by .
MQTT support on the ESP8266 is only a matter of time, and web-configurable settings plus MQTT will make the module an excellent choice for a number of home automation tasks.
On the weaknesses side is power consumption: during active Wi-Fi operations the chip can draw up to 250mA, making battery operation quite challenging. I'll probably stick to my existing nodes for low-power battery operated tasks; those last more than a year on a single AAA battery with a boost regulator.
The application source code is available here: ; it includes the binaries, so you can flash those directly without having to set up the SDK environment:
sudo ./esptool.py --port /dev/ttyUSB0 write_flash 0x00000 firmware/0x00000.bin
sudo ./esptool.py --port /dev/ttyUSB0 write_flash 0x40000 firmware/0x40000.bin
sudo ./esptool.py --port /dev/ttyUSB0 write_flash 0x12000 webpages.espfs
In conclusion, I'd say this chip is a game changer. I love it and will be using it a lot in my next home automation projects.
Environment: two hosts, client and master.
Kafka is installed on both client and master.
zookeeper-server is configured on master.
The Kafka config file server.properties is identical on client and master (with zookeeper.connect=Host0:2181 configured), except for broker.id:
broker.id=1 on client, broker.id=0 on master.
Kafka is started on both client and master: bin/kafka-server-start.sh config/server.properties
The terminal output shows the client's log is normal, but master has a problem, as follows:
[ 10:02:41,078] INFO Reconnect due to socket error: java.nio.channels.ClosedChannelException (kafka.consumer.SimpleConsumer)
[ 10:02:42,078] WARN [ReplicaFetcherThread-0-1], Error in fetch Name: FetchRequest; Version: 0; CorrelationId: 2100; ClientId: ReplicaFetcherThread-0-1; ReplicaId: 0; MaxWait: 500; MinBytes: 1; RequestInfo: [1,0] -> PartitionFetchInfo(0,1048576),[1,2] -> PartitionFetchInfo(0,1048576),[replicated1,1] -> PartitionFetchInfo(0,1048576). Possible cause: java.nio.channels.ClosedChannelException (kafka.server.ReplicaFetcherThread)
The master's terminal keeps repeating the error above. What is the cause, and how should I fix it?
Create topic 1: bin/kafka-topics.sh --create --zookeeper Host0:2181 --replication-factor 2 --partitions 3 --topic 1
Describe topic 1: bin/kafka-topics.sh --describe --zookeeper Host0:2181 --topic 1
Topic: 1    PartitionCount: 3    ReplicationFactor: 2
    Partition: 0    Replicas: 1,0
    Partition: 1    Replicas: 0,1
    Partition: 2    Replicas: 1,0
When I simulate a producer and a consumer on client, messages flow normally:
Producer: [root@client kafka_2.10-0.8.2.0]# bin/kafka-console-producer.sh --broker-list client:9092 --topic 1
Consumer: bin/kafka-console-consumer.sh --zookeeper Host0:2181 --from-beginning --topic 1
Messages are delivered fine.
However, with client as the producer and master as the consumer, master receives no messages and reports errors. The error messages are as follows:
Producer on client: bin/kafka-console-producer.sh --broker-list client:9092 --topic 1
Consumer on master: bin/kafka-console-consumer.sh --zookeeper Host0:2181 --from-beginning --topic 1
[ 10:03:28,723] WARN Fetching topic metadata with correlation id 76 for topics [Set(1)] from broker [id:1,host:client,port:9092] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
[ 10:03:30,731] WARN [console-consumer-17327_Host0-0-91c6332d-leader-finder-thread], Failed to add leader for partitions [1,0],[1,1],[1,2]; will retry (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:78)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127)
at kafka.consumer.SimpleConsumer.earliestOrLatestOffset(SimpleConsumer.scala:166)
at kafka.consumer.ConsumerFetcherThread.handleOffsetOutOfRange(ConsumerFetcherThread.scala:60)
at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:177)
at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:172)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.immutable.Map$Map2.foreach(Map.scala:130)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.server.AbstractFetcherThread.addPartitions(AbstractFetcherThread.scala:172)
at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:87)
at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:77)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.immutable.Map$Map2.foreach(Map.scala:130)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:77)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:95)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
[ 10:03:31,943] WARN Fetching topic metadata with correlation id 77 for topics [Set(1)] from broker [id:1,host:client,port:9092] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
[ 10:03:33,986] WARN [console-consumer-17327_Host0-0-91c6332d-leader-finder-thread], Failed to add leader for partitions [1,0],[1,1],[1,2]; will retry (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:78)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127)
at kafka.consumer.SimpleConsumer.earliestOrLatestOffset(SimpleConsumer.scala:166)
at kafka.consumer.ConsumerFetcherThread.handleOffsetOutOfRange(ConsumerFetcherThread.scala:60)
at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:177)
at kafka.server.AbstractFetcherThread$$anonfun$addPartitions$2.apply(AbstractFetcherThread.scala:172)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.immutable.Map$Map2.foreach(Map.scala:130)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.server.AbstractFetcherThread.addPartitions(AbstractFetcherThread.scala:172)
at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:87)
at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:77)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.immutable.Map$Map2.foreach(Map.scala:130)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:77)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:95)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
How do I resolve the above? Thanks.
It worked fine when I tested it, with Host0 as the consumer.
After I logged in, I found no Kafka service running on the machine.
Use the command bin/kafka-server-start.sh config/server.properties &
Be sure to append the background operator "&";
without it, the service shuts down when the terminal window is closed.
With it, the Kafka service stays up even after I log out; you can log in and see it running normally.
The error messages show the connection is closed; run jps -ml to check whether the Kafka service is actually running.
When starting the service, please append the background operator &, e.g.:
Start Kafka on both client and master: bin/kafka-server-start.sh config/server.properties &
Can I log into your environment?
[Tech discussion] How to use the mongodb driver correctly - CNode
On the Node.js platform there are two commonly used MongoDB drivers: mongodb and mongoose. The mongodb driver's API closely mirrors the database's native operations, which makes it the first choice for hello-world style demo code, and therefore the first MongoDB driver most Node.js beginners encounter. Sadly, while learning how to connect to MongoDB, beginners are also quietly absorbing the database habits of that demo-grade hello-world code.
cat test.js

var server_options = {};
var db_options = {
    w: -1, // w=-1 is mandatory since mongodb driver 1.2, see the official API docs
    doDebug: true,
    debug: function(msg, obj) {
        console.log('[debug]', msg);
    },
    log: function(msg, obj) {
        console.log('[log]', msg);
    },
    error: function(msg, obj) {
        console.log('[error]', msg);
    }
};
var mongodb = require("mongodb"),
    mongoserver = new mongodb.Server('localhost', 27017, server_options),
    db = new mongodb.Db('test', mongoserver, db_options);

function test_save() {
    // the outer db and the callback's db are the same object
    db.open(function(err, db) {
        if (err) return console.error(err);
        console.log('* mongodb connected');
        db.collection('foo').save({test: 1}, function(err, result) {
            console.log(result);
            db.close();
        });
    });
}
test_save();
This is the familiar, ubiquitous MongoDB hello-world: set the connection options, open, access a collection, close. The only difference is that, to reveal what actually happens behind the code, I added a logger to db_options.
Running node test.js prints on the client side:
[debug] opened connection
[debug] opened connection
[debug] opened connection
[debug] opened connection
[debug] opened connection
[debug] writing command to mongodb
* mongodb connected
[debug] writing command to mongodb
{ test: 1, _id: f6ad00c000001 }
[debug] closed connection
[debug] closed connection
[debug] closed connection
[debug] closed connection
[debug] closed connection
And the server-side mongod log shows:
Mon May 13 12:54:33 [initandlisten] connection accepted from 127.0.0.1:2815 #51
Mon May 13 12:54:33 [initandlisten] connection accepted from 127.0.0.1:2816 #52
Mon May 13 12:54:33 [initandlisten] connection accepted from 127.0.0.1:2817 #53
Mon May 13 12:54:33 [initandlisten] connection accepted from 127.0.0.1:2818 #54
Mon May 13 12:54:33 [initandlisten] connection accepted from 127.0.0.1:2819 #55
Mon May 13 12:54:33 [conn51] end connection 127.0.0.1:2815
Mon May 13 12:54:33 [conn52] end connection 127.0.0.1:2816
Mon May 13 12:54:33 [conn53] end connection 127.0.0.1:2817
Mon May 13 12:54:33 [conn54] end connection 127.0.0.1:2818
Mon May 13 12:54:33 [conn55] end connection 127.0.0.1:2819
Both the client and server logs show that db.open opened 5 connections, not the single connection most people would expect. Why? Because the server_options passed to mongoserver = new mongodb.Server('localhost', 27017, server_options) has a poolSize option whose default value is 5 (see the docs). The db object is not just a middleman talking to MongoDB; it is also a connection pool. With default settings, the hello-world code opens a pool of 5 connections and then closes that pool. For a run-and-quit demo this flow is fine, but in an HTTP server scenario it becomes a big problem: opening 5 database connections and then closing 5 database connections on every HTTP request has an obvious performance cost, and, worse, the open-and-close flow leads to a latent concurrency error.
cat server_1.js

var server_options = {};
var db_options = {w: -1};
var mongodb = require("mongodb"),
    mongoserver = new mongodb.Server('localhost', 27017, server_options),
    db = new mongodb.Db('test', mongoserver, db_options);
var http = require('http');
var server = http.createServer(function(req, res) {
    db.open(function(err, db) {
        if (err) return console.error(err);
        console.log('* mongodb connected');
        db.collection('foo').save({test: 1}, function(err, result) {
            res.end(JSON.stringify(result, null, 2));
            db.close();
        });
    });
});
server.listen(8080, function() {
    console.log('server listen to %d', this.address().port);
    setTimeout(function() {
        //http.get('http://localhost:8080', function(res) {console.log('request ok')});
        //http.get('http://localhost:8080', function(res) {console.log('request ok')});
    }, 1000);
});
Run node server_1.js and a single browser request works fine, but run an ab-style concurrency test, or uncomment the two http.get lines near the end, and the trouble starts:
c:\nodejs\node_modules\mongodb\lib\mongodb\db.js:224
        throw new Error("db object already connecting, open cannot be called multiple times");
Error: db object already connecting, open cannot be called multiple times
Think of the db object as a door: open is opening the door, and only after the door is open can you read the books (data) in the room. After request 1 opens the door, request 2 arrives and also wants to open it, but cannot, because request 1 has not yet closed it (db.close); the door is still in the "open" state. In fact, request 2 has no need to open the door at all; it can simply follow request 1 in. The root of the error is trying to open a door that is already open. How to fix? Easy: change the behavior from "open and close" to "open once and reuse anywhere". Call db.open once at program startup, access the database directly on every HTTP request, and throw away the redundant db.open/db.close pair.
cat server_2.js

var server_options = {'auto_reconnect': true, poolSize: 5};
var db_options = {w: -1};
var mongodb = require("mongodb"),
    mongoserver = new mongodb.Server('localhost', 27017, server_options),
    db = new mongodb.Db('test', mongoserver, db_options);
db.open(function(err, db) {
    console.log('mongodb connected');
});
var http = require('http');
var server = http.createServer(function(req, res) {
    db.collection('foo').save({test: 1}, function(err, result) {
        res.end(JSON.stringify(result, null, 2));
    });
});
server.listen(8080, function() {
    console.log('server listen to %d', this.address().port);
    setTimeout(function() {
        http.get('http://localhost:8080', function(res) {console.log('request ok')});
        http.get('http://localhost:8080', function(res) {console.log('request ok')});
    }, 1000);
});
After this change the error is gone, but another latent problem is introduced: when concurrency exceeds 5, requests block, because only 5 underlying database connections are available at any time.
=================================== divider ===================================
In a real application, referencing the db object directly is not a good idea. By default poolSize=5, which means a concurrency of only 5. To raise concurrency, crank poolSize up to 10? 20? 50? 100? NO: what we need is a pool that adjusts its connection count dynamically, one that satisfies the connection demand of peak periods but also releases idle connections during quiet ones, rather than holding a fixed number of connections the way mongodb's built-in pool does. So what, reinvent the wheel? No; reuse the existing pooling module generic-pool.
cat server_3.js

var http = require('http'),
    mongodb = require("mongodb"),
    poolModule = require('generic-pool');
var pool = poolModule.Pool({
    name: 'mongodb',
    create: function(callback) {
        var server_options = {'auto_reconnect': false, poolSize: 1};
        var db_options = {w: -1};
        var mongoserver = new mongodb.Server('localhost', 27017, server_options);
        var db = new mongodb.Db('test', mongoserver, db_options);
        db.open(function(err, db) {
            if (err) return callback(err);
            callback(null, db);
        });
    },
    destroy: function(db) { db.close(); },
    max: 10, // set according to the application's likely peak concurrency
    idleTimeoutMillis: 30000,
    log: false
});
var server = http.createServer(function(req, res) {
    pool.acquire(function(err, db) {
        if (err) {
            res.statusCode = 500;
            return res.end(JSON.stringify(err, null, 2));
        }
        db.collection('foo').save({test: 1}, function(err, result) {
            res.end(JSON.stringify(result, null, 2));
            pool.release(db);
        });
    });
});
server.listen(8080, function() {
    console.log('server listen to %d', this.address().port);
    setTimeout(function() {
        http.get('http://localhost:8080', function(res) {console.log('request ok')});
        http.get('http://localhost:8080', function(res) {console.log('request ok')});
    }, 1000);
});
With poolSize set to 1, each db object is responsible for exactly one underlying database connection, so generic-pool indirectly controls the number of actual database connections by controlling the number of db objects. If poolSize kept its default of 5, then 1 db = 5 connections, yet each HTTP request actually uses only 1 of them; the other 4 sit idle, wasting resources and slowing down responses.
Note 1: all MongoDB settings in this article are the defaults of a local installation.
Note 2: when you have many business objects to model, using the mongoose driver is a good idea; its built-in connection pool is more practical than mongodb's.
Note 3: since version 1.2, the officially recommended way to connect is MongoClient, which collapses the connection settings into a single URL (see the docs). mongodb is already at 1.3, so it's about time everyone switched.
Taking the generic-pool initialization above as an example, compare the old and new connection styles:
Old connection style:
var pool = poolModule.Pool({
    name: 'mongodb',
    create: function(callback) {
        var server_options = {'auto_reconnect': false, poolSize: 1};
        var db_options = {w: -1};
        var mongoserver = new mongodb.Server('localhost', 27017, server_options);
        var db = new mongodb.Db('test', mongoserver, db_options);
        db.open(function(err, db) {
            if (err) return callback(err);
            callback(null, db);
        });
    },
    //......more code here
});
New connection style:
var pool = poolModule.Pool({
    name: 'mongodb',
    create: function(callback) {
        mongodb.MongoClient.connect('mongodb://localhost/test', {
            server: {poolSize: 1}
        }, function(err, db) {
            callback(err, db);
        });
    },
    //more code here
});
Good article, bookmarked. Especially the connection pool part; many people ask about it on the forum.
Absolutely great stuff, saved.
I ran into this problem on a project a while ago and rolled my own object pool. It was only a few lines of code, though.
Nice, a high-quality article~
Same idea; just swap the create function in the options. You can also read my other article about it.
Wrapping it in generic-pool feels like overkill; you only gain the idleTimeoutMillis parameter while adding complexity.
There's a library for that~
Great article, perfect for a newbie like me, haha.
Good article, thanks to the poster above for bumping it.
T17:13:40.171+0800 [clientcursormon] connections:15
T17:16:48.734+0800 [initandlisten] connection accepted from 127.0.0.1: ( connections now open)
T17:16:50.453+0800 [initandlisten] connection accepted from 127.0.0.1: ( connections now open)
T17:16:52.078+0800 [initandlisten] connection accepted from 127.0.0.1: ( connections now open)
My question: with this approach, does the number of connections just keep growing until it reaches the peak concurrency? When do connections get closed? Newbie looking for an answer!
I benchmarked this with the method you gave and found something very puzzling!
With the pool's poolSize set to 10, inserting 1,000,000 documents took 78472 ms. With poolSize set to 1, inserting 1,000,000 documents took 53454 ms.
In other words, 1 connection was actually faster than 10! So what is the pool even for????
Below is the test code, run on Linux:
var server_options = {'auto_reconnect': true, poolSize: 1};
console.log("poolSize:" + server_options.poolSize);
var db_options = {w: -1};
var mongodb = require("mongodb"),
    mongoserver = new mongodb.Server('localhost', 27017, server_options),
    db = new mongodb.Db('test', mongoserver, db_options);
var time_start;
db.open(function(err, db) {
    console.log('mongodb connected');
    function a(x) {
        db.collection('foo').save({test: 1}, function(err, result) {
            if (x == 1000000) {
                var now1 = new Date();
                var time_end = now1.getTime();
                console.log('diff:' + (time_end - time_start));
            }
        });
    }
    setTimeout(function() {
        var now = new Date();
        time_start = now.getTime();
        for (var i = 1; i <= 1000000; i++) a(i);
    }, 1000);
});
One more question: how can multiple JS files share a single connection?
I hit this problem today. With generic-pool I used the first style, creating the mongo connection at use time, and once the pool reached its min count I got Error: db object already connecting, open cannot be called multiple times. Once the pool's minimum connection count is reached, shouldn't it automatically call create and return a new connection? The error seems to say we are reusing the same database connection. Hoping for an answer T_T
mongodb already has its own pool. Since it is a pool, don't open and close it repeatedly.
Finally figured out why today, tks.
The official docs all seem to use MongoClient now; so there's no need for an external pool anymore?