.
+- Click **Create Issue** and provide as much information as possible about the issue type and how to reproduce it.
+
+Bug reports in JIRA for all driver projects (i.e. NODE, PYTHON, CSHARP, JAVA) and the
+Core Server (i.e. SERVER) project are **public**.
+
+### Questions and Bug Reports
+
+ * mailing list: https://groups.google.com/forum/#!forum/node-mongodb-native
+ * jira: http://jira.mongodb.org/
+
+### Change Log
+
+http://jira.mongodb.org/browse/NODE
+
+# Installation
+
+The recommended way to get started using the Node.js 2.0 driver is to use `npm` (the Node Package Manager) to install the dependency in your project.
+
+## MongoDB Driver
+
+Given that you have created your own project using `npm init`, we install the mongodb driver and its dependencies by executing the following `npm` command.
+
+```
+npm install mongodb --save
+```
+
+This will download the MongoDB driver and add a dependency entry in your `package.json` file.
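For example, after the command completes, the dependencies section of your `package.json` should contain an entry similar to the following (the exact version specifier depends on the driver version npm resolved):

```
"dependencies": {
  "mongodb": "~2.0"
}
```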
+
+## Troubleshooting
+
+The MongoDB driver depends on several other packages. These are:
+
+* mongodb-core
+* bson
+* kerberos
+* node-gyp
+
+The `kerberos` package is a C++ extension that requires a build environment to be installed on your system. You must be able to build Node.js itself in order to compile and install the `kerberos` module. Furthermore, the `kerberos` module requires the MIT Kerberos package to compile correctly on UNIX operating systems. Consult your UNIX operating system's package manager to find out which libraries to install.
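As a sketch, on a Debian or Ubuntu system the required build tools and MIT Kerberos headers can typically be installed with something like the following (package names are assumptions and vary by distribution):

```
# build-essential provides gcc, g++ and make; libkrb5-dev provides
# the MIT Kerberos headers needed to compile the kerberos extension
sudo apt-get install build-essential python libkrb5-dev
```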
+
+{{% note class="important" %}}
+Windows already contains the SSPI API used for Kerberos authentication. However, you will need to install a full compiler tool chain using Visual Studio C++ to correctly install the `kerberos` extension.
+{{% /note %}}
+
+### Diagnosing on UNIX
+
+If you don't have the build essentials, it won't build. On Linux you will need gcc, g++, Node.js with all the headers, and Python. The easiest way to figure out what's missing is to try to build the `kerberos` project yourself. You can do this by performing the following steps.
+
+```
+git clone https://github.com/christkv/kerberos.git
+cd kerberos
+npm install
+```
+
+If all the steps complete, you have the right toolchain installed. If you get a `node-gyp` not found error, you need to install it globally with the following command.
+
+```
+npm install -g node-gyp
+```
+
+If it correctly compiles and runs the tests, you are golden. We can now try to install the mongodb driver by performing the following commands.
+
+```
+cd yourproject
+npm install mongodb --save
+```
+
+If it still fails, the next step is to examine the npm log. Rerun the command, this time in verbose mode.
+
+```
+npm --loglevel verbose install mongodb
+```
+
+This will print out all the steps npm is performing while trying to install the module.
+
+### Diagnosing on Windows
+
+A compiler tool chain known to work for compiling `kerberos` on Windows is the following:
+
+* Visual Studio C++ 2010 (do not use higher versions)
+* Windows 7 64bit SDK
+* Python 2.7 or higher
+
+Open the Visual Studio command prompt. Ensure `node.exe` is in your path and install `node-gyp`.
+
+```
+npm install -g node-gyp
+```
+
+Next, you will have to build the project manually to test it. Grab the repo using whatever tool you use with git.
+
+```
+git clone https://github.com/christkv/kerberos.git
+cd kerberos
+npm install
+node-gyp rebuild
+```
+
+This should rebuild the driver successfully if you have everything set up correctly.
+
+### Other possible issues
+
+Your Python installation might be broken, causing gyp to fail. I always recommend that you test your deployment environment first by trying to build Node.js itself on the server in question, as this should unearth any issues with broken packages (and there are a lot of broken packages out there).
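For example, building Node.js from source exercises the same toolchain that the driver's native modules need (the repository URL below reflects the current GitHub location, and the build can take a long time):

```
git clone https://github.com/nodejs/node.git
cd node
./configure
make
```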
+
+Also ensure that your user has write permission to the directory where the node modules are being installed.
+
+QuickStart
+==========
+The quick start guide will show you how to set up a simple application using Node.js and MongoDB. Its scope is limited to setting up the driver and performing simple CRUD operations. For more in-depth coverage we encourage reading the tutorials.
+
+Create the package.json file
+----------------------------
+Let's create a directory where our application will live. In our case we will put this under our projects directory.
+
+```
+mkdir myproject
+cd myproject
+```
+
+Enter the following command and answer the questions to create the initial structure for your new project.
+
+```
+npm init
+```
+
+Next we need to edit the generated package.json file to add the dependency for the MongoDB driver. The package.json file below is just an example; yours will look different depending on how you answered the questions after entering `npm init`.
+
+```
+{
+ "name": "myproject",
+ "version": "1.0.0",
+ "description": "My first project",
+ "main": "index.js",
+ "repository": {
+ "type": "git",
+ "url": "git://github.com/christkv/myfirstproject.git"
+ },
+ "dependencies": {
+ "mongodb": "~2.0"
+ },
+ "author": "Christian Kvalheim",
+ "license": "Apache 2.0",
+ "bugs": {
+ "url": "https://github.com/christkv/myfirstproject/issues"
+ },
+ "homepage": "https://github.com/christkv/myfirstproject"
+}
+```
+
+Save the file and return to the shell or command prompt and use **NPM** to install all the dependencies.
+
+```
+npm install
+```
+
+You should see **NPM** download a lot of files. Once it's done you'll find all the downloaded packages under the **node_modules** directory.
+
+Booting up a MongoDB Server
+---------------------------
+Let's boot up a MongoDB server instance. Download the right MongoDB version from [MongoDB](http://www.mongodb.org), open a new shell or command line and ensure the **mongod** command is in the shell or command line path. Now let's create a database directory (in our case under **/data**).
+
+```
+mongod --dbpath=/data --port 27017
+```
+
+You should see the **mongod** process start up and print some status information.
+
+Connecting to MongoDB
+---------------------
+Let's create a new **app.js** file that we will use to show the basic CRUD operations using the MongoDB driver.
+
+First let's add code to connect to the server and the database **myproject**.
+
+```js
+var MongoClient = require('mongodb').MongoClient
+ , assert = require('assert');
+
+// Connection URL
+var url = 'mongodb://localhost:27017/myproject';
+// Use connect method to connect to the Server
+MongoClient.connect(url, function(err, db) {
+ assert.equal(null, err);
+ console.log("Connected correctly to server");
+
+ db.close();
+});
+```
+
+Given that you booted up the **mongod** process earlier, the application should connect successfully and print **Connected correctly to server** to the console.
+
+Let's add some code to show the different CRUD operations available.
+
+Inserting a Document
+--------------------
+Let's create a function that will insert some documents for us.
+
+```js
+var insertDocuments = function(db, callback) {
+ // Get the documents collection
+ var collection = db.collection('documents');
+ // Insert some documents
+ collection.insertMany([
+ {a : 1}, {a : 2}, {a : 3}
+ ], function(err, result) {
+ assert.equal(err, null);
+ assert.equal(3, result.result.n);
+ assert.equal(3, result.ops.length);
+ console.log("Inserted 3 documents into the document collection");
+ callback(result);
+ });
+}
+```
+
+The insert command will return a result object that contains several fields that might be useful.
+
+* **result** Contains the result document from MongoDB
+* **ops** Contains the documents inserted with added **_id** fields
+* **connection** Contains the connection used to perform the insert
+
+Let's add a call to the **insertDocuments** function in the **MongoClient.connect** method callback.
+
+```js
+var MongoClient = require('mongodb').MongoClient
+ , assert = require('assert');
+
+// Connection URL
+var url = 'mongodb://localhost:27017/myproject';
+// Use connect method to connect to the Server
+MongoClient.connect(url, function(err, db) {
+ assert.equal(null, err);
+ console.log("Connected correctly to server");
+
+ insertDocuments(db, function() {
+ db.close();
+ });
+});
+```
+
+We can now run the updated **app.js** file.
+
+```
+node app.js
+```
+
+You should see the following output after running the **app.js** file.
+
+```
+Connected correctly to server
+Inserted 3 documents into the document collection
+```
+
+Updating a document
+-------------------
+Let's look at how to do a simple document update by adding a new field **b** to the document that has the field **a** set to **2**.
+
+```js
+var updateDocument = function(db, callback) {
+ // Get the documents collection
+ var collection = db.collection('documents');
+ // Update document where a is 2, set b equal to 1
+ collection.updateOne({ a : 2 }
+ , { $set: { b : 1 } }, function(err, result) {
+ assert.equal(err, null);
+ assert.equal(1, result.result.n);
+ console.log("Updated the document with the field a equal to 2");
+ callback(result);
+ });
+}
+```
+
+The method will update the first document where the field **a** is equal to **2** by adding a new field **b** to the document set to **1**. Let's update the callback function from **MongoClient.connect** to include the update method.
+
+```js
+var MongoClient = require('mongodb').MongoClient
+ , assert = require('assert');
+
+// Connection URL
+var url = 'mongodb://localhost:27017/myproject';
+// Use connect method to connect to the Server
+MongoClient.connect(url, function(err, db) {
+ assert.equal(null, err);
+ console.log("Connected correctly to server");
+
+ insertDocuments(db, function() {
+ updateDocument(db, function() {
+ db.close();
+ });
+ });
+});
+```
+
+Delete a document
+-----------------
+Next, let's delete the document where the field **a** equals **3**.
+
+```js
+var deleteDocument = function(db, callback) {
+ // Get the documents collection
+ var collection = db.collection('documents');
+  // Delete the document where a is 3
+ collection.deleteOne({ a : 3 }, function(err, result) {
+ assert.equal(err, null);
+ assert.equal(1, result.result.n);
+ console.log("Removed the document with the field a equal to 3");
+ callback(result);
+ });
+}
+```
+
+This will delete the first document where the field **a** equals **3**. Let's add the method to the **MongoClient.connect** callback function.
+
+```js
+var MongoClient = require('mongodb').MongoClient
+ , assert = require('assert');
+
+// Connection URL
+var url = 'mongodb://localhost:27017/myproject';
+// Use connect method to connect to the Server
+MongoClient.connect(url, function(err, db) {
+ assert.equal(null, err);
+ console.log("Connected correctly to server");
+
+ insertDocuments(db, function() {
+ updateDocument(db, function() {
+ deleteDocument(db, function() {
+ db.close();
+ });
+ });
+ });
+});
+```
+
+Finally let's retrieve all the documents using a simple find.
+
+Find All Documents
+------------------
+We will finish up the Quickstart CRUD methods by performing a simple query that returns all the documents matching the query.
+
+```js
+var findDocuments = function(db, callback) {
+ // Get the documents collection
+ var collection = db.collection('documents');
+ // Find some documents
+ collection.find({}).toArray(function(err, docs) {
+ assert.equal(err, null);
+ assert.equal(2, docs.length);
+ console.log("Found the following records");
+ console.dir(docs);
+ callback(docs);
+ });
+}
+```
+
+This query will return all the documents in the **documents** collection. Since we deleted a document, the total number of documents returned is **2**. Finally, let's add the **findDocuments** method to the **MongoClient.connect** callback.
+
+```js
+var MongoClient = require('mongodb').MongoClient
+ , assert = require('assert');
+
+// Connection URL
+var url = 'mongodb://localhost:27017/myproject';
+// Use connect method to connect to the Server
+MongoClient.connect(url, function(err, db) {
+ assert.equal(null, err);
+ console.log("Connected correctly to server");
+
+ insertDocuments(db, function() {
+ updateDocument(db, function() {
+ deleteDocument(db, function() {
+ findDocuments(db, function() {
+ db.close();
+ });
+ });
+ });
+ });
+});
+```
+
+This concludes the QuickStart of connecting and performing some basic operations using the MongoDB Node.js driver. For more detailed information you can look at the tutorials, which cover more specific topics of interest.
+
+## Next Steps
+
+ * [MongoDB Documentation](http://mongodb.org/)
+ * [Read about Schemas](http://learnmongodbthehardway.com/)
+ * [Star us on GitHub](https://github.com/mongodb/node-mongodb-native)
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/boot_auth.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/boot_auth.js
new file mode 100644
index 0000000..95956c7
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/boot_auth.js
@@ -0,0 +1,52 @@
+var ReplSetManager = require('mongodb-topology-manager').ReplSet,
+ f = require('util').format;
+
+var rsOptions = {
+ server: {
+ keyFile: __dirname + '/test/functional/data/keyfile.txt',
+ auth: null,
+ replSet: 'rs'
+ },
+ client: {
+ replSet: 'rs'
+ }
+}
+
+// Set up the nodes
+var nodes = [{
+ options: {
+ bind_ip: 'localhost', port: 31000,
+ dbpath: f('%s/../db/31000', __dirname),
+ }
+}, {
+ options: {
+ bind_ip: 'localhost', port: 31001,
+ dbpath: f('%s/../db/31001', __dirname),
+ }
+}, {
+ options: {
+ bind_ip: 'localhost', port: 31002,
+ dbpath: f('%s/../db/31002', __dirname),
+ }
+}]
+
+// Merge in any node start up options
+for(var i = 0; i < nodes.length; i++) {
+ for(var name in rsOptions.server) {
+ nodes[i].options[name] = rsOptions.server[name];
+ }
+}
+
+// Create a manager
+var replicasetManager = new ReplSetManager('mongod', nodes, rsOptions.client);
+// Purge the set
+replicasetManager.purge().then(function() {
+ // Start the server
+ replicasetManager.start().then(function() {
+ process.exit(0);
+ }).catch(function(e) {
+ console.log("====== ")
+ console.dir(e)
+ // // console.dir(e);
+ });
+});
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/conf.json b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/conf.json
new file mode 100644
index 0000000..aba0b7a
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/conf.json
@@ -0,0 +1,73 @@
+{
+ "plugins": ["plugins/markdown", "docs/lib/jsdoc/examples_plugin.js"],
+ "source": {
+ "include": [
+ "test/functional/operation_example_tests.js",
+ "test/functional/operation_promises_example_tests.js",
+ "test/functional/operation_generators_example_tests.js",
+ "test/functional/authentication_tests.js",
+ "test/functional/gridfs_stream_tests.js",
+ "lib/admin.js",
+ "lib/collection.js",
+ "lib/cursor.js",
+ "lib/aggregation_cursor.js",
+ "lib/command_cursor.js",
+ "lib/db.js",
+ "lib/mongo_client.js",
+ "lib/mongos.js",
+ "lib/read_preference.js",
+ "lib/replset.js",
+ "lib/server.js",
+ "lib/bulk/common.js",
+ "lib/bulk/ordered.js",
+ "lib/bulk/unordered.js",
+ "lib/gridfs/grid_store.js",
+ "node_modules/mongodb-core/lib/error.js",
+ "lib/gridfs-stream/index.js",
+ "lib/gridfs-stream/download.js",
+ "lib/gridfs-stream/upload.js",
+ "node_modules/mongodb-core/lib/connection/logger.js",
+ "node_modules/bson/lib/bson/binary.js",
+ "node_modules/bson/lib/bson/code.js",
+ "node_modules/bson/lib/bson/db_ref.js",
+ "node_modules/bson/lib/bson/double.js",
+ "node_modules/bson/lib/bson/long.js",
+ "node_modules/bson/lib/bson/objectid.js",
+ "node_modules/bson/lib/bson/symbol.js",
+ "node_modules/bson/lib/bson/timestamp.js",
+ "node_modules/bson/lib/bson/max_key.js",
+ "node_modules/bson/lib/bson/min_key.js"
+ ]
+ },
+ "templates": {
+ "cleverLinks": true,
+ "monospaceLinks": true,
+ "default": {
+ "outputSourceFiles" : true
+ },
+ "applicationName": "Node.js MongoDB Driver API",
+ "disqus": true,
+ "googleAnalytics": "UA-29229787-1",
+ "openGraph": {
+ "title": "",
+ "type": "website",
+ "image": "",
+ "site_name": "",
+ "url": ""
+ },
+ "meta": {
+ "title": "",
+ "description": "",
+ "keyword": ""
+ },
+ "linenums": true
+ },
+ "markdown": {
+ "parser": "gfm",
+ "hardwrap": true,
+ "tags": ["examples"]
+ },
+ "examples": {
+ "indent": 4
+ }
+}
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/index.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/index.js
new file mode 100644
index 0000000..5808750
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/index.js
@@ -0,0 +1,55 @@
+// Core module
+var core = require('mongodb-core'),
+ Instrumentation = require('./lib/apm');
+
+// Set up the connect function
+var connect = require('./lib/mongo_client').connect;
+
+// Expose error class
+connect.MongoError = core.MongoError;
+
+// Actual driver classes exported
+connect.Admin = require('./lib/admin');
+connect.MongoClient = require('./lib/mongo_client');
+connect.Db = require('./lib/db');
+connect.Collection = require('./lib/collection');
+connect.Server = require('./lib/server');
+connect.ReplSet = require('./lib/replset');
+connect.Mongos = require('./lib/mongos');
+connect.ReadPreference = require('./lib/read_preference');
+connect.GridStore = require('./lib/gridfs/grid_store');
+connect.Chunk = require('./lib/gridfs/chunk');
+connect.Logger = core.Logger;
+connect.Cursor = require('./lib/cursor');
+connect.GridFSBucket = require('./lib/gridfs-stream');
+// Exported to be used in tests not to be used anywhere else
+connect.CoreServer = require('mongodb-core').Server;
+connect.CoreConnection = require('mongodb-core').Connection;
+
+// BSON types exported
+connect.Binary = core.BSON.Binary;
+connect.Code = core.BSON.Code;
+connect.Map = core.BSON.Map;
+connect.DBRef = core.BSON.DBRef;
+connect.Double = core.BSON.Double;
+connect.Int32 = core.BSON.Int32;
+connect.Long = core.BSON.Long;
+connect.MinKey = core.BSON.MinKey;
+connect.MaxKey = core.BSON.MaxKey;
+connect.ObjectID = core.BSON.ObjectID;
+connect.ObjectId = core.BSON.ObjectID;
+connect.Symbol = core.BSON.Symbol;
+connect.Timestamp = core.BSON.Timestamp;
+connect.Decimal128 = core.BSON.Decimal128;
+
+// Add connect method
+connect.connect = connect;
+
+// Set up the instrumentation method
+connect.instrument = function(options, callback) {
+ if(typeof options == 'function') callback = options, options = {};
+ return new Instrumentation(core, options, callback);
+}
+
+// Set our exports to be the connect function
+module.exports = connect;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/insert_bench.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/insert_bench.js
new file mode 100644
index 0000000..5c4d0b9
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/insert_bench.js
@@ -0,0 +1,231 @@
+var MongoClient = require('./').MongoClient,
+ assert = require('assert');
+
+// var memwatch = require('memwatch-next');
+// memwatch.on('leak', function(info) {
+// console.log("======== leak")
+// });
+//
+// memwatch.on('stats', function(stats) {
+// console.log("======== stats")
+// console.dir(stats)
+// });
+
+// // Take first snapshot
+// var hd = new memwatch.HeapDiff();
+
+MongoClient.connect('mongodb://localhost:27017/bench', function(err, db) {
+ var docs = [];
+ var total = 1000;
+ var count = total;
+ var measurements = [];
+
+ // Insert a bunch of documents
+ for(var i = 0; i < 100; i++) {
+ docs.push(JSON.parse(data));
+ }
+
+ var col = db.collection('inserts');
+
+ function execute(col, callback) {
+ var start = new Date().getTime();
+
+ col.find({}).limit(100).toArray(function(e, docs) {
+ measurements.push(new Date().getTime() - start);
+ assert.equal(null, e);
+ callback();
+ });
+ }
+
+ console.log("== insert documents")
+ col.insert(docs, function(e, r) {
+ docs = [];
+ assert.equal(null, e);
+
+ console.log("== start bench")
+ for(var i = 0; i < total; i++) {
+ execute(col, function(e) {
+ count = count - 1;
+
+ if(count == 0) {
+ // Calculate total execution time for operations
+ var totalTime = measurements.reduce(function(prev, curr) {
+ return prev + curr;
+ }, 0);
+
+ console.log("===========================================");
+ console.log("total time: " + totalTime)
+
+ // var diff = hd.end();
+ // console.log("===========================================");
+ // console.log(JSON.stringify(diff, null, 2))
+
+ db.close();
+ process.exit(0)
+ }
+ });
+ }
+ });
+});
+
+var data = JSON.stringify({
+ "data": [
+ {
+ "_id": 1,
+ "x": 11
+ },
+ {
+ "_id": 2,
+ "x": 22
+ },
+ {
+ "_id": 3,
+ "x": 33
+ }
+ ],
+ "collection_name": "test",
+ "database_name": "command-monitoring-tests",
+ "tests": [
+ {
+ "description": "A successful mixed bulk write",
+ "operation": {
+ "name": "bulkWrite",
+ "arguments": {
+ "requests": [
+ {
+ "insertOne": {
+ "document": {
+ "_id": 4,
+ "x": 44
+ }
+ }
+ },
+ {
+ "updateOne": {
+ "filter": {
+ "_id": 3
+ },
+ "update": {
+ "set": {
+ "x": 333
+ }
+ }
+ }
+ }
+ ]
+ }
+ },
+ "expectations": [
+ {
+ "command_started_event": {
+ "command": {
+ "insert": "test",
+ "documents": [
+ {
+ "_id": 4,
+ "x": 44
+ }
+ ],
+ "ordered": true
+ },
+ "command_name": "insert",
+ "database_name": "command-monitoring-tests"
+ }
+ },
+ {
+ "command_succeeded_event": {
+ "reply": {
+ "ok": 1.0,
+ "n": 1
+ },
+ "command_name": "insert"
+ }
+ },
+ {
+ "command_started_event": {
+ "command": {
+ "update": "test",
+ "updates": [
+ {
+ "q": {
+ "_id": 3
+ },
+ "u": {
+ "set": {
+ "x": 333
+ }
+ },
+ "upsert": false,
+ "multi": false
+ }
+ ],
+ "ordered": true
+ },
+ "command_name": "update",
+ "database_name": "command-monitoring-tests"
+ }
+ },
+ {
+ "command_succeeded_event": {
+ "reply": {
+ "ok": 1.0,
+ "n": 1
+ },
+ "command_name": "update"
+ }
+ }
+ ]
+ },
+ {
+ "description": "A successful unordered bulk write with an unacknowledged write concern",
+ "operation": {
+ "name": "bulkWrite",
+ "arguments": {
+ "requests": [
+ {
+ "insertOne": {
+ "document": {
+ "_id": 4,
+ "x": 44
+ }
+ }
+ }
+ ],
+ "ordered": false,
+ "writeConcern": {
+ "w": 0
+ }
+ }
+ },
+ "expectations": [
+ {
+ "command_started_event": {
+ "command": {
+ "insert": "test",
+ "documents": [
+ {
+ "_id": 4,
+ "x": 44
+ }
+ ],
+ "ordered": false,
+ "writeConcern": {
+ "w": 0
+ }
+ },
+ "command_name": "insert",
+ "database_name": "command-monitoring-tests"
+ }
+ },
+ {
+ "command_succeeded_event": {
+ "reply": {
+ "ok": 1.0
+ },
+ "command_name": "insert"
+ }
+ }
+ ]
+ }
+ ]
+});
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/admin.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/admin.js
new file mode 100644
index 0000000..528d582
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/admin.js
@@ -0,0 +1,581 @@
+"use strict";
+
+var toError = require('./utils').toError,
+ Define = require('./metadata'),
+ shallowClone = require('./utils').shallowClone;
+
+/**
+ * @fileOverview The **Admin** class is an internal class that allows convenient access to
+ * the admin functionality and commands for MongoDB.
+ *
+ * **ADMIN Cannot directly be instantiated**
+ * @example
+ * var MongoClient = require('mongodb').MongoClient,
+ * test = require('assert');
+ * // Connection url
+ * var url = 'mongodb://localhost:27017/test';
+ * // Connect using MongoClient
+ * MongoClient.connect(url, function(err, db) {
+ * // Use the admin database for the operation
+ * var adminDb = db.admin();
+ *
+ * // List all the available databases
+ * adminDb.listDatabases(function(err, dbs) {
+ * test.equal(null, err);
+ * test.ok(dbs.databases.length > 0);
+ * db.close();
+ * });
+ * });
+ */
+
+/**
+ * Create a new Admin instance (INTERNAL TYPE, do not instantiate directly)
+ * @class
+ * @return {Admin} a collection instance.
+ */
+var Admin = function(db, topology, promiseLibrary) {
+ if(!(this instanceof Admin)) return new Admin(db, topology);
+ var self = this;
+
+ // Internal state
+ this.s = {
+ db: db
+ , topology: topology
+ , promiseLibrary: promiseLibrary
+ }
+}
+
+var define = Admin.define = new Define('Admin', Admin, false);
+
+/**
+ * The callback format for results
+ * @callback Admin~resultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {object} result The result object if the command was executed successfully.
+ */
+
+/**
+ * Execute a command
+ * @method
+ * @param {object} command The command hash
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {number} [options.maxTimeMS=null] Number of milliseconds to wait before aborting the query.
+ * @param {Admin~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.command = function(command, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 1);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ options = args.length ? args.shift() : {};
+
+ // Execute using callback
+ if(typeof callback == 'function') return this.s.db.executeDbAdminCommand(command, options, function(err, doc) {
+ return callback != null ? callback(err, doc) : null;
+ });
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ self.s.db.executeDbAdminCommand(command, options, function(err, doc) {
+ if(err) return reject(err);
+ resolve(doc);
+ });
+ });
+}
+
+define.classMethod('command', {callback: true, promise:true});
+
+/**
+ * Retrieve the server information for the current
+ * instance of the db client
+ *
+ * @param {Admin~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.buildInfo = function(callback) {
+ var self = this;
+ // Execute using callback
+ if(typeof callback == 'function') return this.serverInfo(callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ self.serverInfo(function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('buildInfo', {callback: true, promise:true});
+
+/**
+ * Retrieve the server information for the current
+ * instance of the db client
+ *
+ * @param {Admin~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.serverInfo = function(callback) {
+ var self = this;
+ // Execute using callback
+ if(typeof callback == 'function') return this.s.db.executeDbAdminCommand({buildinfo:1}, function(err, doc) {
+ if(err != null) return callback(err, null);
+ callback(null, doc);
+ });
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ self.s.db.executeDbAdminCommand({buildinfo:1}, function(err, doc) {
+ if(err) return reject(err);
+ resolve(doc);
+ });
+ });
+}
+
+define.classMethod('serverInfo', {callback: true, promise:true});
+
+/**
+ * Retrieve this db's server status.
+ *
+ * @param {Admin~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.serverStatus = function(callback) {
+ var self = this;
+
+ // Execute using callback
+ if(typeof callback == 'function') return serverStatus(self, callback)
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ serverStatus(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+var serverStatus = function(self, callback) {
+ self.s.db.executeDbAdminCommand({serverStatus: 1}, function(err, doc) {
+ if(err == null && doc.ok === 1) {
+ callback(null, doc);
+ } else {
+ if(err) return callback(err, false);
+ return callback(toError(doc), false);
+ }
+ });
+}
+
+define.classMethod('serverStatus', {callback: true, promise:true});
+
+/**
+ * Retrieve the current profiling Level for MongoDB
+ *
+ * @param {Admin~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.profilingLevel = function(callback) {
+ var self = this;
+
+ // Execute using callback
+ if(typeof callback == 'function') return profilingLevel(self, callback)
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ profilingLevel(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+var profilingLevel = function(self, callback) {
+ self.s.db.executeDbAdminCommand({profile:-1}, function(err, doc) {
+ doc = doc;
+
+ if(err == null && doc.ok === 1) {
+ var was = doc.was;
+ if(was == 0) return callback(null, "off");
+ if(was == 1) return callback(null, "slow_only");
+ if(was == 2) return callback(null, "all");
+ return callback(new Error("Error: illegal profiling level value " + was), null);
+ } else {
+ err != null ? callback(err, null) : callback(new Error("Error with profile command"), null);
+ }
+ });
+}
+
+define.classMethod('profilingLevel', {callback: true, promise:true});
+
+/**
+ * Ping the MongoDB server and retrieve results
+ *
+ * @param {Admin~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.ping = function(options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 0);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+
+ // Execute using callback
+ if(typeof callback == 'function') return this.s.db.executeDbAdminCommand({ping: 1}, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ self.s.db.executeDbAdminCommand({ping: 1}, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('ping', {callback: true, promise:true});
+
+/**
+ * Authenticate a user against the server.
+ * @method
+ * @param {string} username The username.
+ * @param {string} [password] The password.
+ * @param {Admin~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.authenticate = function(username, password, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = shallowClone(options);
+ options.authdb = 'admin';
+
+ // Execute using callback
+ if(typeof callback == 'function') return this.s.db.authenticate(username, password, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ self.s.db.authenticate(username, password, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('authenticate', {callback: true, promise:true});
+
+/**
+ * Logout user from server, fire off on all connections and remove all auth info
+ * @method
+ * @param {Admin~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.logout = function(callback) {
+ var self = this;
+ // Execute using callback
+ if(typeof callback == 'function') return this.s.db.logout({dbName: 'admin'}, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ self.s.db.logout({dbName: 'admin'}, function(err, r) {
+ if(err) return reject(err);
+ resolve(true);
+ });
+ });
+}
+
+define.classMethod('logout', {callback: true, promise:true});
+
+// Get write concern
+var writeConcern = function(options, db) {
+ options = shallowClone(options);
+
+ // If options already contain write concerns return it
+ if(options.w || options.wtimeout || options.j || options.fsync) {
+ return options;
+ }
+
+ // Set db write concern if available
+ if(db.writeConcern) {
+ if(db.writeConcern.w != null) options.w = db.writeConcern.w;
+ if(db.writeConcern.wtimeout != null) options.wtimeout = db.writeConcern.wtimeout;
+ if(db.writeConcern.j != null) options.j = db.writeConcern.j;
+ if(db.writeConcern.fsync != null) options.fsync = db.writeConcern.fsync;
+ }
+
+ // Return modified options
+ return options;
+}
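The helper above clones the options and only falls back to the db-level write concern when the caller supplied no write concern fields of their own. A minimal standalone sketch of that precedence (the name `mergeWriteConcern` is illustrative, not a driver export):

```javascript
// Illustrative sketch of the write concern fallback: explicit option fields
// win; otherwise db-level write concern fields are copied onto a clone.
function mergeWriteConcern(options, db) {
  var merged = {};
  for (var key in options) merged[key] = options[key];

  // If the caller supplied any write concern field, return the clone as-is
  if (merged.w || merged.wtimeout || merged.j || merged.fsync) return merged;

  // Otherwise fall back to the db-level write concern, field by field
  if (db.writeConcern) {
    if (db.writeConcern.w != null) merged.w = db.writeConcern.w;
    if (db.writeConcern.wtimeout != null) merged.wtimeout = db.writeConcern.wtimeout;
    if (db.writeConcern.j != null) merged.j = db.writeConcern.j;
    if (db.writeConcern.fsync != null) merged.fsync = db.writeConcern.fsync;
  }

  return merged;
}
```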
+
+/**
+ * Add a user to the database.
+ * @method
+ * @param {string} username The username.
+ * @param {string} password The password.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.fsync=false] Specify a file sync write concern.
+ * @param {object} [options.customData=null] Custom data associated with the user (MongoDB 2.6 or higher only)
+ * @param {object[]} [options.roles=null] Roles associated with the created user (MongoDB 2.6 or higher only)
+ * @param {Admin~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.addUser = function(username, password, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 2);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ options = args.length ? args.shift() : {};
+ options = options || {};
+ // Get the options
+ options = writeConcern(options, self.s.db)
+ // Set the db name to admin
+ options.dbName = 'admin';
+
+ // Execute using callback
+ if(typeof callback == 'function')
+ return self.s.db.addUser(username, password, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ self.s.db.addUser(username, password, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('addUser', {callback: true, promise:true});
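`addUser`, `removeUser` and `validateCollection` all normalize their trailing arguments the same way: pop a possible callback, push it back if it is not a function, then shift a possible options object. A standalone sketch of that pattern (the function name is illustrative):

```javascript
// Illustrative sketch of the optional-argument handling used by addUser and
// friends: both the options object and the callback may be omitted.
function normalizeArgs(argsLike) {
  var args = Array.prototype.slice.call(argsLike, 0);
  var callback = args.pop();
  // Last argument was not a function: there is no callback, put it back
  if (typeof callback !== 'function') {
    args.push(callback);
    callback = null;
  }
  var options = args.length ? args.shift() : {};
  return { options: options || {}, callback: callback };
}
```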
+
+/**
+ * Remove a user from a database
+ * @method
+ * @param {string} username The username.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.fsync=false] Specify a file sync write concern.
+ * @param {Admin~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.removeUser = function(username, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 1);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ options = args.length ? args.shift() : {};
+ options = options || {};
+ // Get the options
+ options = writeConcern(options, self.s.db)
+ // Set the db name
+ options.dbName = 'admin';
+
+ // Execute using callback
+ if(typeof callback == 'function')
+ return self.s.db.removeUser(username, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ self.s.db.removeUser(username, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('removeUser', {callback: true, promise:true});
+
+/**
+ * Set the current profiling level of MongoDB
+ *
+ * @param {string} level The new profiling level (off, slow_only, all).
+ * @param {Admin~resultCallback} [callback] The command result callback.
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.setProfilingLevel = function(level, callback) {
+ var self = this;
+
+ // Execute using callback
+ if(typeof callback == 'function') return setProfilingLevel(self, level, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ setProfilingLevel(self, level, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+var setProfilingLevel = function(self, level, callback) {
+ var command = {};
+ var profile = 0;
+
+ if(level == "off") {
+ profile = 0;
+ } else if(level == "slow_only") {
+ profile = 1;
+ } else if(level == "all") {
+ profile = 2;
+ } else {
+ return callback(new Error("Error: illegal profiling level value " + level));
+ }
+
+ // Set up the profile number
+ command['profile'] = profile;
+
+ self.s.db.executeDbAdminCommand(command, function(err, doc) {
+
+ if(err == null && doc.ok === 1)
+ return callback(null, level);
+ return err != null ? callback(err, null) : callback(new Error("Error with profile command"), null);
+ });
+}
+
+define.classMethod('setProfilingLevel', {callback: true, promise:true});
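`setProfilingLevel` accepts exactly three level strings and maps them to the numeric value sent in the `{profile: n}` admin command. The mapping can be sketched on its own (the function name is illustrative):

```javascript
// Illustrative mapping from setProfilingLevel's level strings to the numeric
// value sent in the {profile: n} admin command; any other string is an error.
function profileNumber(level) {
  if (level === 'off') return 0;
  if (level === 'slow_only') return 1;
  if (level === 'all') return 2;
  throw new Error('Error: illegal profiling level value ' + level);
}
```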
+
+/**
+ * Retrieve the current profiling information for MongoDB
+ *
+ * @param {Admin~resultCallback} [callback] The command result callback.
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.profilingInfo = function(callback) {
+ var self = this;
+
+ // Execute using callback
+ if(typeof callback == 'function') return profilingInfo(self, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ profilingInfo(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+var profilingInfo = function(self, callback) {
+ try {
+ self.s.topology.cursor("admin.system.profile", { find: 'system.profile', query: {}}, {}).toArray(callback);
+ } catch (err) {
+ return callback(err, null);
+ }
+}
+
+define.classMethod('profilingInfo', {callback: true, promise:true});
+
+/**
+ * Validate an existing collection
+ *
+ * @param {string} collectionName The name of the collection to validate.
+ * @param {object} [options=null] Optional settings.
+ * @param {Admin~resultCallback} [callback] The command result callback.
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.validateCollection = function(collectionName, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 1);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ options = args.length ? args.shift() : {};
+ options = options || {};
+
+ // Execute using callback
+ if(typeof callback == 'function')
+ return validateCollection(self, collectionName, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ validateCollection(self, collectionName, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+var validateCollection = function(self, collectionName, options, callback) {
+ var command = {validate: collectionName};
+ var keys = Object.keys(options);
+
+ // Decorate command with extra options
+ for(var i = 0; i < keys.length; i++) {
+ if(options.hasOwnProperty(keys[i])) {
+ command[keys[i]] = options[keys[i]];
+ }
+ }
+
+ self.s.db.command(command, function(err, doc) {
+ if(err != null) return callback(err, null);
+
+ if(doc.ok === 0)
+ return callback(new Error("Error with validate command"), null);
+ if(doc.result != null && doc.result.constructor != String)
+ return callback(new Error("Error with validation data"), null);
+ if(doc.result != null && doc.result.match(/exception|corrupt/) != null)
+ return callback(new Error("Error: invalid collection " + collectionName), null);
+ if(doc.valid != null && !doc.valid)
+ return callback(new Error("Error: invalid collection " + collectionName), null);
+
+ return callback(null, doc);
+ });
+}
+
+define.classMethod('validateCollection', {callback: true, promise:true});
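The result checks in `validateCollection`'s helper can be exercised without a server. An illustrative standalone version of the interpretation logic, returning an `Error` or `null` when the collection passes:

```javascript
// Illustrative standalone version of the validate-result checks: ok must be 1,
// a string result must not mention exception/corrupt, and valid (when present)
// must be truthy.
function interpretValidate(doc, collectionName) {
  if (doc.ok === 0) return new Error('Error with validate command');
  if (doc.result != null && doc.result.constructor !== String)
    return new Error('Error with validation data');
  if (doc.result != null && doc.result.match(/exception|corrupt/) != null)
    return new Error('Error: invalid collection ' + collectionName);
  if (doc.valid != null && !doc.valid)
    return new Error('Error: invalid collection ' + collectionName);
  return null; // document passed every check
}
```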
+
+/**
+ * List the available databases
+ *
+ * @param {Admin~resultCallback} [callback] The command result callback.
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.listDatabases = function(callback) {
+ var self = this;
+ // Execute using callback
+ if(typeof callback == 'function') return self.s.db.executeDbAdminCommand({listDatabases:1}, {}, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ self.s.db.executeDbAdminCommand({listDatabases:1}, {}, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('listDatabases', {callback: true, promise:true});
+
+/**
+ * Get ReplicaSet status
+ *
+ * @param {Admin~resultCallback} [callback] The command result callback.
+ * @return {Promise} returns Promise if no callback passed
+ */
+Admin.prototype.replSetGetStatus = function(callback) {
+ var self = this;
+ // Execute using callback
+ if(typeof callback == 'function') return replSetGetStatus(self, callback);
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ replSetGetStatus(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+var replSetGetStatus = function(self, callback) {
+ self.s.db.executeDbAdminCommand({replSetGetStatus:1}, function(err, doc) {
+ if(err == null && doc.ok === 1)
+ return callback(null, doc);
+ if(err) return callback(err, false);
+ callback(toError(doc), false);
+ });
+}
+
+define.classMethod('replSetGetStatus', {callback: true, promise:true});
+
+module.exports = Admin;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/aggregation_cursor.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/aggregation_cursor.js
new file mode 100644
index 0000000..7546ee3
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/aggregation_cursor.js
@@ -0,0 +1,444 @@
+"use strict";
+
+var inherits = require('util').inherits
+ , f = require('util').format
+ , toError = require('./utils').toError
+ , getSingleProperty = require('./utils').getSingleProperty
+ , formattedOrderClause = require('./utils').formattedOrderClause
+ , handleCallback = require('./utils').handleCallback
+ , Logger = require('mongodb-core').Logger
+ , EventEmitter = require('events').EventEmitter
+ , ReadPreference = require('./read_preference')
+ , MongoError = require('mongodb-core').MongoError
+ , Readable = require('stream').Readable || require('readable-stream').Readable
+ , Define = require('./metadata')
+ , CoreCursor = require('./cursor')
+ , Query = require('mongodb-core').Query;
+
+/**
+ * @fileOverview The **AggregationCursor** class is an internal class that embodies an aggregation cursor on MongoDB
+ * allowing for iteration over the results returned from the underlying query. It supports
+ * one-by-one document iteration, conversion to an array, or iteration as a Node 0.10.x
+ * or higher stream
+ *
+ * **AggregationCursor cannot be instantiated directly**
+ * @example
+ * var MongoClient = require('mongodb').MongoClient,
+ * test = require('assert');
+ * // Connection url
+ * var url = 'mongodb://localhost:27017/test';
+ * // Connect using MongoClient
+ * MongoClient.connect(url, function(err, db) {
+ * // Create a collection we want to drop later
+ * var col = db.collection('createIndexExample1');
+ * // Insert a bunch of documents
+ * col.insert([{a:1, b:1}
+ * , {a:2, b:2}, {a:3, b:3}
+ * , {a:4, b:4}], {w:1}, function(err, result) {
+ * test.equal(null, err);
+ * // Aggregate over all the documents
+ * col.aggregate([], {cursor: {}}).toArray(function(err, items) {
+ * test.equal(null, err);
+ * test.equal(4, items.length);
+ * db.close();
+ * });
+ * });
+ * });
+ */
+
+/**
+ * The Readable class provided by Node.js's stream module.
+ * @external Readable
+ */
+
+/**
+ * Creates a new Aggregation Cursor instance (INTERNAL TYPE, do not instantiate directly)
+ * @class AggregationCursor
+ * @extends external:Readable
+ * @fires AggregationCursor#data
+ * @fires AggregationCursor#end
+ * @fires AggregationCursor#close
+ * @fires AggregationCursor#readable
+ * @return {AggregationCursor} an AggregationCursor instance.
+ */
+var AggregationCursor = function(bson, ns, cmd, options, topology, topologyOptions) {
+ CoreCursor.apply(this, Array.prototype.slice.call(arguments, 0));
+ var self = this;
+ var state = AggregationCursor.INIT;
+ var streamOptions = {};
+
+ // MaxTimeMS
+ var maxTimeMS = null;
+
+ // Get the promiseLibrary
+ var promiseLibrary = options.promiseLibrary;
+
+ // No promise library selected fall back
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // Set up
+ Readable.call(this, {objectMode: true});
+
+ // Internal state
+ this.s = {
+ // MaxTimeMS
+ maxTimeMS: maxTimeMS
+ // State
+ , state: state
+ // Stream options
+ , streamOptions: streamOptions
+ // BSON
+ , bson: bson
+ // Namespace
+ , ns: ns
+ // Command
+ , cmd: cmd
+ // Options
+ , options: options
+ // Topology
+ , topology: topology
+ // Topology Options
+ , topologyOptions: topologyOptions
+ // Promise library
+ , promiseLibrary: promiseLibrary
+ }
+}
+
+/**
+ * AggregationCursor stream data event, fired for each document in the cursor.
+ *
+ * @event AggregationCursor#data
+ * @type {object}
+ */
+
+/**
+ * AggregationCursor stream end event
+ *
+ * @event AggregationCursor#end
+ * @type {null}
+ */
+
+/**
+ * AggregationCursor stream close event
+ *
+ * @event AggregationCursor#close
+ * @type {null}
+ */
+
+/**
+ * AggregationCursor stream readable event
+ *
+ * @event AggregationCursor#readable
+ * @type {null}
+ */
+
+// Inherit from Readable
+inherits(AggregationCursor, Readable);
+
+// Set the methods to inherit from prototype
+var methodsToInherit = ['_next', 'next', 'each', 'forEach', 'toArray'
+ , 'rewind', 'bufferedCount', 'readBufferedDocuments', 'close', 'isClosed', 'kill'
+ , '_find', '_getmore', '_killcursor', 'isDead', 'explain', 'isNotified'];
+
+// Extend the Cursor
+for(var name in CoreCursor.prototype) {
+ AggregationCursor.prototype[name] = CoreCursor.prototype[name];
+}
+
+var define = AggregationCursor.define = new Define('AggregationCursor', AggregationCursor, true);
+
+/**
+ * Set the batch size for the cursor.
+ * @method
+ * @param {number} value The batchSize for the cursor.
+ * @throws {MongoError}
+ * @return {AggregationCursor}
+ */
+AggregationCursor.prototype.batchSize = function(value) {
+ if(this.s.state == AggregationCursor.CLOSED || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true });
+ if(typeof value != 'number') throw MongoError.create({message: "batchSize requires an integer", driver:true });
+ if(this.s.cmd.cursor) this.s.cmd.cursor.batchSize = value;
+ this.setCursorBatchSize(value);
+ return this;
+}
+
+define.classMethod('batchSize', {callback: false, promise:false, returns: [AggregationCursor]});
+
+/**
+ * Add a geoNear stage to the aggregation pipeline
+ * @method
+ * @param {object} document The geoNear stage document.
+ * @return {AggregationCursor}
+ */
+AggregationCursor.prototype.geoNear = function(document) {
+ this.s.cmd.pipeline.push({$geoNear: document});
+ return this;
+}
+
+define.classMethod('geoNear', {callback: false, promise:false, returns: [AggregationCursor]});
+
+/**
+ * Add a group stage to the aggregation pipeline
+ * @method
+ * @param {object} document The group stage document.
+ * @return {AggregationCursor}
+ */
+AggregationCursor.prototype.group = function(document) {
+ this.s.cmd.pipeline.push({$group: document});
+ return this;
+}
+
+define.classMethod('group', {callback: false, promise:false, returns: [AggregationCursor]});
+
+/**
+ * Add a limit stage to the aggregation pipeline
+ * @method
+ * @param {number} value The limit value.
+ * @return {AggregationCursor}
+ */
+AggregationCursor.prototype.limit = function(value) {
+ this.s.cmd.pipeline.push({$limit: value});
+ return this;
+}
+
+define.classMethod('limit', {callback: false, promise:false, returns: [AggregationCursor]});
+
+/**
+ * Add a match stage to the aggregation pipeline
+ * @method
+ * @param {object} document The match stage document.
+ * @return {AggregationCursor}
+ */
+AggregationCursor.prototype.match = function(document) {
+ this.s.cmd.pipeline.push({$match: document});
+ return this;
+}
+
+define.classMethod('match', {callback: false, promise:false, returns: [AggregationCursor]});
+
+/**
+ * Add a maxTimeMS stage to the aggregation pipeline
+ * @method
+ * @param {number} value The maxTimeMS value.
+ * @return {AggregationCursor}
+ */
+AggregationCursor.prototype.maxTimeMS = function(value) {
+ if(this.s.topology.lastIsMaster().minWireVersion > 2) {
+ this.s.cmd.maxTimeMS = value;
+ }
+ return this;
+}
+
+define.classMethod('maxTimeMS', {callback: false, promise:false, returns: [AggregationCursor]});
+
+/**
+ * Add an out stage to the aggregation pipeline
+ * @method
+ * @param {string} destination The destination collection name.
+ * @return {AggregationCursor}
+ */
+AggregationCursor.prototype.out = function(destination) {
+ this.s.cmd.pipeline.push({$out: destination});
+ return this;
+}
+
+define.classMethod('out', {callback: false, promise:false, returns: [AggregationCursor]});
+
+/**
+ * Add a project stage to the aggregation pipeline
+ * @method
+ * @param {object} document The project stage document.
+ * @return {AggregationCursor}
+ */
+AggregationCursor.prototype.project = function(document) {
+ this.s.cmd.pipeline.push({$project: document});
+ return this;
+}
+
+define.classMethod('project', {callback: false, promise:false, returns: [AggregationCursor]});
+
+/**
+ * Add a lookup stage to the aggregation pipeline
+ * @method
+ * @param {object} document The lookup stage document.
+ * @return {AggregationCursor}
+ */
+AggregationCursor.prototype.lookup = function(document) {
+ this.s.cmd.pipeline.push({$lookup: document});
+ return this;
+}
+
+define.classMethod('lookup', {callback: false, promise:false, returns: [AggregationCursor]});
+
+/**
+ * Add a redact stage to the aggregation pipeline
+ * @method
+ * @param {object} document The redact stage document.
+ * @return {AggregationCursor}
+ */
+AggregationCursor.prototype.redact = function(document) {
+ this.s.cmd.pipeline.push({$redact: document});
+ return this;
+}
+
+define.classMethod('redact', {callback: false, promise:false, returns: [AggregationCursor]});
+
+/**
+ * Add a skip stage to the aggregation pipeline
+ * @method
+ * @param {number} value The skip value.
+ * @return {AggregationCursor}
+ */
+AggregationCursor.prototype.skip = function(value) {
+ this.s.cmd.pipeline.push({$skip: value});
+ return this;
+}
+
+define.classMethod('skip', {callback: false, promise:false, returns: [AggregationCursor]});
+
+/**
+ * Add a sort stage to the aggregation pipeline
+ * @method
+ * @param {object} document The sort stage document.
+ * @return {AggregationCursor}
+ */
+AggregationCursor.prototype.sort = function(document) {
+ this.s.cmd.pipeline.push({$sort: document});
+ return this;
+}
+
+define.classMethod('sort', {callback: false, promise:false, returns: [AggregationCursor]});
+
+/**
+ * Add an unwind stage to the aggregation pipeline
+ * @method
+ * @param {string} field The field path to unwind.
+ * @return {AggregationCursor}
+ */
+AggregationCursor.prototype.unwind = function(field) {
+ this.s.cmd.pipeline.push({$unwind: field});
+ return this;
+}
+
+define.classMethod('unwind', {callback: false, promise:false, returns: [AggregationCursor]});
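Every stage method above follows the same shape: push one `{$stage: value}` document onto `this.s.cmd.pipeline` and return `this` so calls chain. A minimal standalone sketch of that builder pattern (not the driver class itself):

```javascript
// Minimal sketch of the fluent stage-builder pattern used by AggregationCursor:
// each call appends one {$stage: value} document and returns the builder.
function PipelineSketch() {
  this.pipeline = [];
}
['match', 'group', 'sort', 'limit', 'skip', 'project', 'unwind'].forEach(function (name) {
  PipelineSketch.prototype[name] = function (value) {
    var stage = {};
    stage['$' + name] = value;
    this.pipeline.push(stage);
    return this; // enable chaining
  };
});
```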
+
+AggregationCursor.prototype.get = AggregationCursor.prototype.toArray;
+
+// Inherited methods
+define.classMethod('toArray', {callback: true, promise:true});
+define.classMethod('each', {callback: true, promise:false});
+define.classMethod('forEach', {callback: true, promise:false});
+define.classMethod('next', {callback: true, promise:true});
+define.classMethod('close', {callback: true, promise:true});
+define.classMethod('isClosed', {callback: false, promise:false, returns: [Boolean]});
+define.classMethod('rewind', {callback: false, promise:false});
+define.classMethod('bufferedCount', {callback: false, promise:false, returns: [Number]});
+define.classMethod('readBufferedDocuments', {callback: false, promise:false, returns: [Array]});
+
+/**
+ * Get the next available document from the cursor, returns null if no more documents are available.
+ * @function AggregationCursor.prototype.next
+ * @param {AggregationCursor~resultCallback} [callback] The result callback.
+ * @throws {MongoError}
+ * @return {Promise} returns Promise if no callback passed
+ */
+
+/**
+ * The callback format for results
+ * @callback AggregationCursor~toArrayResultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {object[]} documents All the documents that satisfy the cursor.
+ */
+
+/**
+ * Returns an array of documents. The caller is responsible for making sure that there
+ * is enough memory to store the results. Note that the array only contains partial
+ * results when this cursor had been previously accessed. In that case,
+ * cursor.rewind() can be used to reset the cursor.
+ * @method AggregationCursor.prototype.toArray
+ * @param {AggregationCursor~toArrayResultCallback} [callback] The result callback.
+ * @throws {MongoError}
+ * @return {Promise} returns Promise if no callback passed
+ */
+
+/**
+ * The callback format for results
+ * @callback AggregationCursor~resultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {(object|null)} result The result object if the command was executed successfully.
+ */
+
+/**
+ * Iterates over all the documents for this cursor. As with **{cursor.toArray}**,
+ * not all of the elements will be iterated if this cursor had been previously accessed.
+ * In that case, **{cursor.rewind}** can be used to reset the cursor. However, unlike
+ * **{cursor.toArray}**, the cursor will only hold a maximum of batch size elements
+ * at any given time if batch size is specified. Otherwise, the caller is responsible
+ * for making sure that the entire result can fit the memory.
+ * @method AggregationCursor.prototype.each
+ * @param {AggregationCursor~resultCallback} callback The result callback.
+ * @throws {MongoError}
+ * @return {null}
+ */
+
+/**
+ * Close the cursor, sending a killCursors command and emitting close.
+ * @method AggregationCursor.prototype.close
+ * @param {AggregationCursor~resultCallback} [callback] The result callback.
+ * @return {Promise} returns Promise if no callback passed
+ */
+
+/**
+ * Is the cursor closed
+ * @method AggregationCursor.prototype.isClosed
+ * @return {boolean}
+ */
+
+/**
+ * Execute the explain for the cursor
+ * @method AggregationCursor.prototype.explain
+ * @param {AggregationCursor~resultCallback} [callback] The result callback.
+ * @return {Promise} returns Promise if no callback passed
+ */
+
+/**
+ * Clone the cursor
+ * @function AggregationCursor.prototype.clone
+ * @return {AggregationCursor}
+ */
+
+/**
+ * Resets the cursor
+ * @function AggregationCursor.prototype.rewind
+ * @return {AggregationCursor}
+ */
+
+/**
+ * The callback format for the forEach iterator method
+ * @callback AggregationCursor~iteratorCallback
+ * @param {Object} doc An emitted document for the iterator
+ */
+
+/**
+ * The callback error format for the forEach iterator method
+ * @callback AggregationCursor~endCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ */
+
+/*
+ * Iterates over all the documents for this cursor using the iterator, callback pattern.
+ * @method AggregationCursor.prototype.forEach
+ * @param {AggregationCursor~iteratorCallback} iterator The iteration callback.
+ * @param {AggregationCursor~endCallback} callback The end callback.
+ * @throws {MongoError}
+ * @return {null}
+ */
+
+AggregationCursor.INIT = 0;
+AggregationCursor.OPEN = 1;
+AggregationCursor.CLOSED = 2;
+
+module.exports = AggregationCursor;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/apm.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/apm.js
new file mode 100644
index 0000000..3810c3f
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/apm.js
@@ -0,0 +1,613 @@
+var EventEmitter = require('events').EventEmitter,
+ inherits = require('util').inherits;
+
+// Get prototypes
+var AggregationCursor = require('./aggregation_cursor'),
+ CommandCursor = require('./command_cursor'),
+ OrderedBulkOperation = require('./bulk/ordered').OrderedBulkOperation,
+ UnorderedBulkOperation = require('./bulk/unordered').UnorderedBulkOperation,
+ GridStore = require('./gridfs/grid_store'),
+ Server = require('./server'),
+ ReplSet = require('./replset'),
+ Mongos = require('./mongos'),
+ Cursor = require('./cursor'),
+ Collection = require('./collection'),
+ Db = require('./db'),
+ Admin = require('./admin');
+
+var basicOperationIdGenerator = {
+ operationId: 1,
+
+ next: function() {
+ return this.operationId++;
+ }
+}
+
+var basicTimestampGenerator = {
+ current: function() {
+ return new Date().getTime();
+ },
+
+ duration: function(start, end) {
+ return end - start;
+ }
+}
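Both default generators above are plain objects, so callers can swap in their own via `options.operationIdGenerator` and `options.timestampGenerator` as long as the same shape is kept. A quick check of the expected contract:

```javascript
// The contract the instrumentation expects from replacement generators:
// next() yields a fresh operation id per call; duration() measures elapsed time.
var customIdGenerator = {
  operationId: 1,
  next: function () { return this.operationId++; }
};
var customTimestampGenerator = {
  current: function () { return new Date().getTime(); },
  duration: function (start, end) { return end - start; }
};
```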
+
+var senstiveCommands = ['authenticate', 'saslstart', 'saslcontinue', 'getnonce',
+  'createuser', 'updateuser', 'copydbgetnonce', 'copydbsaslstart', 'copydb'];
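Membership in this list must be tested with `indexOf(...) != -1`: `indexOf` returns `-1` (truthy) on a miss and `0` (falsy) for the first element, so a bare truthiness test inverts the logic. An illustrative correct check over a subset of the list:

```javascript
// Correct membership test for a sensitive-command list: indexOf returns -1 on
// a miss (truthy!) and 0 for the first element (falsy!), so compare to -1.
var sensitiveList = ['authenticate', 'saslstart', 'saslcontinue', 'getnonce'];
function isSensitiveCommand(commandName) {
  return sensitiveList.indexOf(commandName.toLowerCase()) !== -1;
}
```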
+
+var Instrumentation = function(core, options, callback) {
+ options = options || {};
+
+ // Optional id generators
+ var operationIdGenerator = options.operationIdGenerator || basicOperationIdGenerator;
+ // Optional timestamp generator
+ var timestampGenerator = options.timestampGenerator || basicTimestampGenerator;
+ // Extend with event emitter functionality
+ EventEmitter.call(this);
+
+ // Contains all the instrumentation overloads
+ this.overloads = [];
+
+ // ---------------------------------------------------------
+ //
+ // Instrument prototype
+ //
+ // ---------------------------------------------------------
+
+ var instrumentPrototype = function(callback) {
+ var instrumentations = []
+
+ // Classes to support
+ var classes = [GridStore, OrderedBulkOperation, UnorderedBulkOperation,
+ CommandCursor, AggregationCursor, Cursor, Collection, Db];
+
+ // Add instrumentations to the available list
+ for(var i = 0; i < classes.length; i++) {
+ if(classes[i].define) {
+ instrumentations.push(classes[i].define.generate());
+ }
+ }
+
+ // Return the list of instrumentation points
+ callback(null, instrumentations);
+ }
+
+ // Did the user want to instrument the prototype
+ if(typeof callback == 'function') {
+ instrumentPrototype(callback);
+ }
+
+ // ---------------------------------------------------------
+ //
+ // Server
+ //
+ // ---------------------------------------------------------
+
+ // Reference
+ var self = this;
+ // Names of methods we need to wrap
+ var methods = ['command', 'insert', 'update', 'remove'];
+ // Prototype
+ var proto = core.Server.prototype;
+ // Core server method we are going to wrap
+ methods.forEach(function(x) {
+ var func = proto[x];
+
+ // Add to overloaded methods
+ self.overloads.push({proto: proto, name:x, func:func});
+
+ // The actual prototype
+ proto[x] = function() {
+ var requestId = core.Query.nextRequestId();
+ // Get the arguments
+ var args = Array.prototype.slice.call(arguments, 0);
+ var ns = args[0];
+ var commandObj = args[1];
+ var options = args[2] || {};
+ var keys = Object.keys(commandObj);
+ var commandName = keys[0];
+ var db = ns.split('.')[0];
+
+ // Do we have a legacy insert/update/remove command
+ if(x == 'insert') { //} && !this.lastIsMaster().maxWireVersion) {
+ commandName = 'insert';
+ // Get the collection
+ var col = ns.split('.');
+ col.shift();
+ col = col.join('.');
+
+ // Re-write the command
+ commandObj = {
+ insert: col, documents: commandObj
+ }
+
+ if(options.writeConcern && Object.keys(options.writeConcern).length > 0) {
+ commandObj.writeConcern = options.writeConcern;
+ }
+
+ commandObj.ordered = options.ordered != undefined ? options.ordered : true;
+ } else if(x == 'update') { // && !this.lastIsMaster().maxWireVersion) {
+ commandName = 'update';
+
+ // Get the collection
+ var col = ns.split('.');
+ col.shift();
+ col = col.join('.');
+
+ // Re-write the command
+ commandObj = {
+ update: col, updates: commandObj
+ }
+
+ if(options.writeConcern && Object.keys(options.writeConcern).length > 0) {
+ commandObj.writeConcern = options.writeConcern;
+ }
+
+ commandObj.ordered = options.ordered != undefined ? options.ordered : true;
+ } else if(x == 'remove') { //&& !this.lastIsMaster().maxWireVersion) {
+ commandName = 'delete';
+
+ // Get the collection
+ var col = ns.split('.');
+ col.shift();
+ col = col.join('.');
+
+ // Re-write the command
+ commandObj = {
+ delete: col, deletes: commandObj
+ }
+
+ if(options.writeConcern && Object.keys(options.writeConcern).length > 0) {
+ commandObj.writeConcern = options.writeConcern;
+ }
+
+ commandObj.ordered = options.ordered != undefined ? options.ordered : true;
+ // } else if(x == 'insert' || x == 'update' || x == 'remove' && this.lastIsMaster().maxWireVersion >= 2) {
+ // // Skip the insert/update/remove commands as they are executed as actual write commands in 2.6 or higher
+ // return func.apply(this, args);
+ }
+
+ // Get the callback
+ var callback = args.pop();
+ // Set current callback operation id from the current context or create
+ // a new one
+ var ourOpId = callback.operationId || operationIdGenerator.next();
+
+ // Get a connection reference for this server instance
+ var connection = this.s.pool.get()
+
+ // Emit the start event for the command
+ var command = {
+ // Returns the command.
+ command: commandObj,
+ // Returns the database name.
+ databaseName: db,
+ // Returns the command name.
+ commandName: commandName,
+ // Returns the driver generated request id.
+ requestId: requestId,
+ // Returns the driver generated operation id.
+ // This is used to link events together such as bulk write operations. OPTIONAL.
+ operationId: ourOpId,
+ // Returns the connection id for the command. For languages that do not have this,
+ // this MUST return the driver equivalent which MUST include the server address and port.
+ // The name of this field is flexible to match the object that is returned from the driver.
+ connectionId: connection
+ };
+
+ // Filter out any sensitive commands
+ if(senstiveCommands.indexOf(commandName.toLowerCase()) != -1) {
+ command.commandObj = {};
+ command.commandObj[commandName] = true;
+ }
+
+ // Emit the started event
+ self.emit('started', command)
+
+ // Start time
+ var startTime = timestampGenerator.current();
+
+ // Push our handler callback
+ args.push(function(err, r) {
+ var endTime = timestampGenerator.current();
+ var command = {
+ duration: timestampGenerator.duration(startTime, endTime),
+ commandName: commandName,
+ requestId: requestId,
+ operationId: ourOpId,
+ connectionId: connection
+ };
+
+ // If we have an error
+ if(err || (r && r.result && r.result.ok == 0)) {
+ command.failure = err || r.result.writeErrors || r.result;
+
+ // Filter out any sensitive commands
+ if(senstiveCommands.indexOf(commandName.toLowerCase()) != -1) {
+ command.failure = {};
+ }
+
+ self.emit('failed', command);
+ } else if(commandObj && commandObj.writeConcern
+ && commandObj.writeConcern.w == 0) {
+ // If we have write concern 0
+ command.reply = {ok:1};
+ self.emit('succeeded', command);
+ } else {
+ command.reply = r && r.result ? r.result : r;
+
+ // Filter out any sensitive commands
+ if(senstiveCommands.indexOf(commandName.toLowerCase()) != -1) {
+ command.reply = {};
+ }
+
+ self.emit('succeeded', command);
+ }
+
+ // Return to caller
+ callback(err, r);
+ });
+
+ // Apply the call
+ func.apply(this, args);
+ }
+ });
+
+ // ---------------------------------------------------------
+ //
+ // Bulk Operations
+ //
+ // ---------------------------------------------------------
+
+ // Inject ourselves into the Bulk methods
+ var methods = ['execute'];
+ var prototypes = [
+ require('./bulk/ordered').Bulk.prototype,
+ require('./bulk/unordered').Bulk.prototype
+ ]
+
+ prototypes.forEach(function(proto) {
+ // Core server method we are going to wrap
+ methods.forEach(function(x) {
+ var func = proto[x];
+
+ // Add to overloaded methods
+ self.overloads.push({proto: proto, name:x, func:func});
+
+ // The actual prototype
+ proto[x] = function() {
+ var bulk = this;
+ // Get the arguments
+ var args = Array.prototype.slice.call(arguments, 0);
+ // Set an operation Id on the bulk object
+ this.operationId = operationIdGenerator.next();
+
+ // Get the callback
+ var callback = args.pop();
+ // If we have a callback use this
+ if(typeof callback == 'function') {
+ args.push(function(err, r) {
+ // Return to caller
+ callback(err, r);
+ });
+
+ // Apply the call
+ func.apply(this, args);
+ } else {
+ return func.apply(this, args);
+ }
+ }
+ });
+ });
+
+ // ---------------------------------------------------------
+ //
+ // Cursor
+ //
+ // ---------------------------------------------------------
+
+ // Inject ourselves into the Cursor methods
+ var methods = ['_find', '_getmore', '_killcursor'];
+ var prototypes = [
+ require('./cursor').prototype,
+ require('./command_cursor').prototype,
+ require('./aggregation_cursor').prototype
+ ]
+
+ // Command name translation
+ var commandTranslation = {
+ '_find': 'find', '_getmore': 'getMore', '_killcursor': 'killCursors', '_explain': 'explain'
+ }
+
+ prototypes.forEach(function(proto) {
+
+ // Core server method we are going to wrap
+ methods.forEach(function(x) {
+ var func = proto[x];
+
+ // Add to overloaded methods
+ self.overloads.push({proto: proto, name:x, func:func});
+
+ // The actual prototype
+ proto[x] = function() {
+ var cursor = this;
+ var requestId = core.Query.nextRequestId();
+ var ourOpId = operationIdGenerator.next();
+ var parts = this.ns.split('.');
+ var db = parts[0];
+
+ // Get the collection
+ parts.shift();
+ var collection = parts.join('.');
+
+ // Set the command
+ var command = this.query;
+ var cmd = this.s.cmd;
+
+ // If we have a find method, set the operationId on the cursor
+ if(x == '_find') {
+ cursor.operationId = ourOpId;
+ }
+
+ // Do we have a find command rewrite it
+ if(x == '_getmore') {
+ command = {
+ getMore: this.cursorState.cursorId,
+ collection: collection,
+ batchSize: cmd.batchSize
+ }
+
+ if(cmd.maxTimeMS) command.maxTimeMS = cmd.maxTimeMS;
+ } else if(x == '_killcursor') {
+ command = {
+ killCursors: collection,
+ cursors: [this.cursorState.cursorId]
+ }
+ } else if(cmd.find) {
+ command = {
+ find: collection, filter: cmd.query
+ }
+
+ if(cmd.sort) command.sort = cmd.sort;
+ if(cmd.fields) command.projection = cmd.fields;
+ if(cmd.limit && cmd.limit < 0) {
+ command.limit = Math.abs(cmd.limit);
+ command.singleBatch = true;
+ } else if(cmd.limit) {
+ command.limit = Math.abs(cmd.limit);
+ }
+
+ // Options
+ if(cmd.skip) command.skip = cmd.skip;
+ if(cmd.hint) command.hint = cmd.hint;
+ if(cmd.batchSize) command.batchSize = cmd.batchSize;
+ if(typeof cmd.returnKey == 'boolean') command.returnKey = cmd.returnKey;
+ if(cmd.comment) command.comment = cmd.comment;
+ if(cmd.min) command.min = cmd.min;
+ if(cmd.max) command.max = cmd.max;
+ if(cmd.maxScan) command.maxScan = cmd.maxScan;
+ if(cmd.maxTimeMS) command.maxTimeMS = cmd.maxTimeMS;
+
+ // Flags
+ if(typeof cmd.awaitData == 'boolean') command.awaitData = cmd.awaitData;
+ if(typeof cmd.snapshot == 'boolean') command.snapshot = cmd.snapshot;
+ if(typeof cmd.tailable == 'boolean') command.tailable = cmd.tailable;
+ if(typeof cmd.oplogReplay == 'boolean') command.oplogReplay = cmd.oplogReplay;
+ if(typeof cmd.noCursorTimeout == 'boolean') command.noCursorTimeout = cmd.noCursorTimeout;
+ if(typeof cmd.partial == 'boolean') command.partial = cmd.partial;
+ if(typeof cmd.showDiskLoc == 'boolean') command.showRecordId = cmd.showDiskLoc;
+
+ // Read Concern
+ if(cmd.readConcern) command.readConcern = cmd.readConcern;
+
+ // Override method
+ if(cmd.explain) command.explain = cmd.explain;
+ if(cmd.exhaust) command.exhaust = cmd.exhaust;
+
+ // If we have a explain flag
+ if(cmd.explain) {
+ // Create fake explain command
+ command = {
+ explain: command,
+ verbosity: 'allPlansExecution'
+ }
+
+ // Set readConcern on the command if available
+ if(cmd.readConcern) command.readConcern = cmd.readConcern
+
+ // Set up the _explain name for the command
+ x = '_explain';
+ }
+ } else {
+ command = cmd;
+ }
+
+ // Set up the connection
+ var connectionId = null;
+
+ // Set local connection
+ if(this.connection) connectionId = this.connection;
+ if(!connectionId && this.server && this.server.getConnection) connectionId = this.server.getConnection();
+
+ // Get the command Name
+ var commandName = x == '_find' ? Object.keys(command)[0] : commandTranslation[x];
+
+ // Emit the start event for the command
+ var command = {
+ // Returns the command.
+ command: command,
+ // Returns the database name.
+ databaseName: db,
+ // Returns the command name.
+ commandName: commandName,
+ // Returns the driver generated request id.
+ requestId: requestId,
+ // Returns the driver generated operation id.
+ // This is used to link events together such as bulk write operations. OPTIONAL.
+ operationId: this.operationId,
+ // Returns the connection id for the command. For languages that do not have this,
+ // this MUST return the driver equivalent which MUST include the server address and port.
+ // The name of this field is flexible to match the object that is returned from the driver.
+ connectionId: connectionId
+ };
+
+ // Get the arguments
+ var args = Array.prototype.slice.call(arguments, 0);
+
+ // Get the callback
+ var callback = args.pop();
+
+ // We do not have a callback but a Promise
+ if(typeof callback == 'function' || command.commandName == 'killCursors') {
+ var startTime = timestampGenerator.current();
+ // Emit the started event
+ self.emit('started', command)
+
+ // Emit succeeded event with killcursor if we have a legacy protocol
+ if(command.commandName == 'killCursors'
+ && this.server.lastIsMaster()
+ && this.server.lastIsMaster().maxWireVersion < 4) {
+ // Emit the succeeded command
+ var command = {
+ duration: timestampGenerator.duration(startTime, timestampGenerator.current()),
+ commandName: commandName,
+ requestId: requestId,
+ operationId: cursor.operationId,
+ connectionId: cursor.server.getConnection(),
+ reply: [{ok:1}]
+ };
+
+ // Emit the command
+ return self.emit('succeeded', command)
+ }
+
+ // Add our callback handler
+ args.push(function(err, r) {
+ if(err) {
+ // Command
+ var command = {
+ duration: timestampGenerator.duration(startTime, timestampGenerator.current()),
+ commandName: commandName,
+ requestId: requestId,
+ operationId: ourOpId,
+ connectionId: cursor.server.getConnection(),
+ failure: err };
+
+ // Emit the command
+ self.emit('failed', command)
+ } else {
+ // Do we have a getMore
+ if(commandName.toLowerCase() == 'getmore' && r == null) {
+ r = {
+ cursor: {
+ id: cursor.cursorState.cursorId,
+ ns: cursor.ns,
+ nextBatch: cursor.cursorState.documents
+ }, ok:1
+ }
+ } else if(commandName.toLowerCase() == 'find' && r == null) {
+ r = {
+ cursor: {
+ id: cursor.cursorState.cursorId,
+ ns: cursor.ns,
+ firstBatch: cursor.cursorState.documents
+ }, ok:1
+ }
+ } else if(commandName.toLowerCase() == 'killcursors' && r == null) {
+ r = {
+ cursorsUnknown:[cursor.cursorState.lastCursorId],
+ ok:1
+ }
+ }
+
+ // cursor id is zero, we can issue success command
+ var command = {
+ duration: timestampGenerator.duration(startTime, timestampGenerator.current()),
+ commandName: commandName,
+ requestId: requestId,
+ operationId: cursor.operationId,
+ connectionId: cursor.server.getConnection(),
+ reply: r && r.result ? r.result : r
+ };
+
+ // Emit the command
+ self.emit('succeeded', command)
+ }
+
+ // Return
+ if(!callback) return;
+
+ // Return to caller
+ callback(err, r);
+ });
+
+ // Apply the call
+ func.apply(this, args);
+ } else {
+ // Assume promise, push back the missing value
+ args.push(callback);
+ // Get the promise
+ var promise = func.apply(this, args);
+ // Return a new promise
+ return new cursor.s.promiseLibrary(function(resolve, reject) {
+ var startTime = timestampGenerator.current();
+ // Emit the started event
+ self.emit('started', command)
+ // Execute the function
+ promise.then(function(r) {
+ // cursor id is zero, we can issue success command
+ var command = {
+ duration: timestampGenerator.duration(startTime, timestampGenerator.current()),
+ commandName: commandName,
+ requestId: requestId,
+ operationId: cursor.operationId,
+ connectionId: cursor.server.getConnection(),
+ reply: cursor.cursorState.documents
+ };
+
+ // Emit the command
+ self.emit('succeeded', command)
+ }).catch(function(err) {
+ // Command
+ var command = {
+ duration: timestampGenerator.duration(startTime, timestampGenerator.current()),
+ commandName: commandName,
+ requestId: requestId,
+ operationId: ourOpId,
+ connectionId: cursor.server.getConnection(),
+ failure: err };
+
+ // Emit the command
+ self.emit('failed', command)
+ // reject the promise
+ reject(err);
+ });
+ });
+ }
+ }
+ });
+ });
+}
+
+inherits(Instrumentation, EventEmitter);
+
+Instrumentation.prototype.uninstrument = function() {
+ for(var i = 0; i < this.overloads.length; i++) {
+ var obj = this.overloads[i];
+ obj.proto[obj.name] = obj.func;
+ }
+
+ // Remove all listeners
+ this.removeAllListeners('started');
+ this.removeAllListeners('succeeded');
+ this.removeAllListeners('failed');
+}
+
+module.exports = Instrumentation;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/bulk/common.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/bulk/common.js
new file mode 100644
index 0000000..ff723bb
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/bulk/common.js
@@ -0,0 +1,440 @@
+"use strict";
+
+var utils = require('../utils'),
+ Long = require('mongodb-core').BSON.Long,
+ Timestamp = require('mongodb-core').BSON.Timestamp;
+
+// Error codes
+var UNKNOWN_ERROR = 8;
+var INVALID_BSON_ERROR = 22;
+var WRITE_CONCERN_ERROR = 64;
+var MULTIPLE_ERROR = 65;
+
+// Insert types
+var INSERT = 1;
+var UPDATE = 2;
+var REMOVE = 3;
+
+
+// Get write concern
+var writeConcern = function(target, col, options) {
+ var writeConcern = {};
+
+ // Collection level write concern
+ if(col.writeConcern && col.writeConcern.w != null) writeConcern.w = col.writeConcern.w;
+ if(col.writeConcern && col.writeConcern.j != null) writeConcern.j = col.writeConcern.j;
+ if(col.writeConcern && col.writeConcern.fsync != null) writeConcern.fsync = col.writeConcern.fsync;
+ if(col.writeConcern && col.writeConcern.wtimeout != null) writeConcern.wtimeout = col.writeConcern.wtimeout;
+
+ // Options level write concern
+ if(options && options.w != null) writeConcern.w = options.w;
+ if(options && options.wtimeout != null) writeConcern.wtimeout = options.wtimeout;
+ if(options && options.j != null) writeConcern.j = options.j;
+ if(options && options.fsync != null) writeConcern.fsync = options.fsync;
+
+ // Return write concern
+ return writeConcern;
+}
+
+/**
+ * Helper function to define properties
+ * @ignore
+ */
+var defineReadOnlyProperty = function(self, name, value) {
+ Object.defineProperty(self, name, {
+ enumerable: true
+ , get: function() {
+ return value;
+ }
+ });
+}
+
+/**
+ * Keeps the state of an unordered batch so we can rewrite the results
+ * correctly after command execution
+ * @ignore
+ */
+var Batch = function(batchType, originalZeroIndex) {
+ this.originalZeroIndex = originalZeroIndex;
+ this.currentIndex = 0;
+ this.originalIndexes = [];
+ this.batchType = batchType;
+ this.operations = [];
+ this.size = 0;
+ this.sizeBytes = 0;
+}
+
+/**
+ * Wraps a legacy operation so we can correctly rewrite its error
+ * @ignore
+ */
+var LegacyOp = function(batchType, operation, index) {
+ this.batchType = batchType;
+ this.index = index;
+ this.operation = operation;
+}
+
+/**
+ * Create a new BulkWriteResult instance (INTERNAL TYPE, do not instantiate directly)
+ *
+ * @class
+ * @property {boolean} ok Did bulk operation correctly execute
+ * @property {number} nInserted number of inserted documents
+ * @property {number} nMatched Number of documents matched by update selectors
+ * @property {number} nUpserted Number of upserted documents
+ * @property {number} nModified Number of documents updated physically on disk
+ * @property {number} nRemoved Number of removed documents
+ * @return {BulkWriteResult} a BulkWriteResult instance
+ */
+var BulkWriteResult = function(bulkResult) {
+ defineReadOnlyProperty(this, "ok", bulkResult.ok);
+ defineReadOnlyProperty(this, "nInserted", bulkResult.nInserted);
+ defineReadOnlyProperty(this, "nUpserted", bulkResult.nUpserted);
+ defineReadOnlyProperty(this, "nMatched", bulkResult.nMatched);
+ defineReadOnlyProperty(this, "nModified", bulkResult.nModified);
+ defineReadOnlyProperty(this, "nRemoved", bulkResult.nRemoved);
+
+ /**
+ * Return an array of inserted ids
+ *
+ * @return {object[]}
+ */
+ this.getInsertedIds = function() {
+ return bulkResult.insertedIds;
+ }
+
+ /**
+ * Return an array of upserted ids
+ *
+ * @return {object[]}
+ */
+ this.getUpsertedIds = function() {
+ return bulkResult.upserted;
+ }
+
+ /**
+ * Return the upserted id at position x
+ *
+ * @param {number} index Index of the upserted id to return; returns undefined if no result exists for the passed-in index
+ * @return {object}
+ */
+ this.getUpsertedIdAt = function(index) {
+ return bulkResult.upserted[index];
+ }
+
+ /**
+ * Return raw internal result
+ *
+ * @return {object}
+ */
+ this.getRawResponse = function() {
+ return bulkResult;
+ }
+
+ /**
+ * Returns true if the bulk operation contains a write error
+ *
+ * @return {boolean}
+ */
+ this.hasWriteErrors = function() {
+ return bulkResult.writeErrors.length > 0;
+ }
+
+ /**
+ * Returns the number of write errors from the bulk operation
+ *
+ * @return {number}
+ */
+ this.getWriteErrorCount = function() {
+ return bulkResult.writeErrors.length;
+ }
+
+ /**
+ * Returns a specific write error object
+ *
+ * @return {WriteError}
+ */
+ this.getWriteErrorAt = function(index) {
+ if(index < bulkResult.writeErrors.length) {
+ return bulkResult.writeErrors[index];
+ }
+ return null;
+ }
+
+ /**
+ * Retrieve all write errors
+ *
+ * @return {object[]}
+ */
+ this.getWriteErrors = function() {
+ return bulkResult.writeErrors;
+ }
+
+ /**
+ * Retrieve lastOp if available
+ *
+ * @return {object}
+ */
+ this.getLastOp = function() {
+ return bulkResult.lastOp;
+ }
+
+ /**
+ * Retrieve the write concern error if any
+ *
+ * @return {WriteConcernError}
+ */
+ this.getWriteConcernError = function() {
+ if(bulkResult.writeConcernErrors.length == 0) {
+ return null;
+ } else if(bulkResult.writeConcernErrors.length == 1) {
+ // Return the error
+ return bulkResult.writeConcernErrors[0];
+ } else {
+
+ // Combine the errors
+ var errmsg = "";
+ for(var i = 0; i < bulkResult.writeConcernErrors.length; i++) {
+ var err = bulkResult.writeConcernErrors[i];
+ errmsg = errmsg + err.errmsg;
+
+ // TODO: Something better
+ if(i != bulkResult.writeConcernErrors.length - 1) errmsg = errmsg + " and ";
+ }
+
+ return new WriteConcernError({ errmsg : errmsg, code : WRITE_CONCERN_ERROR });
+ }
+ }
+
+ this.toJSON = function() {
+ return bulkResult;
+ }
+
+ this.toString = function() {
+ return "BulkWriteResult(" + JSON.stringify(this.toJSON()) + ")";
+ }
+
+ this.isOk = function() {
+ return bulkResult.ok == 1;
+ }
+}
+
+/**
+ * Create a new WriteConcernError instance (INTERNAL TYPE, do not instantiate directly)
+ *
+ * @class
+ * @property {number} code Write concern error code.
+ * @property {string} errmsg Write concern error message.
+ * @return {WriteConcernError} a WriteConcernError instance
+ */
+var WriteConcernError = function(err) {
+ if(!(this instanceof WriteConcernError)) return new WriteConcernError(err);
+
+ // Define properties
+ defineReadOnlyProperty(this, "code", err.code);
+ defineReadOnlyProperty(this, "errmsg", err.errmsg);
+
+ this.toJSON = function() {
+ return {code: err.code, errmsg: err.errmsg};
+ }
+
+ this.toString = function() {
+ return "WriteConcernError(" + err.errmsg + ")";
+ }
+}
+
+/**
+ * Create a new WriteError instance (INTERNAL TYPE, do not instantiate directly)
+ *
+ * @class
+ * @property {number} code Write error code.
+ * @property {number} index Original bulk operation index of the write error.
+ * @property {string} errmsg Write error message.
+ * @return {WriteError} a WriteError instance
+ */
+var WriteError = function(err) {
+ if(!(this instanceof WriteError)) return new WriteError(err);
+
+ // Define properties
+ defineReadOnlyProperty(this, "code", err.code);
+ defineReadOnlyProperty(this, "index", err.index);
+ defineReadOnlyProperty(this, "errmsg", err.errmsg);
+
+ //
+ // Define access methods
+ this.getOperation = function() {
+ return err.op;
+ }
+
+ this.toJSON = function() {
+ return {code: err.code, index: err.index, errmsg: err.errmsg, op: err.op};
+ }
+
+ this.toString = function() {
+ return "WriteError(" + JSON.stringify(this.toJSON()) + ")";
+ }
+}
+
+/**
+ * Merges results into shared data structure
+ * @ignore
+ */
+var mergeBatchResults = function(ordered, batch, bulkResult, err, result) {
+ // If we have an error set the result to be the err object
+ if(err) {
+ result = err;
+ } else if(result && result.result) {
+ result = result.result;
+ } else if(result == null) {
+ return;
+ }
+
+ // Do we have a top level error stop processing and return
+ if(result.ok == 0 && bulkResult.ok == 1) {
+ bulkResult.ok = 0;
+
+ var writeError = {
+ index: 0
+ , code: result.code || 0
+ , errmsg: result.message
+ , op: batch.operations[0]
+ };
+
+ bulkResult.writeErrors.push(new WriteError(writeError));
+ return;
+ } else if(result.ok == 0 && bulkResult.ok == 0) {
+ return;
+ }
+
+ // Deal with opTime if available
+ if(result.opTime || result.lastOp) {
+ var opTime = result.lastOp || result.opTime;
+ var lastOpTS = null;
+ var lastOpT = null;
+
+ // We have a time stamp
+ if(opTime instanceof Timestamp) {
+ if(bulkResult.lastOp == null) {
+ bulkResult.lastOp = opTime;
+ } else if(opTime.greaterThan(bulkResult.lastOp)) {
+ bulkResult.lastOp = opTime;
+ }
+ } else {
+ // Existing TS
+ if(bulkResult.lastOp) {
+ lastOpTS = typeof bulkResult.lastOp.ts == 'number'
+ ? Long.fromNumber(bulkResult.lastOp.ts) : bulkResult.lastOp.ts;
+ lastOpT = typeof bulkResult.lastOp.t == 'number'
+ ? Long.fromNumber(bulkResult.lastOp.t) : bulkResult.lastOp.t;
+ }
+
+ // Current OpTime TS
+ var opTimeTS = typeof opTime.ts == 'number'
+ ? Long.fromNumber(opTime.ts) : opTime.ts;
+ var opTimeT = typeof opTime.t == 'number'
+ ? Long.fromNumber(opTime.t) : opTime.t;
+
+ // Compare the opTime's
+ if(bulkResult.lastOp == null) {
+ bulkResult.lastOp = opTime;
+ } else if(opTimeTS.greaterThan(lastOpTS)) {
+ bulkResult.lastOp = opTime;
+ } else if(opTimeTS.equals(lastOpTS)) {
+ if(opTimeT.greaterThan(lastOpT)) {
+ bulkResult.lastOp = opTime;
+ }
+ }
+ }
+ }
+
+ // If we have an insert Batch type
+ if(batch.batchType == INSERT && result.n) {
+ bulkResult.nInserted = bulkResult.nInserted + result.n;
+ }
+
+ // If we have an insert Batch type
+ if(batch.batchType == REMOVE && result.n) {
+ bulkResult.nRemoved = bulkResult.nRemoved + result.n;
+ }
+
+ var nUpserted = 0;
+
+ // We have an array of upserted values, we need to rewrite the indexes
+ if(Array.isArray(result.upserted)) {
+ nUpserted = result.upserted.length;
+
+ for(var i = 0; i < result.upserted.length; i++) {
+ bulkResult.upserted.push({
+ index: result.upserted[i].index + batch.originalZeroIndex
+ , _id: result.upserted[i]._id
+ });
+ }
+ } else if(result.upserted) {
+
+ nUpserted = 1;
+
+ bulkResult.upserted.push({
+ index: batch.originalZeroIndex
+ , _id: result.upserted
+ });
+ }
+
+ // If we have an update Batch type
+ if(batch.batchType == UPDATE && result.n) {
+ var nModified = result.nModified;
+ bulkResult.nUpserted = bulkResult.nUpserted + nUpserted;
+ bulkResult.nMatched = bulkResult.nMatched + (result.n - nUpserted);
+
+ if(typeof nModified == 'number') {
+ bulkResult.nModified = bulkResult.nModified + nModified;
+ } else {
+ bulkResult.nModified = null;
+ }
+ }
+
+ if(Array.isArray(result.writeErrors)) {
+ for(var i = 0; i < result.writeErrors.length; i++) {
+
+ var writeError = {
+ index: batch.originalZeroIndex + result.writeErrors[i].index
+ , code: result.writeErrors[i].code
+ , errmsg: result.writeErrors[i].errmsg
+ , op: batch.operations[result.writeErrors[i].index]
+ };
+
+ bulkResult.writeErrors.push(new WriteError(writeError));
+ }
+ }
+
+ if(result.writeConcernError) {
+ bulkResult.writeConcernErrors.push(new WriteConcernError(result.writeConcernError));
+ }
+}
+
+//
+// Clone the options
+var cloneOptions = function(options) {
+ var clone = {};
+ var keys = Object.keys(options);
+ for(var i = 0; i < keys.length; i++) {
+ clone[keys[i]] = options[keys[i]];
+ }
+
+ return clone;
+}
+
+// Exports symbols
+exports.BulkWriteResult = BulkWriteResult;
+exports.WriteError = WriteError;
+exports.Batch = Batch;
+exports.LegacyOp = LegacyOp;
+exports.mergeBatchResults = mergeBatchResults;
+exports.cloneOptions = cloneOptions;
+exports.writeConcern = writeConcern;
+exports.INVALID_BSON_ERROR = INVALID_BSON_ERROR;
+exports.WRITE_CONCERN_ERROR = WRITE_CONCERN_ERROR;
+exports.MULTIPLE_ERROR = MULTIPLE_ERROR;
+exports.UNKNOWN_ERROR = UNKNOWN_ERROR;
+exports.INSERT = INSERT;
+exports.UPDATE = UPDATE;
+exports.REMOVE = REMOVE;
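The `writeConcern()` helper above applies a simple precedence rule: collection-level defaults are copied first, then per-call options override them field by field. A minimal standalone re-implementation of that rule (the function and variable names here are illustrative, not the driver's API):

```javascript
// Option-level write concern fields override collection-level defaults,
// mirroring the field-by-field merge in writeConcern() above.
function mergeWriteConcern(col, options) {
  var wc = {};
  ['w', 'j', 'fsync', 'wtimeout'].forEach(function(key) {
    if(col.writeConcern && col.writeConcern[key] != null) wc[key] = col.writeConcern[key];
    if(options && options[key] != null) wc[key] = options[key];
  });
  return wc;
}

var merged = mergeWriteConcern(
  { writeConcern: { w: 1, wtimeout: 500 } },  // collection defaults
  { w: 'majority', j: true }                  // per-call options win
);
console.log(JSON.stringify(merged));
// → {"w":"majority","j":true,"wtimeout":500}
```

Note that unset fields (here `fsync`) are omitted entirely rather than defaulted, so the server's own defaults still apply to them.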
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/bulk/ordered.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/bulk/ordered.js
new file mode 100644
index 0000000..b6c183d
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/bulk/ordered.js
@@ -0,0 +1,539 @@
+"use strict";
+
+var common = require('./common')
+ , utils = require('../utils')
+ , toError = require('../utils').toError
+ , f = require('util').format
+ , handleCallback = require('../utils').handleCallback
+ , shallowClone = utils.shallowClone
+ , WriteError = common.WriteError
+ , BulkWriteResult = common.BulkWriteResult
+ , LegacyOp = common.LegacyOp
+ , ObjectID = require('mongodb-core').BSON.ObjectID
+ , Define = require('../metadata')
+ , BSON = require('mongodb-core').BSON
+ , Batch = common.Batch
+ , mergeBatchResults = common.mergeBatchResults;
+
+var bson = new BSON.BSONPure();
+
+/**
+ * Create a FindOperatorsOrdered instance (INTERNAL TYPE, do not instantiate directly)
+ * @class
+ * @return {FindOperatorsOrdered} a FindOperatorsOrdered instance.
+ */
+var FindOperatorsOrdered = function(self) {
+ this.s = self.s;
+}
+
+/**
+ * Add a single update document to the bulk operation
+ *
+ * @method
+ * @param {object} doc update operations
+ * @throws {MongoError}
+ * @return {OrderedBulkOperation}
+ */
+FindOperatorsOrdered.prototype.update = function(updateDocument) {
+ // Perform upsert
+ var upsert = typeof this.s.currentOp.upsert == 'boolean' ? this.s.currentOp.upsert : false;
+
+ // Establish the update command
+ var document = {
+ q: this.s.currentOp.selector
+ , u: updateDocument
+ , multi: true
+ , upsert: upsert
+ }
+
+ // Clear out current Op
+ this.s.currentOp = null;
+ // Add the update document to the list
+ return addToOperationsList(this, common.UPDATE, document);
+}
+
+/**
+ * Add a single update one document to the bulk operation
+ *
+ * @method
+ * @param {object} doc update operations
+ * @throws {MongoError}
+ * @return {OrderedBulkOperation}
+ */
+FindOperatorsOrdered.prototype.updateOne = function(updateDocument) {
+ // Perform upsert
+ var upsert = typeof this.s.currentOp.upsert == 'boolean' ? this.s.currentOp.upsert : false;
+
+ // Establish the update command
+ var document = {
+ q: this.s.currentOp.selector
+ , u: updateDocument
+ , multi: false
+ , upsert: upsert
+ }
+
+ // Clear out current Op
+ this.s.currentOp = null;
+ // Add the update document to the list
+ return addToOperationsList(this, common.UPDATE, document);
+}
+
+/**
+ * Add a replace one operation to the bulk operation
+ *
+ * @method
+ * @param {object} doc the new document to replace the existing one with
+ * @throws {MongoError}
+ * @return {OrderedBulkOperation}
+ */
+FindOperatorsOrdered.prototype.replaceOne = function(updateDocument) {
+ return this.updateOne(updateDocument);
+}
+
+/**
+ * Upsert modifier for update bulk operation
+ *
+ * @method
+ * @throws {MongoError}
+ * @return {FindOperatorsOrdered}
+ */
+FindOperatorsOrdered.prototype.upsert = function() {
+ this.s.currentOp.upsert = true;
+ return this;
+}
+
+/**
+ * Add a remove one operation to the bulk operation
+ *
+ * @method
+ * @throws {MongoError}
+ * @return {OrderedBulkOperation}
+ */
+FindOperatorsOrdered.prototype.deleteOne = function() {
+ // Establish the delete command
+ var document = {
+ q: this.s.currentOp.selector
+ , limit: 1
+ }
+
+ // Clear out current Op
+ this.s.currentOp = null;
+ // Add the remove document to the list
+ return addToOperationsList(this, common.REMOVE, document);
+}
+
+// Backward compatibility
+FindOperatorsOrdered.prototype.removeOne = FindOperatorsOrdered.prototype.deleteOne;
+
+/**
+ * Add a remove operation to the bulk operation
+ *
+ * @method
+ * @throws {MongoError}
+ * @return {OrderedBulkOperation}
+ */
+FindOperatorsOrdered.prototype.delete = function() {
+ // Establish the delete command
+ var document = {
+ q: this.s.currentOp.selector
+ , limit: 0
+ }
+
+ // Clear out current Op
+ this.s.currentOp = null;
+ // Add the remove document to the list
+ return addToOperationsList(this, common.REMOVE, document);
+}
+
+// Backward compatibility
+FindOperatorsOrdered.prototype.remove = FindOperatorsOrdered.prototype.delete;
+
+// Add to internal list of documents
+var addToOperationsList = function(_self, docType, document) {
+ // Get the bsonSize
+ var bsonSize = bson.calculateObjectSize(document, false);
+
+ // Throw error if the doc is bigger than the max BSON size
+ if(bsonSize >= _self.s.maxBatchSizeBytes) {
+ throw toError("document is larger than the maximum size " + _self.s.maxBatchSizeBytes);
+ }
+
+ // Create a new batch object if we don't have a current one
+ if(_self.s.currentBatch == null) _self.s.currentBatch = new Batch(docType, _self.s.currentIndex);
+
+ // Check if we need to create a new batch
+ if(((_self.s.currentBatchSize + 1) >= _self.s.maxWriteBatchSize)
+ || ((_self.s.currentBatchSizeBytes + bsonSize) >= _self.s.maxBatchSizeBytes)
+ || (_self.s.currentBatch.batchType != docType)) {
+ // Save the batch to the execution stack
+ _self.s.batches.push(_self.s.currentBatch);
+
+ // Create a new batch
+ _self.s.currentBatch = new Batch(docType, _self.s.currentIndex);
+
+ // Reset the current size trackers
+ _self.s.currentBatchSize = 0;
+ _self.s.currentBatchSizeBytes = 0;
+ } else {
+ // Update current batch size
+ _self.s.currentBatchSize = _self.s.currentBatchSize + 1;
+ _self.s.currentBatchSizeBytes = _self.s.currentBatchSizeBytes + bsonSize;
+ }
+
+ if(docType == common.INSERT) {
+ _self.s.bulkResult.insertedIds.push({index: _self.s.currentIndex, _id: document._id});
+ }
+
+ // We have an array of documents
+ if(Array.isArray(document)) {
+ throw toError("operation passed in cannot be an Array");
+ } else {
+ _self.s.currentBatch.originalIndexes.push(_self.s.currentIndex);
+ _self.s.currentBatch.operations.push(document)
+ _self.s.currentBatchSizeBytes = _self.s.currentBatchSizeBytes + bsonSize;
+ _self.s.currentIndex = _self.s.currentIndex + 1;
+ }
+
+ // Return self
+ return _self;
+}
+
+/**
+ * Create a new OrderedBulkOperation instance (INTERNAL TYPE, do not instantiate directly)
+ * @class
+ * @property {number} length Get the number of operations in the bulk.
+ * @return {OrderedBulkOperation} a OrderedBulkOperation instance.
+ */
+function OrderedBulkOperation(topology, collection, options) {
+ options = options == null ? {} : options;
+ // TODO: Bring in driver information from isMaster
+ var self = this;
+ var executed = false;
+
+ // Current item
+ var currentOp = null;
+
+ // Handle to the bson serializer, used to calculate running sizes
+ var bson = topology.bson;
+
+ // Namespace for the operation
+ var namespace = collection.collectionName;
+
+ // Set max byte size
+ var maxBatchSizeBytes = topology.isMasterDoc && topology.isMasterDoc.maxBsonObjectSize
+ ? topology.isMasterDoc.maxBsonObjectSize : (1024*1024*16);
+ var maxWriteBatchSize = topology.isMasterDoc && topology.isMasterDoc.maxWriteBatchSize
+ ? topology.isMasterDoc.maxWriteBatchSize : 1000;
+
+ // Get the write concern
+ var writeConcern = common.writeConcern(shallowClone(options), collection, options);
+
+ // Get the promiseLibrary
+ var promiseLibrary = options.promiseLibrary;
+
+ // No promise library selected fall back
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // Current batch
+ var currentBatch = null;
+ var currentIndex = 0;
+ var currentBatchSize = 0;
+ var currentBatchSizeBytes = 0;
+ var batches = [];
+
+ // Final results
+ var bulkResult = {
+ ok: 1
+ , writeErrors: []
+ , writeConcernErrors: []
+ , insertedIds: []
+ , nInserted: 0
+ , nUpserted: 0
+ , nMatched: 0
+ , nModified: 0
+ , nRemoved: 0
+ , upserted: []
+ };
+
+ // Internal state
+ this.s = {
+ // Final result
+ bulkResult: bulkResult
+ // Current batch state
+ , currentBatch: null
+ , currentIndex: 0
+ , currentBatchSize: 0
+ , currentBatchSizeBytes: 0
+ , batches: []
+ // Write concern
+ , writeConcern: writeConcern
+ // Max batch size options
+ , maxBatchSizeBytes: maxBatchSizeBytes
+ , maxWriteBatchSize: maxWriteBatchSize
+ // Namespace
+ , namespace: namespace
+ // BSON
+ , bson: bson
+ // Topology
+ , topology: topology
+ // Options
+ , options: options
+ // Current operation
+ , currentOp: currentOp
+ // Executed
+ , executed: executed
+ // Collection
+ , collection: collection
+ // Promise Library
+ , promiseLibrary: promiseLibrary
+ // Fundamental error
+ , err: null
+ // Bypass validation
+ , bypassDocumentValidation: typeof options.bypassDocumentValidation == 'boolean' ? options.bypassDocumentValidation : false
+ }
+}
+
+var define = OrderedBulkOperation.define = new Define('OrderedBulkOperation', OrderedBulkOperation, false);
+
+OrderedBulkOperation.prototype.raw = function(op) {
+ var key = Object.keys(op)[0];
+
+ // Set up the force server object id
+ var forceServerObjectId = typeof this.s.options.forceServerObjectId == 'boolean'
+ ? this.s.options.forceServerObjectId : this.s.collection.s.db.options.forceServerObjectId;
+
+ // Update operations
+ if((op.updateOne && op.updateOne.q)
+ || (op.updateMany && op.updateMany.q)
+ || (op.replaceOne && op.replaceOne.q)) {
+ op[key].multi = op.updateOne || op.replaceOne ? false : true;
+ return addToOperationsList(this, common.UPDATE, op[key]);
+ }
+
+ // Crud spec update format
+ if(op.updateOne || op.updateMany || op.replaceOne) {
+ var multi = op.updateOne || op.replaceOne ? false : true;
+ var operation = {q: op[key].filter, u: op[key].update || op[key].replacement, multi: multi}
+ operation.upsert = op[key].upsert ? true: false;
+ if(op.collation) operation.collation = op.collation;
+ return addToOperationsList(this, common.UPDATE, operation);
+ }
+
+ // Remove operations
+  if(op.removeOne || op.removeMany || (op.deleteOne && op.deleteOne.q) || (op.deleteMany && op.deleteMany.q)) {
+ op[key].limit = op.removeOne ? 1 : 0;
+ return addToOperationsList(this, common.REMOVE, op[key]);
+ }
+
+ // Crud spec delete operations, less efficient
+ if(op.deleteOne || op.deleteMany) {
+ var limit = op.deleteOne ? 1 : 0;
+ var operation = {q: op[key].filter, limit: limit}
+ if(op.collation) operation.collation = op.collation;
+ return addToOperationsList(this, common.REMOVE, operation);
+ }
+
+ // Insert operations
+ if(op.insertOne && op.insertOne.document == null) {
+ if(forceServerObjectId !== true && op.insertOne._id == null) op.insertOne._id = new ObjectID();
+ return addToOperationsList(this, common.INSERT, op.insertOne);
+ } else if(op.insertOne && op.insertOne.document) {
+ if(forceServerObjectId !== true && op.insertOne.document._id == null) op.insertOne.document._id = new ObjectID();
+ return addToOperationsList(this, common.INSERT, op.insertOne.document);
+ }
+
+ if(op.insertMany) {
+ for(var i = 0; i < op.insertMany.length; i++) {
+ if(forceServerObjectId !== true && op.insertMany[i]._id == null) op.insertMany[i]._id = new ObjectID();
+ addToOperationsList(this, common.INSERT, op.insertMany[i]);
+ }
+
+ return;
+ }
+
+ // No valid type of operation
+ throw toError("bulkWrite only supports insertOne, insertMany, updateOne, updateMany, removeOne, removeMany, deleteOne, deleteMany");
+}
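The translation `raw` performs from crud-spec operations (`updateOne`, `deleteMany`, ...) to wire-protocol documents (`{q, u, multi}` / `{q, limit}`) can be illustrated with a standalone sketch. This is a simplified, hypothetical re-implementation for illustration only, not the driver's internal API:

```javascript
// Hypothetical sketch of the crud-spec translation done by raw() above:
// updateOne/updateMany/replaceOne specs become {q, u, multi, upsert} wire
// operations; deleteOne/deleteMany become {q, limit}.
function toWireOp(op) {
  var key = Object.keys(op)[0];
  if (op.updateOne || op.updateMany || op.replaceOne) {
    return {
      q: op[key].filter,
      u: op[key].update || op[key].replacement,
      multi: key === 'updateMany',        // only updateMany targets multiple docs
      upsert: op[key].upsert ? true : false
    };
  }
  if (op.deleteOne || op.deleteMany) {
    return { q: op[key].filter, limit: op.deleteOne ? 1 : 0 }; // limit 0 = all matches
  }
  throw new Error('unsupported operation type: ' + key);
}

console.log(JSON.stringify(toWireOp({ updateOne: { filter: { a: 1 }, update: { $set: { b: 2 } } } })));
// {"q":{"a":1},"u":{"$set":{"b":2}},"multi":false,"upsert":false}
```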
+
+/**
+ * Add a single insert document to the bulk operation
+ *
+ * @param {object} doc the document to insert
+ * @throws {MongoError}
+ * @return {OrderedBulkOperation}
+ */
+OrderedBulkOperation.prototype.insert = function(document) {
+ if(this.s.collection.s.db.options.forceServerObjectId !== true && document._id == null) document._id = new ObjectID();
+ return addToOperationsList(this, common.INSERT, document);
+}
+
+/**
+ * Initiate a find operation for an update/updateOne/remove/removeOne/replaceOne
+ *
+ * @method
+ * @param {object} selector The selector for the bulk operation.
+ * @throws {MongoError}
+ * @return {FindOperatorsOrdered}
+ */
+OrderedBulkOperation.prototype.find = function(selector) {
+ if (!selector) {
+ throw toError("Bulk find operation must specify a selector");
+ }
+
+ // Save a current selector
+ this.s.currentOp = {
+ selector: selector
+ }
+
+ return new FindOperatorsOrdered(this);
+}
+
+Object.defineProperty(OrderedBulkOperation.prototype, 'length', {
+ enumerable: true,
+ get: function() {
+ return this.s.currentIndex;
+ }
+});
+
+//
+// Execute next write command in a chain
+var executeCommands = function(self, callback) {
+ if(self.s.batches.length == 0) {
+ return handleCallback(callback, null, new BulkWriteResult(self.s.bulkResult));
+ }
+
+ // Ordered execution of the command
+ var batch = self.s.batches.shift();
+
+ var resultHandler = function(err, result) {
+ // Error is a driver related error not a bulk op error, terminate
+ if(err && err.driver || err && err.message) {
+ return handleCallback(callback, err);
+ }
+
+    // If we have an error
+ if(err) err.ok = 0;
+ // Merge the results together
+ var mergeResult = mergeBatchResults(true, batch, self.s.bulkResult, err, result);
+ if(mergeResult != null) {
+ return handleCallback(callback, null, new BulkWriteResult(self.s.bulkResult));
+ }
+
+    // We are ordered, so if there are any write errors
+    // terminate execution of the remaining batches
+ if(self.s.bulkResult.writeErrors.length > 0) {
+ return handleCallback(callback, toError(self.s.bulkResult.writeErrors[0]), new BulkWriteResult(self.s.bulkResult));
+ }
+
+ // Execute the next command in line
+ executeCommands(self, callback);
+ }
+
+ var finalOptions = {ordered: true}
+ if(self.s.writeConcern != null) {
+ finalOptions.writeConcern = self.s.writeConcern;
+ }
+
+  // Set an operationId if provided
+ if(self.operationId) {
+ resultHandler.operationId = self.operationId;
+ }
+
+ // Serialize functions
+ if(self.s.options.serializeFunctions) {
+ finalOptions.serializeFunctions = true
+ }
+
+  // Ignore undefined fields
+ if(self.s.options.ignoreUndefined) {
+ finalOptions.ignoreUndefined = true
+ }
+
+  // Pass on the bypassDocumentValidation option if specified
+ if(self.s.bypassDocumentValidation == true) {
+ finalOptions.bypassDocumentValidation = true;
+ }
+
+ try {
+ if(batch.batchType == common.INSERT) {
+ self.s.topology.insert(self.s.collection.namespace, batch.operations, finalOptions, resultHandler);
+ } else if(batch.batchType == common.UPDATE) {
+ self.s.topology.update(self.s.collection.namespace, batch.operations, finalOptions, resultHandler);
+ } else if(batch.batchType == common.REMOVE) {
+ self.s.topology.remove(self.s.collection.namespace, batch.operations, finalOptions, resultHandler);
+ }
+ } catch(err) {
+ // Force top level error
+ err.ok = 0;
+ // Merge top level error and return
+ handleCallback(callback, null, mergeBatchResults(false, batch, self.s.bulkResult, err, null));
+ }
+}
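The control flow of `executeCommands` above — drain the batches one at a time and stop at the first error — can be sketched in isolation. The helper names below are hypothetical and no MongoDB server is involved:

```javascript
// Hypothetical sketch of the ordered execution chain: each batch is shifted
// off the queue and executed only after the previous one succeeded; the
// chain halts at the first error, as an ordered bulk operation requires.
function runOrdered(batches, execute, callback) {
  var results = [];
  (function next() {
    if (batches.length === 0) return callback(null, results);
    execute(batches.shift(), function(err, result) {
      if (err) return callback(err, results); // halt the chain on first error
      results.push(result);
      next();                                  // recurse into the next batch
    });
  })();
}

runOrdered([1, 2, 3], function(batch, cb) {
  cb(batch === 2 ? new Error('write error in batch 2') : null, batch * 10);
}, function(err, results) {
  console.log(err && err.message, results); // write error in batch 2 [ 10 ]
});
```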
+
+/**
+ * The callback format for results
+ * @callback OrderedBulkOperation~resultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {BulkWriteResult} result The bulk write result.
+ */
+
+/**
+ * Execute the ordered bulk operation
+ *
+ * @method
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.fsync=false] Specify a file sync write concern.
+ * @param {OrderedBulkOperation~resultCallback} [callback] The result callback
+ * @throws {MongoError}
+ * @return {Promise} returns Promise if no callback passed
+ */
+OrderedBulkOperation.prototype.execute = function(_writeConcern, callback) {
+ var self = this;
+  if(this.s.executed) throw toError("batch cannot be re-executed");
+ if(typeof _writeConcern == 'function') {
+ callback = _writeConcern;
+ } else {
+ this.s.writeConcern = _writeConcern;
+ }
+
+ // If we have current batch
+ if(this.s.currentBatch) this.s.batches.push(this.s.currentBatch)
+
+ // If we have no operations in the bulk raise an error
+ if(this.s.batches.length == 0) {
+ throw toError("Invalid Operation, No operations in bulk");
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') {
+ return executeCommands(this, callback);
+ }
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ executeCommands(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('execute', {callback: true, promise:false});
+
+/**
+ * Returns an ordered batch object
+ * @ignore
+ */
+var initializeOrderedBulkOp = function(topology, collection, options) {
+ return new OrderedBulkOperation(topology, collection, options);
+}
+
+initializeOrderedBulkOp.OrderedBulkOperation = OrderedBulkOperation;
+module.exports = initializeOrderedBulkOp;
+module.exports.Bulk = OrderedBulkOperation;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/bulk/unordered.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/bulk/unordered.js
new file mode 100644
index 0000000..ecb91db
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/bulk/unordered.js
@@ -0,0 +1,542 @@
+"use strict";
+
+var common = require('./common')
+ , utils = require('../utils')
+ , toError = require('../utils').toError
+ , f = require('util').format
+ , handleCallback = require('../utils').handleCallback
+ , shallowClone = utils.shallowClone
+ , WriteError = common.WriteError
+ , BulkWriteResult = common.BulkWriteResult
+ , LegacyOp = common.LegacyOp
+ , ObjectID = require('mongodb-core').BSON.ObjectID
+ , BSON = require('mongodb-core').BSON
+ , Define = require('../metadata')
+ , Batch = common.Batch
+ , mergeBatchResults = common.mergeBatchResults;
+
+var bson = new BSON.BSONPure();
+
+/**
+ * Create a FindOperatorsUnordered instance (INTERNAL TYPE, do not instantiate directly)
+ * @class
+ * @property {number} length Get the number of operations in the bulk.
+ * @return {FindOperatorsUnordered} a FindOperatorsUnordered instance.
+ */
+var FindOperatorsUnordered = function(self) {
+ this.s = self.s;
+}
+
+/**
+ * Add a single update document to the bulk operation
+ *
+ * @method
+ * @param {object} doc update operations
+ * @throws {MongoError}
+ * @return {UnorderedBulkOperation}
+ */
+FindOperatorsUnordered.prototype.update = function(updateDocument) {
+ // Perform upsert
+ var upsert = typeof this.s.currentOp.upsert == 'boolean' ? this.s.currentOp.upsert : false;
+
+ // Establish the update command
+ var document = {
+ q: this.s.currentOp.selector
+ , u: updateDocument
+ , multi: true
+ , upsert: upsert
+ }
+
+ // Clear out current Op
+ this.s.currentOp = null;
+ // Add the update document to the list
+ return addToOperationsList(this, common.UPDATE, document);
+}
+
+/**
+ * Add a single update one document to the bulk operation
+ *
+ * @method
+ * @param {object} doc update operations
+ * @throws {MongoError}
+ * @return {UnorderedBulkOperation}
+ */
+FindOperatorsUnordered.prototype.updateOne = function(updateDocument) {
+ // Perform upsert
+ var upsert = typeof this.s.currentOp.upsert == 'boolean' ? this.s.currentOp.upsert : false;
+
+ // Establish the update command
+ var document = {
+ q: this.s.currentOp.selector
+ , u: updateDocument
+ , multi: false
+ , upsert: upsert
+ }
+
+ // Clear out current Op
+ this.s.currentOp = null;
+ // Add the update document to the list
+ return addToOperationsList(this, common.UPDATE, document);
+}
+
+/**
+ * Add a replace one operation to the bulk operation
+ *
+ * @method
+ * @param {object} doc the new document to replace the existing one with
+ * @throws {MongoError}
+ * @return {UnorderedBulkOperation}
+ */
+FindOperatorsUnordered.prototype.replaceOne = function(updateDocument) {
+  return this.updateOne(updateDocument);
+}
+
+/**
+ * Upsert modifier for update bulk operation
+ *
+ * @method
+ * @throws {MongoError}
+ * @return {UnorderedBulkOperation}
+ */
+FindOperatorsUnordered.prototype.upsert = function() {
+ this.s.currentOp.upsert = true;
+ return this;
+}
+
+/**
+ * Add a remove one operation to the bulk operation
+ *
+ * @method
+ * @throws {MongoError}
+ * @return {UnorderedBulkOperation}
+ */
+FindOperatorsUnordered.prototype.removeOne = function() {
+ // Establish the update command
+ var document = {
+ q: this.s.currentOp.selector
+ , limit: 1
+ }
+
+ // Clear out current Op
+ this.s.currentOp = null;
+ // Add the remove document to the list
+ return addToOperationsList(this, common.REMOVE, document);
+}
+
+/**
+ * Add a remove operation to the bulk operation
+ *
+ * @method
+ * @throws {MongoError}
+ * @return {UnorderedBulkOperation}
+ */
+FindOperatorsUnordered.prototype.remove = function() {
+ // Establish the update command
+ var document = {
+ q: this.s.currentOp.selector
+ , limit: 0
+ }
+
+ // Clear out current Op
+ this.s.currentOp = null;
+ // Add the remove document to the list
+ return addToOperationsList(this, common.REMOVE, document);
+}
+
+//
+// Add to the operations list
+//
+var addToOperationsList = function(_self, docType, document) {
+ // Get the bsonSize
+ var bsonSize = bson.calculateObjectSize(document, false);
+ // Throw error if the doc is bigger than the max BSON size
+ if(bsonSize >= _self.s.maxBatchSizeBytes) throw toError("document is larger than the maximum size " + _self.s.maxBatchSizeBytes);
+ // Holds the current batch
+ _self.s.currentBatch = null;
+ // Get the right type of batch
+ if(docType == common.INSERT) {
+ _self.s.currentBatch = _self.s.currentInsertBatch;
+ } else if(docType == common.UPDATE) {
+ _self.s.currentBatch = _self.s.currentUpdateBatch;
+ } else if(docType == common.REMOVE) {
+ _self.s.currentBatch = _self.s.currentRemoveBatch;
+ }
+
+ // Create a new batch object if we don't have a current one
+ if(_self.s.currentBatch == null) _self.s.currentBatch = new Batch(docType, _self.s.currentIndex);
+
+ // Check if we need to create a new batch
+ if(((_self.s.currentBatch.size + 1) >= _self.s.maxWriteBatchSize)
+ || ((_self.s.currentBatch.sizeBytes + bsonSize) >= _self.s.maxBatchSizeBytes)
+ || (_self.s.currentBatch.batchType != docType)) {
+ // Save the batch to the execution stack
+ _self.s.batches.push(_self.s.currentBatch);
+
+ // Create a new batch
+ _self.s.currentBatch = new Batch(docType, _self.s.currentIndex);
+ }
+
+ // We have an array of documents
+ if(Array.isArray(document)) {
+ throw toError("operation passed in cannot be an Array");
+ } else {
+ _self.s.currentBatch.operations.push(document);
+ _self.s.currentBatch.originalIndexes.push(_self.s.currentIndex);
+ _self.s.currentIndex = _self.s.currentIndex + 1;
+ }
+
+ // Save back the current Batch to the right type
+ if(docType == common.INSERT) {
+ _self.s.currentInsertBatch = _self.s.currentBatch;
+ _self.s.bulkResult.insertedIds.push({index: _self.s.currentIndex, _id: document._id});
+ } else if(docType == common.UPDATE) {
+ _self.s.currentUpdateBatch = _self.s.currentBatch;
+ } else if(docType == common.REMOVE) {
+ _self.s.currentRemoveBatch = _self.s.currentBatch;
+ }
+
+ // Update current batch size
+ _self.s.currentBatch.size = _self.s.currentBatch.size + 1;
+ _self.s.currentBatch.sizeBytes = _self.s.currentBatch.sizeBytes + bsonSize;
+
+ // Return self
+ return _self;
+}
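The splitting rule in `addToOperationsList` above — seal the current batch and start a new one when adding a document would hit the server's operation-count or byte-size limit — can be sketched as a standalone function. This is illustrative only; `maxOps` and `maxBytes` stand in for the driver's `maxWriteBatchSize` and `maxBatchSizeBytes`:

```javascript
// Hypothetical sketch of the batch-splitting rule: a new batch is started
// when the next document would reach the operation count limit or the
// accumulated byte size limit, mirroring the checks in addToOperationsList.
function splitIntoBatches(docs, sizeOf, maxOps, maxBytes) {
  var batches = [];
  var current = { operations: [], sizeBytes: 0 };
  docs.forEach(function(doc) {
    var bytes = sizeOf(doc);
    if (current.operations.length + 1 >= maxOps ||
        current.sizeBytes + bytes >= maxBytes) {
      batches.push(current);                      // seal the full batch
      current = { operations: [], sizeBytes: 0 }; // and start a new one
    }
    current.operations.push(doc);
    current.sizeBytes += bytes;
  });
  batches.push(current);
  return batches;
}

var batches = splitIntoBatches([{len: 4}, {len: 4}, {len: 4}],
  function(d) { return d.len; }, 1000, 10);
console.log(batches.map(function(b) { return b.operations.length; })); // [ 2, 1 ]
```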
+
+/**
+ * Create a new UnorderedBulkOperation instance (INTERNAL TYPE, do not instantiate directly)
+ * @class
+ * @return {UnorderedBulkOperation} a UnorderedBulkOperation instance.
+ */
+var UnorderedBulkOperation = function(topology, collection, options) {
+ options = options == null ? {} : options;
+
+ // Contains reference to self
+ var self = this;
+  // Get the namespace for the write operations
+ var namespace = collection.collectionName;
+ // Used to mark operation as executed
+ var executed = false;
+
+ // Current item
+ // var currentBatch = null;
+ var currentOp = null;
+ var currentIndex = 0;
+ var batches = [];
+
+ // The current Batches for the different operations
+ var currentInsertBatch = null;
+ var currentUpdateBatch = null;
+ var currentRemoveBatch = null;
+
+ // Handle to the bson serializer, used to calculate running sizes
+ var bson = topology.bson;
+
+ // Set max byte size
+ var maxBatchSizeBytes = topology.isMasterDoc && topology.isMasterDoc.maxBsonObjectSize
+    ? topology.isMasterDoc.maxBsonObjectSize : (1024*1024*16);
+ var maxWriteBatchSize = topology.isMasterDoc && topology.isMasterDoc.maxWriteBatchSize
+ ? topology.isMasterDoc.maxWriteBatchSize : 1000;
+
+ // Get the write concern
+ var writeConcern = common.writeConcern(shallowClone(options), collection, options);
+
+ // Get the promiseLibrary
+ var promiseLibrary = options.promiseLibrary;
+
+  // No promise library selected, fall back
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // Final results
+ var bulkResult = {
+ ok: 1
+ , writeErrors: []
+ , writeConcernErrors: []
+ , insertedIds: []
+ , nInserted: 0
+ , nUpserted: 0
+ , nMatched: 0
+ , nModified: 0
+ , nRemoved: 0
+ , upserted: []
+ };
+
+ // Internal state
+ this.s = {
+ // Final result
+ bulkResult: bulkResult
+ // Current batch state
+ , currentInsertBatch: null
+ , currentUpdateBatch: null
+ , currentRemoveBatch: null
+ , currentBatch: null
+ , currentIndex: 0
+ , batches: []
+ // Write concern
+ , writeConcern: writeConcern
+ // Max batch size options
+ , maxBatchSizeBytes: maxBatchSizeBytes
+ , maxWriteBatchSize: maxWriteBatchSize
+ // Namespace
+ , namespace: namespace
+ // BSON
+ , bson: bson
+ // Topology
+ , topology: topology
+ // Options
+ , options: options
+ // Current operation
+ , currentOp: currentOp
+ // Executed
+ , executed: executed
+ // Collection
+ , collection: collection
+ // Promise Library
+ , promiseLibrary: promiseLibrary
+ // Bypass validation
+ , bypassDocumentValidation: typeof options.bypassDocumentValidation == 'boolean' ? options.bypassDocumentValidation : false
+ }
+}
+
+var define = UnorderedBulkOperation.define = new Define('UnorderedBulkOperation', UnorderedBulkOperation, false);
+
+/**
+ * Add a single insert document to the bulk operation
+ *
+ * @param {object} doc the document to insert
+ * @throws {MongoError}
+ * @return {UnorderedBulkOperation}
+ */
+UnorderedBulkOperation.prototype.insert = function(document) {
+ if(this.s.collection.s.db.options.forceServerObjectId !== true && document._id == null) document._id = new ObjectID();
+ return addToOperationsList(this, common.INSERT, document);
+}
+
+/**
+ * Initiate a find operation for an update/updateOne/remove/removeOne/replaceOne
+ *
+ * @method
+ * @param {object} selector The selector for the bulk operation.
+ * @throws {MongoError}
+ * @return {FindOperatorsUnordered}
+ */
+UnorderedBulkOperation.prototype.find = function(selector) {
+ if (!selector) {
+ throw toError("Bulk find operation must specify a selector");
+ }
+
+ // Save a current selector
+ this.s.currentOp = {
+ selector: selector
+ }
+
+ return new FindOperatorsUnordered(this);
+}
+
+Object.defineProperty(UnorderedBulkOperation.prototype, 'length', {
+ enumerable: true,
+ get: function() {
+ return this.s.currentIndex;
+ }
+});
+
+UnorderedBulkOperation.prototype.raw = function(op) {
+ var key = Object.keys(op)[0];
+
+ // Set up the force server object id
+ var forceServerObjectId = typeof this.s.options.forceServerObjectId == 'boolean'
+ ? this.s.options.forceServerObjectId : this.s.collection.s.db.options.forceServerObjectId;
+
+ // Update operations
+ if((op.updateOne && op.updateOne.q)
+ || (op.updateMany && op.updateMany.q)
+ || (op.replaceOne && op.replaceOne.q)) {
+ op[key].multi = op.updateOne || op.replaceOne ? false : true;
+ return addToOperationsList(this, common.UPDATE, op[key]);
+ }
+
+ // Crud spec update format
+ if(op.updateOne || op.updateMany || op.replaceOne) {
+ var multi = op.updateOne || op.replaceOne ? false : true;
+ var operation = {q: op[key].filter, u: op[key].update || op[key].replacement, multi: multi}
+ if(op[key].upsert) operation.upsert = true;
+ return addToOperationsList(this, common.UPDATE, operation);
+ }
+
+ // Remove operations
+  if(op.removeOne || op.removeMany || (op.deleteOne && op.deleteOne.q) || (op.deleteMany && op.deleteMany.q)) {
+ op[key].limit = op.removeOne ? 1 : 0;
+ return addToOperationsList(this, common.REMOVE, op[key]);
+ }
+
+ // Crud spec delete operations, less efficient
+ if(op.deleteOne || op.deleteMany) {
+ var limit = op.deleteOne ? 1 : 0;
+ var operation = {q: op[key].filter, limit: limit}
+ return addToOperationsList(this, common.REMOVE, operation);
+ }
+
+ // Insert operations
+ if(op.insertOne && op.insertOne.document == null) {
+ if(forceServerObjectId !== true && op.insertOne._id == null) op.insertOne._id = new ObjectID();
+ return addToOperationsList(this, common.INSERT, op.insertOne);
+ } else if(op.insertOne && op.insertOne.document) {
+ if(forceServerObjectId !== true && op.insertOne.document._id == null) op.insertOne.document._id = new ObjectID();
+ return addToOperationsList(this, common.INSERT, op.insertOne.document);
+ }
+
+ if(op.insertMany) {
+ for(var i = 0; i < op.insertMany.length; i++) {
+ if(forceServerObjectId !== true && op.insertMany[i]._id == null) op.insertMany[i]._id = new ObjectID();
+ addToOperationsList(this, common.INSERT, op.insertMany[i]);
+ }
+
+ return;
+ }
+
+ // No valid type of operation
+ throw toError("bulkWrite only supports insertOne, insertMany, updateOne, updateMany, removeOne, removeMany, deleteOne, deleteMany");
+}
+
+//
+// Execute the command
+var executeBatch = function(self, batch, callback) {
+ var finalOptions = {ordered: false}
+ if(self.s.writeConcern != null) {
+ finalOptions.writeConcern = self.s.writeConcern;
+ }
+
+ var resultHandler = function(err, result) {
+ // Error is a driver related error not a bulk op error, terminate
+ if(err && err.driver || err && err.message) {
+ return handleCallback(callback, err);
+ }
+
+    // If we have an error
+ if(err) err.ok = 0;
+ handleCallback(callback, null, mergeBatchResults(false, batch, self.s.bulkResult, err, result));
+ }
+
+  // Set an operationId if provided
+ if(self.operationId) {
+ resultHandler.operationId = self.operationId;
+ }
+
+ // Serialize functions
+ if(self.s.options.serializeFunctions) {
+ finalOptions.serializeFunctions = true
+ }
+
+  // Pass on the bypassDocumentValidation option if specified
+ if(self.s.bypassDocumentValidation == true) {
+ finalOptions.bypassDocumentValidation = true;
+ }
+
+ try {
+ if(batch.batchType == common.INSERT) {
+ self.s.topology.insert(self.s.collection.namespace, batch.operations, finalOptions, resultHandler);
+ } else if(batch.batchType == common.UPDATE) {
+ self.s.topology.update(self.s.collection.namespace, batch.operations, finalOptions, resultHandler);
+ } else if(batch.batchType == common.REMOVE) {
+ self.s.topology.remove(self.s.collection.namespace, batch.operations, finalOptions, resultHandler);
+ }
+ } catch(err) {
+ // Force top level error
+ err.ok = 0;
+ // Merge top level error and return
+ handleCallback(callback, null, mergeBatchResults(false, batch, self.s.bulkResult, err, null));
+ }
+}
+
+//
+// Execute all the commands
+var executeBatches = function(self, callback) {
+ var numberOfCommandsToExecute = self.s.batches.length;
+ var error = null;
+ // Execute over all the batches
+ for(var i = 0; i < self.s.batches.length; i++) {
+ executeBatch(self, self.s.batches[i], function(err, result) {
+ // Driver layer error capture it
+ if(err) error = err;
+ // Count down the number of commands left to execute
+ numberOfCommandsToExecute = numberOfCommandsToExecute - 1;
+
+ // Execute
+ if(numberOfCommandsToExecute == 0) {
+ // Driver level error
+ if(error) return handleCallback(callback, error);
+ // Treat write errors
+        var writeError = self.s.bulkResult.writeErrors.length > 0 ? toError(self.s.bulkResult.writeErrors[0]) : null;
+        handleCallback(callback, writeError, new BulkWriteResult(self.s.bulkResult));
+ }
+ });
+ }
+}
+
+/**
+ * The callback format for results
+ * @callback UnorderedBulkOperation~resultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {BulkWriteResult} result The bulk write result.
+ */
+
+/**
+ * Execute the unordered bulk operation
+ *
+ * @method
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.fsync=false] Specify a file sync write concern.
+ * @param {UnorderedBulkOperation~resultCallback} [callback] The result callback
+ * @throws {MongoError}
+ * @return {Promise} returns Promise if no callback passed
+ */
+UnorderedBulkOperation.prototype.execute = function(_writeConcern, callback) {
+ var self = this;
+ if(this.s.executed) throw toError("batch cannot be re-executed");
+ if(typeof _writeConcern == 'function') {
+ callback = _writeConcern;
+ } else {
+ this.s.writeConcern = _writeConcern;
+ }
+
+ // If we have current batch
+ if(this.s.currentInsertBatch) this.s.batches.push(this.s.currentInsertBatch);
+ if(this.s.currentUpdateBatch) this.s.batches.push(this.s.currentUpdateBatch);
+ if(this.s.currentRemoveBatch) this.s.batches.push(this.s.currentRemoveBatch);
+
+ // If we have no operations in the bulk raise an error
+ if(this.s.batches.length == 0) {
+ throw toError("Invalid Operation, No operations in bulk");
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') return executeBatches(this, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ executeBatches(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('execute', {callback: true, promise:false});
+
+/**
+ * Returns an unordered batch object
+ * @ignore
+ */
+var initializeUnorderedBulkOp = function(topology, collection, options) {
+ return new UnorderedBulkOperation(topology, collection, options);
+}
+
+initializeUnorderedBulkOp.UnorderedBulkOperation = UnorderedBulkOperation;
+module.exports = initializeUnorderedBulkOp;
+module.exports.Bulk = UnorderedBulkOperation;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/collection.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/collection.js
new file mode 100644
index 0000000..bae5d5c
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/collection.js
@@ -0,0 +1,3360 @@
+"use strict";
+
+var checkCollectionName = require('./utils').checkCollectionName
+ , ObjectID = require('mongodb-core').BSON.ObjectID
+ , Long = require('mongodb-core').BSON.Long
+ , Code = require('mongodb-core').BSON.Code
+ , f = require('util').format
+ , AggregationCursor = require('./aggregation_cursor')
+ , MongoError = require('mongodb-core').MongoError
+ , shallowClone = require('./utils').shallowClone
+ , isObject = require('./utils').isObject
+ , toError = require('./utils').toError
+ , normalizeHintField = require('./utils').normalizeHintField
+ , handleCallback = require('./utils').handleCallback
+ , decorateCommand = require('./utils').decorateCommand
+ , formattedOrderClause = require('./utils').formattedOrderClause
+ , ReadPreference = require('./read_preference')
+ , CoreReadPreference = require('mongodb-core').ReadPreference
+ , CommandCursor = require('./command_cursor')
+ , Define = require('./metadata')
+ , Cursor = require('./cursor')
+ , unordered = require('./bulk/unordered')
+ , ordered = require('./bulk/ordered')
+ , assign = require('./utils').assign;
+
+/**
+ * @fileOverview The **Collection** class is an internal class that embodies a MongoDB collection
+ * allowing for insert/update/remove/find and other command operations on that MongoDB collection.
+ *
+ * **COLLECTION Cannot directly be instantiated**
+ * @example
+ * var MongoClient = require('mongodb').MongoClient,
+ * test = require('assert');
+ * // Connection url
+ * var url = 'mongodb://localhost:27017/test';
+ * // Connect using MongoClient
+ * MongoClient.connect(url, function(err, db) {
+ * // Create a collection we want to drop later
+ * var col = db.collection('createIndexExample1');
+ * // Show that duplicate records got dropped
+ * col.find({}).toArray(function(err, items) {
+ * test.equal(null, err);
+ * test.equal(4, items.length);
+ * db.close();
+ * });
+ * });
+ */
+
+/**
+ * Create a new Collection instance (INTERNAL TYPE, do not instantiate directly)
+ * @class
+ * @property {string} collectionName Get the collection name.
+ * @property {string} namespace Get the full collection namespace.
+ * @property {object} writeConcern The current write concern values.
+ * @property {object} readConcern The current read concern values.
+ * @property {object} hint Get current index hint for collection.
+ * @return {Collection} a Collection instance.
+ */
+var Collection = function(db, topology, dbName, name, pkFactory, options) {
+ checkCollectionName(name);
+ var self = this;
+ // Unpack variables
+ var internalHint = null;
+ var opts = options != null && ('object' === typeof options) ? options : {};
+ var slaveOk = options == null || options.slaveOk == null ? db.slaveOk : options.slaveOk;
+ var serializeFunctions = options == null || options.serializeFunctions == null ? db.s.options.serializeFunctions : options.serializeFunctions;
+ var raw = options == null || options.raw == null ? db.s.options.raw : options.raw;
+ var promoteLongs = options == null || options.promoteLongs == null ? db.s.options.promoteLongs : options.promoteLongs;
+ var promoteValues = options == null || options.promoteValues == null ? db.s.options.promoteValues : options.promoteValues;
+ var promoteBuffers = options == null || options.promoteBuffers == null ? db.s.options.promoteBuffers : options.promoteBuffers;
+ var readPreference = null;
+ var collectionHint = null;
+ var namespace = f("%s.%s", dbName, name);
+
+ // Get the promiseLibrary
+ var promiseLibrary = options.promiseLibrary;
+
+  // No promise library selected, fall back
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // Assign the right collection level readPreference
+ if(options && options.readPreference) {
+ readPreference = options.readPreference;
+ } else if(db.options.readPreference) {
+ readPreference = db.options.readPreference;
+ }
+
+ // Set custom primary key factory if provided
+ pkFactory = pkFactory == null
+ ? ObjectID
+ : pkFactory;
+
+ // Internal state
+ this.s = {
+ // Set custom primary key factory if provided
+ pkFactory: pkFactory
+ // Db
+ , db: db
+ // Topology
+ , topology: topology
+ // dbName
+ , dbName: dbName
+ // Options
+ , options: options
+ // Namespace
+ , namespace: namespace
+ // Read preference
+ , readPreference: readPreference
+ // SlaveOK
+ , slaveOk: slaveOk
+ // Serialize functions
+ , serializeFunctions: serializeFunctions
+ // Raw
+ , raw: raw
+ // promoteLongs
+ , promoteLongs: promoteLongs
+ // promoteValues
+ , promoteValues: promoteValues
+ // promoteBuffers
+ , promoteBuffers: promoteBuffers
+ // internalHint
+ , internalHint: internalHint
+ // collectionHint
+ , collectionHint: collectionHint
+ // Name
+ , name: name
+ // Promise library
+ , promiseLibrary: promiseLibrary
+ // Read Concern
+ , readConcern: options.readConcern
+ }
+}
+
+var define = Collection.define = new Define('Collection', Collection, false);
+
+Object.defineProperty(Collection.prototype, 'collectionName', {
+ enumerable: true, get: function() { return this.s.name; }
+});
+
+Object.defineProperty(Collection.prototype, 'namespace', {
+ enumerable: true, get: function() { return this.s.namespace; }
+});
+
+Object.defineProperty(Collection.prototype, 'readConcern', {
+ enumerable: true, get: function() { return this.s.readConcern || {level: 'local'}; }
+});
+
+Object.defineProperty(Collection.prototype, 'writeConcern', {
+ enumerable:true,
+ get: function() {
+ var ops = {};
+ if(this.s.options.w != null) ops.w = this.s.options.w;
+ if(this.s.options.j != null) ops.j = this.s.options.j;
+ if(this.s.options.fsync != null) ops.fsync = this.s.options.fsync;
+ if(this.s.options.wtimeout != null) ops.wtimeout = this.s.options.wtimeout;
+ return ops;
+ }
+});
+
+/**
+ * @ignore
+ */
+Object.defineProperty(Collection.prototype, "hint", {
+ enumerable: true
+ , get: function () { return this.s.collectionHint; }
+ , set: function (v) { this.s.collectionHint = normalizeHintField(v); }
+});
+
+/**
+ * Creates a cursor for a query that can be used to iterate over results from MongoDB
+ * @method
+ * @param {object} query The cursor query object.
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+Collection.prototype.find = function() {
+ var options
+ , args = Array.prototype.slice.call(arguments, 0)
+ , has_callback = typeof args[args.length - 1] === 'function'
+ , has_weird_callback = typeof args[0] === 'function'
+ , callback = has_callback ? args.pop() : (has_weird_callback ? args.shift() : null)
+ , len = args.length
+ , selector = len >= 1 ? args[0] : {}
+ , fields = len >= 2 ? args[1] : undefined;
+
+ if(len === 1 && has_weird_callback) {
+ // backwards compat for callback?, options case
+ selector = {};
+ options = args[0];
+ }
+
+ if(len === 2 && fields !== undefined && !Array.isArray(fields)) {
+ var fieldKeys = Object.keys(fields);
+ var is_option = false;
+
+ for(var i = 0; i < fieldKeys.length; i++) {
+ if(testForFields[fieldKeys[i]] != null) {
+ is_option = true;
+ break;
+ }
+ }
+
+ if(is_option) {
+ options = fields;
+ fields = undefined;
+ } else {
+ options = {};
+ }
+ } else if(len === 2 && Array.isArray(fields) && !Array.isArray(fields[0])) {
+ var newFields = {};
+ // Rewrite the array
+ for(var i = 0; i < fields.length; i++) {
+ newFields[fields[i]] = 1;
+ }
+ // Set the fields
+ fields = newFields;
+ }
+
+ if(3 === len) {
+ options = args[2];
+ }
+
+ // Ensure selector is not null
+ selector = selector == null ? {} : selector;
+ // Validate correctness of the selector
+ var object = selector;
+ if(Buffer.isBuffer(object)) {
+ var object_size = object[0] | object[1] << 8 | object[2] << 16 | object[3] << 24;
+ if(object_size != object.length) {
+ var error = new Error("query selector raw message size does not match message header size [" + object.length + "] != [" + object_size + "]");
+ error.name = 'MongoError';
+ throw error;
+ }
+ }
+
+ // Validate correctness of the field selector
+ var object = fields;
+ if(Buffer.isBuffer(object)) {
+ var object_size = object[0] | object[1] << 8 | object[2] << 16 | object[3] << 24;
+ if(object_size != object.length) {
+ var error = new Error("query fields raw message size does not match message header size [" + object.length + "] != [" + object_size + "]");
+ error.name = 'MongoError';
+ throw error;
+ }
+ }
+
+ // Check special case where we are using an objectId
+ if(selector instanceof ObjectID || (selector != null && selector._bsontype == 'ObjectID')) {
+ selector = {_id:selector};
+ }
+
+ // If options.fields is a serialized (Buffer) projection, let it through
+ // untouched; the caller is responsible for its validity
+ if(options && options.fields && !(Buffer.isBuffer(options.fields))) {
+ fields = {};
+
+ if(Array.isArray(options.fields)) {
+ if(!options.fields.length) {
+ fields['_id'] = 1;
+ } else {
+ for (var i = 0, l = options.fields.length; i < l; i++) {
+ fields[options.fields[i]] = 1;
+ }
+ }
+ } else {
+ fields = options.fields;
+ }
+ }
+
+ if (!options) options = {};
+
+ var newOptions = {};
+ // Make a shallow copy of options
+ for (var key in options) {
+ newOptions[key] = options[key];
+ }
+
+ // Unpack options
+ newOptions.skip = len > 3 ? args[2] : options.skip ? options.skip : 0;
+ newOptions.limit = len > 3 ? args[3] : options.limit ? options.limit : 0;
+ newOptions.raw = options.raw != null && typeof options.raw === 'boolean' ? options.raw : this.s.raw;
+ newOptions.hint = options.hint != null ? normalizeHintField(options.hint) : this.s.collectionHint;
+ newOptions.timeout = len == 5 ? args[4] : typeof options.timeout === 'undefined' ? undefined : options.timeout;
+ // If we have overridden slaveOk, use it; otherwise use the default db setting
+ newOptions.slaveOk = options.slaveOk != null ? options.slaveOk : this.s.db.slaveOk;
+
+ // Add read preference if needed
+ newOptions = getReadPreference(this, newOptions, this.s.db, this);
+
+ // Set slaveOk to true only if the read preference is neither the string
+ // 'primary' nor an object with mode 'primary'
+ if(newOptions.readPreference != null
+ && (newOptions.readPreference != 'primary' && newOptions.readPreference.mode != 'primary')) {
+ newOptions.slaveOk = true;
+ }
+
+ // Ensure the query is an object
+ if(selector != null && typeof selector != 'object') {
+ throw MongoError.create({message: "query selector must be an object", driver:true });
+ }
+
+ // Build the find command
+ var findCommand = {
+ find: this.s.namespace
+ , limit: newOptions.limit
+ , skip: newOptions.skip
+ , query: selector
+ }
+
+ // Ensure we use the right await data option
+ if(typeof newOptions.awaitdata == 'boolean') {
+ newOptions.awaitData = newOptions.awaitdata;
+ }
+
+ // Translate to new command option noCursorTimeout
+ if(typeof newOptions.timeout == 'boolean') newOptions.noCursorTimeout = newOptions.timeout;
+
+ // Merge in options to command
+ for(var name in newOptions) {
+ if(newOptions[name] != null) findCommand[name] = newOptions[name];
+ }
+
+ // Format the fields
+ var formatFields = function(fields) {
+ var object = {};
+ if(Array.isArray(fields)) {
+ for(var i = 0; i < fields.length; i++) {
+ if(Array.isArray(fields[i])) {
+ object[fields[i][0]] = fields[i][1];
+ } else {
+ object[fields[i][0]] = 1;
+ }
+ }
+ } else {
+ object = fields;
+ }
+
+ return object;
+ }
+
+ // Special treatment for the fields selector
+ if(fields) findCommand.fields = formatFields(fields);
+
+ // Add db object to the new options
+ newOptions.db = this.s.db;
+
+ // Add the promise library
+ newOptions.promiseLibrary = this.s.promiseLibrary;
+
+ // Set raw if available at collection level
+ if(newOptions.raw == null && typeof this.s.raw == 'boolean') newOptions.raw = this.s.raw;
+ // Set promoteLongs if available at collection level
+ if(newOptions.promoteLongs == null && typeof this.s.promoteLongs == 'boolean') newOptions.promoteLongs = this.s.promoteLongs;
+ if(newOptions.promoteValues == null && typeof this.s.promoteValues == 'boolean') newOptions.promoteValues = this.s.promoteValues;
+ if(newOptions.promoteBuffers == null && typeof this.s.promoteBuffers == 'boolean') newOptions.promoteBuffers = this.s.promoteBuffers;
+
+ // Sort options
+ if(findCommand.sort) {
+ findCommand.sort = formattedOrderClause(findCommand.sort);
+ }
+
+ // Set the readConcern
+ if(this.s.readConcern) {
+ findCommand.readConcern = this.s.readConcern;
+ }
+
+ // Decorate find command with collation options
+ decorateWithCollation(findCommand, this, options);
+
+ // Create the cursor
+ if(typeof callback == 'function') return handleCallback(callback, null, this.s.topology.cursor(this.s.namespace, findCommand, newOptions));
+ return this.s.topology.cursor(this.s.namespace, findCommand, newOptions);
+}
+
+define.classMethod('find', {callback: false, promise:false, returns: [Cursor]});
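When `find` receives an array of field names (e.g. `['name', 'age']`) instead of a projection document, the loop above rewrites it into `{name: 1, age: 1}`. A standalone sketch of that rewrite:

```javascript
// Rewrite an array of field names into a MongoDB projection document,
// as find() does when its fields argument is an array of strings.
function fieldsArrayToProjection(fields) {
  var projection = {};
  for (var i = 0; i < fields.length; i++) {
    projection[fields[i]] = 1;
  }
  return projection;
}

var projection = fieldsArrayToProjection(['name', 'age']);
// projection is { name: 1, age: 1 }
```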
+
+/**
+ * Inserts a single document into MongoDB. If documents passed in do not contain the **_id** field,
+ * one will be added to each of the documents missing it by the driver, mutating the document. This behavior
+ * can be overridden by setting the **forceServerObjectId** flag.
+ *
+ * @method
+ * @param {object} doc Document to insert.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.serializeFunctions=false] Serialize functions on any object.
+ * @param {boolean} [options.forceServerObjectId=false] Force server to assign _id values instead of driver.
+ * @param {boolean} [options.bypassDocumentValidation=false] Allow driver to bypass schema validation in MongoDB 3.2 or higher.
+ * @param {Collection~insertOneWriteOpCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.insertOne = function(doc, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+ if(Array.isArray(doc) && typeof callback == 'function') {
+ return callback(MongoError.create({message: 'doc parameter must be an object', driver:true }));
+ } else if(Array.isArray(doc)) {
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ reject(MongoError.create({message: 'doc parameter must be an object', driver:true }));
+ });
+ }
+
+ // Add ignoreUndefined
+ if(this.s.options.ignoreUndefined) {
+ options = shallowClone(options);
+ options.ignoreUndefined = this.s.options.ignoreUndefined;
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') return insertOne(self, doc, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ insertOne(self, doc, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var insertOne = function(self, doc, options, callback) {
+ insertDocuments(self, [doc], options, function(err, r) {
+ if(callback == null) return;
+ if(err && callback) return callback(err);
+ // Workaround for pre 2.6 servers
+ if(r == null) return callback(null, {result: {ok:1}});
+ // Add values to top level to ensure crud spec compatibility
+ r.insertedCount = r.result.n;
+ r.insertedId = doc._id;
+ if(callback) callback(null, r);
+ });
+}
+
+var mapInserManyResults = function(docs, r) {
+ var ids = r.getInsertedIds();
+ var keys = Object.keys(ids);
+ var finalIds = new Array(keys.length);
+
+ for(var i = 0; i < keys.length; i++) {
+ if(ids[keys[i]]._id) {
+ finalIds[ids[keys[i]].index] = ids[keys[i]]._id;
+ }
+ }
+
+ var finalResult = {
+ result: {ok: 1, n: r.insertedCount},
+ ops: docs,
+ insertedCount: r.insertedCount,
+ insertedIds: finalIds
+ };
+
+ if(r.getLastOp()) {
+ finalResult.result.opTime = r.getLastOp();
+ }
+
+ return finalResult;
+}
+
+define.classMethod('insertOne', {callback: true, promise:true});
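The `insertOne` helper decorates the raw write result with the CRUD-spec fields `insertedCount` and `insertedId`. A sketch of that decoration with a stubbed result object (the function and stub names here are illustrative, not driver API):

```javascript
// Decorate a raw insert result with CRUD-spec fields, as the insertOne
// helper does. `doc` has already had its _id assigned by the driver.
function decorateInsertOneResult(r, doc) {
  r.insertedCount = r.result.n;
  r.insertedId = doc._id;
  return r;
}

var r = decorateInsertOneResult({ result: { ok: 1, n: 1 } }, { _id: 42, a: 1 });
// r.insertedCount === 1, r.insertedId === 42
```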
+
+/**
+ * Inserts an array of documents into MongoDB. If documents passed in do not contain the **_id** field,
+ * one will be added to each of the documents missing it by the driver, mutating the document. This behavior
+ * can be overridden by setting the **forceServerObjectId** flag.
+ *
+ * @method
+ * @param {object[]} docs Documents to insert.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.serializeFunctions=false] Serialize functions on any object.
+ * @param {boolean} [options.forceServerObjectId=false] Force server to assign _id values instead of driver.
+ * @param {boolean} [options.bypassDocumentValidation=false] Allow driver to bypass schema validation in MongoDB 3.2 or higher.
+ * @param {Collection~insertWriteOpCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.insertMany = function(docs, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {ordered:true};
+ if(!Array.isArray(docs) && typeof callback == 'function') {
+ return callback(MongoError.create({message: 'docs parameter must be an array of documents', driver:true }));
+ } else if(!Array.isArray(docs)) {
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ reject(MongoError.create({message: 'docs parameter must be an array of documents', driver:true }));
+ });
+ }
+
+ // Get the write concern options
+ if(typeof options.checkKeys != 'boolean') {
+ options.checkKeys = true;
+ }
+
+ // Use serializeFunctions from the options or the collection-level default
+ options['serializeFunctions'] = options['serializeFunctions'] || self.s.serializeFunctions;
+
+ // Set up the force server object id
+ var forceServerObjectId = typeof options.forceServerObjectId == 'boolean'
+ ? options.forceServerObjectId : self.s.db.options.forceServerObjectId;
+
+ // Do we want to force the server to assign the _id key
+ if(forceServerObjectId !== true) {
+ // Add _id if not specified
+ for(var i = 0; i < docs.length; i++) {
+ if(docs[i]._id == null) docs[i]._id = self.s.pkFactory.createPk();
+ }
+ }
+
+ // Generate the bulk write operations
+ var operations = [{
+ insertMany: docs
+ }];
+
+ // Execute using callback
+ if(typeof callback == 'function') return bulkWrite(self, operations, options, function(err, r) {
+ if(err) return callback(err, r);
+ callback(null, mapInserManyResults(docs, r));
+ });
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ bulkWrite(self, operations, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(mapInserManyResults(docs, r));
+ });
+ });
+}
+
+define.classMethod('insertMany', {callback: true, promise:true});
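`insertMany` funnels the documents through `bulkWrite` as a single `{insertMany: docs}` operation and then reshapes the bulk result, ordering the generated `_id` values by the index of the originating operation. A standalone sketch of that reordering (stubbed bulk-result shape, illustrative names):

```javascript
// Reorder inserted _ids by the index of their originating operation,
// as the insert-many result mapper above does. `ids` is keyed by an
// arbitrary hash key; each entry carries its operation index and _id.
function mapInsertedIds(ids) {
  var keys = Object.keys(ids);
  var finalIds = new Array(keys.length);
  for (var i = 0; i < keys.length; i++) {
    if (ids[keys[i]]._id != null) {
      finalIds[ids[keys[i]].index] = ids[keys[i]]._id;
    }
  }
  return finalIds;
}

var ids = mapInsertedIds({ 0: { index: 1, _id: 'b' }, 1: { index: 0, _id: 'a' } });
// ids is ['a', 'b']
```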
+
+/**
+ * @typedef {Object} Collection~BulkWriteOpResult
+ * @property {number} insertedCount Number of documents inserted.
+ * @property {number} matchedCount Number of documents matched for update.
+ * @property {number} modifiedCount Number of documents modified.
+ * @property {number} deletedCount Number of documents deleted.
+ * @property {number} upsertedCount Number of documents upserted.
+ * @property {object} insertedIds The _id values generated for the inserted documents, keyed by the index of the originating operation
+ * @property {object} upsertedIds The _id values of upserted documents, keyed by the index of the originating operation
+ * @property {object} result The command result object.
+ */
+
+/**
+ * The callback format for bulk write operations
+ * @callback Collection~bulkWriteOpCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Collection~BulkWriteOpResult} result The result object if the command was executed successfully.
+ */
+
+/**
+ * Perform a bulkWrite operation without a fluent API
+ *
+ * Legal operation types are
+ *
+ * { insertOne: { document: { a: 1 } } }
+ *
+ * { updateOne: { filter: {a:2}, update: {$set: {a:2}}, upsert:true } }
+ *
+ * { updateMany: { filter: {a:2}, update: {$set: {a:2}}, upsert:true } }
+ *
+ * { deleteOne: { filter: {c:1} } }
+ *
+ * { deleteMany: { filter: {c:1} } }
+ *
+ * { replaceOne: { filter: {c:3}, replacement: {c:4}, upsert:true}}
+ *
+ * If documents passed in do not contain the **_id** field,
+ * one will be added to each of the documents missing it by the driver, mutating the document. This behavior
+ * can be overridden by setting the **forceServerObjectId** flag.
+ *
+ * @method
+ * @param {object[]} operations Bulk operations to perform.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.serializeFunctions=false] Serialize functions on any object.
+ * @param {boolean} [options.ordered=true] Execute write operation in ordered or unordered fashion.
+ * @param {boolean} [options.bypassDocumentValidation=false] Allow driver to bypass schema validation in MongoDB 3.2 or higher.
+ * @param {Collection~bulkWriteOpCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.bulkWrite = function(operations, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {ordered:true};
+
+ if(!Array.isArray(operations)) {
+ throw MongoError.create({message: "operations must be an array of documents", driver:true });
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') return bulkWrite(self, operations, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ bulkWrite(self, operations, options, function(err, r) {
+ if(err && r == null) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var bulkWrite = function(self, operations, options, callback) {
+ // Add ignoreUndefined
+ if(self.s.options.ignoreUndefined) {
+ options = shallowClone(options);
+ options.ignoreUndefined = self.s.options.ignoreUndefined;
+ }
+
+ // Create the bulk operation
+ var bulk = options.ordered == true || options.ordered == null ? self.initializeOrderedBulkOp(options) : self.initializeUnorderedBulkOp(options);
+
+ // Do we have a collation
+ var collation = false;
+
+ // for each op go through and add to the bulk
+ try {
+ for(var i = 0; i < operations.length; i++) {
+ // Get the operation type
+ var key = Object.keys(operations[i])[0];
+ // Check if we have a collation
+ if(operations[i][key].collation) {
+ collation = true;
+ }
+
+ // Pass to the raw bulk
+ bulk.raw(operations[i]);
+ }
+ } catch(err) {
+ return callback(err, null);
+ }
+
+ // Final options for write concern
+ var finalOptions = writeConcern(shallowClone(options), self.s.db, self, options);
+ var writeCon = finalOptions.writeConcern ? finalOptions.writeConcern : {};
+ var capabilities = self.s.topology.capabilities();
+
+ // Did the user pass in a collation, check if our write server supports it
+ if(collation && capabilities && !capabilities.commandsTakeCollation) {
+ return callback(new MongoError(f('server/primary/mongos does not support collation')));
+ }
+
+ // Execute the bulk
+ bulk.execute(writeCon, function(err, r) {
+ // We have connection level error
+ if(!r && err) return callback(err, null);
+ // We have single error
+ if(r && r.hasWriteErrors() && r.getWriteErrorCount() == 1) {
+ return callback(toError(r.getWriteErrorAt(0)), r);
+ }
+
+ r.insertedCount = r.nInserted;
+ r.matchedCount = r.nMatched;
+ r.modifiedCount = r.nModified || 0;
+ r.deletedCount = r.nRemoved;
+ r.upsertedCount = r.getUpsertedIds().length;
+ r.upsertedIds = {};
+ r.insertedIds = {};
+
+ // Update the n
+ r.n = r.insertedCount;
+
+ // Inserted documents
+ var inserted = r.getInsertedIds();
+ // Map inserted ids
+ for(var i = 0; i < inserted.length; i++) {
+ r.insertedIds[inserted[i].index] = inserted[i]._id;
+ }
+
+ // Upserted documents
+ var upserted = r.getUpsertedIds();
+ // Map upserted ids
+ for(var i = 0; i < upserted.length; i++) {
+ r.upsertedIds[upserted[i].index] = upserted[i]._id;
+ }
+
+ // Check if we have write errors
+ if(r.hasWriteErrors()) {
+ // Get all the errors
+ var errors = r.getWriteErrors();
+ // Return the MongoError object
+ return callback(toError({
+ message: 'write operation failed', code: errors[0].code, writeErrors: errors
+ }), r);
+ }
+
+ // Check if we have a writeConcern error
+ if(r.getWriteConcernError()) {
+ // Return the MongoError object
+ return callback(toError(r.getWriteConcernError()), r);
+ }
+
+ // Return the results
+ callback(null, r);
+ });
+}
+
+var insertDocuments = function(self, docs, options, callback) {
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+ // Ensure we are operating on an array of docs
+ docs = Array.isArray(docs) ? docs : [docs];
+
+ // Get the write concern options
+ var finalOptions = writeConcern(shallowClone(options), self.s.db, self, options);
+ if(typeof finalOptions.checkKeys != 'boolean') finalOptions.checkKeys = true;
+
+ // If keep going set unordered
+ if(finalOptions.keepGoing == true) finalOptions.ordered = false;
+ finalOptions['serializeFunctions'] = options['serializeFunctions'] || self.s.serializeFunctions;
+
+ // Set up the force server object id
+ var forceServerObjectId = typeof options.forceServerObjectId == 'boolean'
+ ? options.forceServerObjectId : self.s.db.options.forceServerObjectId;
+
+ // Add _id if not specified
+ if(forceServerObjectId !== true){
+ for(var i = 0; i < docs.length; i++) {
+ if(docs[i]._id == null) docs[i]._id = self.s.pkFactory.createPk();
+ }
+ }
+
+ // File inserts
+ self.s.topology.insert(self.s.namespace, docs, finalOptions, function(err, result) {
+ if(callback == null) return;
+ if(err) return handleCallback(callback, err);
+ if(result == null) return handleCallback(callback, null, null);
+ if(result.result.code) return handleCallback(callback, toError(result.result));
+ if(result.result.writeErrors) return handleCallback(callback, toError(result.result.writeErrors[0]));
+ // Add docs to the list
+ result.ops = docs;
+ // Return the results
+ handleCallback(callback, null, result);
+ });
+}
+
+define.classMethod('bulkWrite', {callback: true, promise:true});
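The legal operation shapes documented above can be mixed freely in a single `bulkWrite` call; each entry is an object with exactly one key naming its operation type, which is how the loop in the helper dispatches each entry to the bulk builder. An illustrative operations array:

```javascript
// One bulkWrite operations array mixing the documented operation types.
// Each operation object has a single key naming its type.
var operations = [
  { insertOne: { document: { a: 1 } } },
  { updateOne: { filter: { a: 2 }, update: { $set: { a: 2 } }, upsert: true } },
  { deleteMany: { filter: { c: 1 } } },
  { replaceOne: { filter: { c: 3 }, replacement: { c: 4 }, upsert: true } }
];

// The driver dispatches on that single key, e.g. Object.keys(op)[0]:
var types = operations.map(function(op) { return Object.keys(op)[0]; });
// types is ['insertOne', 'updateOne', 'deleteMany', 'replaceOne']
```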
+
+/**
+ * @typedef {Object} Collection~WriteOpResult
+ * @property {object[]} ops All the documents inserted using insertOne/insertMany/replaceOne. Documents contain the _id field if forceServerObjectId == false for insertOne/insertMany
+ * @property {object} connection The connection object used for the operation.
+ * @property {object} result The command result object.
+ */
+
+/**
+ * The callback format for write operations
+ * @callback Collection~writeOpCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Collection~WriteOpResult} result The result object if the command was executed successfully.
+ */
+
+/**
+ * @typedef {Object} Collection~insertWriteOpResult
+ * @property {Number} insertedCount The total amount of documents inserted.
+ * @property {object[]} ops All the documents inserted using insertOne/insertMany/replaceOne. Documents contain the _id field if forceServerObjectId == false for insertOne/insertMany
+ * @property {ObjectId[]} insertedIds All the generated _id's for the inserted documents.
+ * @property {object} connection The connection object used for the operation.
+ * @property {object} result The raw command result object returned from MongoDB (content might vary by server version).
+ * @property {Number} result.ok Is 1 if the command executed correctly.
+ * @property {Number} result.n The total count of documents inserted.
+ */
+
+/**
+ * @typedef {Object} Collection~insertOneWriteOpResult
+ * @property {Number} insertedCount The total amount of documents inserted.
+ * @property {object[]} ops All the documents inserted using insertOne/insertMany/replaceOne. Documents contain the _id field if forceServerObjectId == false for insertOne/insertMany
+ * @property {ObjectId} insertedId The driver generated ObjectId for the insert operation.
+ * @property {object} connection The connection object used for the operation.
+ * @property {object} result The raw command result object returned from MongoDB (content might vary by server version).
+ * @property {Number} result.ok Is 1 if the command executed correctly.
+ * @property {Number} result.n The total count of documents inserted.
+ */
+
+/**
+ * The callback format for inserts
+ * @callback Collection~insertWriteOpCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Collection~insertWriteOpResult} result The result object if the command was executed successfully.
+ */
+
+/**
+ * The callback format for inserts
+ * @callback Collection~insertOneWriteOpCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Collection~insertOneWriteOpResult} result The result object if the command was executed successfully.
+ */
+
+/**
+ * Inserts a single document or an array of documents into MongoDB. If documents passed in do not contain the **_id** field,
+ * one will be added to each of the documents missing it by the driver, mutating the document. This behavior
+ * can be overridden by setting the **forceServerObjectId** flag.
+ *
+ * @method
+ * @param {(object|object[])} docs Documents to insert.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.serializeFunctions=false] Serialize functions on any object.
+ * @param {boolean} [options.forceServerObjectId=false] Force server to assign _id values instead of driver.
+ * @param {boolean} [options.bypassDocumentValidation=false] Allow driver to bypass schema validation in MongoDB 3.2 or higher.
+ * @param {Collection~insertWriteOpCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use insertOne, insertMany or bulkWrite
+ */
+Collection.prototype.insert = function(docs, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {ordered:false};
+ docs = !Array.isArray(docs) ? [docs] : docs;
+
+ if(options.keepGoing == true) {
+ options.ordered = false;
+ }
+
+ return this.insertMany(docs, options, callback);
+}
+
+define.classMethod('insert', {callback: true, promise:true});
+
+/**
+ * @typedef {Object} Collection~updateWriteOpResult
+ * @property {Object} result The raw result returned from MongoDB; fields will vary depending on server version.
+ * @property {Number} result.ok Is 1 if the command executed correctly.
+ * @property {Number} result.n The total count of documents scanned.
+ * @property {Number} result.nModified The total count of documents modified.
+ * @property {Object} connection The connection object used for the operation.
+ * @property {Number} matchedCount The number of documents that matched the filter.
+ * @property {Number} modifiedCount The number of documents that were modified.
+ * @property {Number} upsertedCount The number of documents upserted.
+ * @property {Object} upsertedId The upserted id.
+ * @property {ObjectId} upsertedId._id The upserted _id returned from the server.
+ */
+
+/**
+ * The callback format for update operations
+ * @callback Collection~updateWriteOpCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Collection~updateWriteOpResult} result The result object if the command was executed successfully.
+ */
+
+/**
+ * Update a single document on MongoDB
+ * @method
+ * @param {object} filter The Filter used to select the document to update
+ * @param {object} update The update operations to be applied to the document
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.upsert=false] Update operation is an upsert.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.bypassDocumentValidation=false] Allow driver to bypass schema validation in MongoDB 3.2 or higher.
+ * @param {Collection~updateWriteOpCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.updateOne = function(filter, update, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = shallowClone(options);
+
+ // Add ignoreUndefined
+ if(this.s.options.ignoreUndefined) {
+ options = shallowClone(options);
+ options.ignoreUndefined = this.s.options.ignoreUndefined;
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') return updateOne(self, filter, update, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ updateOne(self, filter, update, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var updateOne = function(self, filter, update, options, callback) {
+ // Set single document update
+ options.multi = false;
+ // Execute update
+ updateDocuments(self, filter, update, options, function(err, r) {
+ if(callback == null) return;
+ if(err && callback) return callback(err);
+ if(r == null) return callback(null, {result: {ok:1}});
+ r.matchedCount = r.result.n;
+ r.modifiedCount = r.result.nModified != null ? r.result.nModified : r.result.n;
+ r.upsertedId = Array.isArray(r.result.upserted) && r.result.upserted.length > 0 ? r.result.upserted[0] : null;
+ r.upsertedCount = Array.isArray(r.result.upserted) && r.result.upserted.length ? r.result.upserted.length : 0;
+ if(callback) callback(null, r);
+ });
+}
+
+define.classMethod('updateOne', {callback: true, promise:true});
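The `updateOne` helper maps the server's raw update result onto the CRUD-spec fields, falling back from `nModified` to `n` for pre-2.6 servers that do not report a modified count. A sketch with a stubbed upsert result (illustrative names, not driver API):

```javascript
// Map a raw update result onto CRUD-spec fields, as the updateOne helper
// does. nModified is absent on pre-2.6 servers, so fall back to n.
function decorateUpdateResult(r) {
  r.matchedCount = r.result.n;
  r.modifiedCount = r.result.nModified != null ? r.result.nModified : r.result.n;
  r.upsertedId = Array.isArray(r.result.upserted) && r.result.upserted.length > 0
    ? r.result.upserted[0] : null;
  r.upsertedCount = Array.isArray(r.result.upserted) ? r.result.upserted.length : 0;
  return r;
}

var r = decorateUpdateResult({ result: { ok: 1, n: 1, nModified: 0, upserted: [{ index: 0, _id: 7 }] } });
// r.matchedCount === 1, r.modifiedCount === 0, r.upsertedId._id === 7
```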
+
+/**
+ * Replace a document on MongoDB
+ * @method
+ * @param {object} filter The Filter used to select the document to update
+ * @param {object} doc The Document that replaces the matching document
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.upsert=false] Update operation is an upsert.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.bypassDocumentValidation=false] Allow driver to bypass schema validation in MongoDB 3.2 or higher.
+ * @param {Collection~updateWriteOpCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.replaceOne = function(filter, update, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = shallowClone(options);
+
+ // Add ignoreUndefined
+ if(this.s.options.ignoreUndefined) {
+ options = shallowClone(options);
+ options.ignoreUndefined = this.s.options.ignoreUndefined;
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') return replaceOne(self, filter, update, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ replaceOne(self, filter, update, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var replaceOne = function(self, filter, update, options, callback) {
+ // Set single document update
+ options.multi = false;
+ // Execute update
+ updateDocuments(self, filter, update, options, function(err, r) {
+ if(callback == null) return;
+ if(err && callback) return callback(err);
+ if(r == null) return callback(null, {result: {ok:1}});
+ r.matchedCount = r.result.n;
+ r.modifiedCount = r.result.nModified != null ? r.result.nModified : r.result.n;
+ r.upsertedId = Array.isArray(r.result.upserted) && r.result.upserted.length > 0 ? r.result.upserted[0] : null;
+ r.upsertedCount = Array.isArray(r.result.upserted) && r.result.upserted.length ? r.result.upserted.length : 0;
+ r.ops = [update];
+ if(callback) callback(null, r);
+ });
+}
+
+define.classMethod('replaceOne', {callback: true, promise:true});
+
+/**
+ * Update multiple documents on MongoDB
+ * @method
+ * @param {object} filter The Filter used to select the document to update
+ * @param {object} update The update operations to be applied to the document
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.upsert=false] Update operation is an upsert.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {Collection~updateWriteOpCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.updateMany = function(filter, update, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = shallowClone(options);
+
+ // Add ignoreUndefined
+ if(this.s.options.ignoreUndefined) {
+ options = shallowClone(options);
+ options.ignoreUndefined = this.s.options.ignoreUndefined;
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') return updateMany(self, filter, update, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ updateMany(self, filter, update, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var updateMany = function(self, filter, update, options, callback) {
+ // Set multi-document update
+ options.multi = true;
+ // Execute update
+ updateDocuments(self, filter, update, options, function(err, r) {
+ if(callback == null) return;
+ if(err && callback) return callback(err);
+ if(r == null) return callback(null, {result: {ok:1}});
+ r.matchedCount = r.result.n;
+ r.modifiedCount = r.result.nModified != null ? r.result.nModified : r.result.n;
+ r.upsertedId = Array.isArray(r.result.upserted) && r.result.upserted.length > 0 ? r.result.upserted[0] : null;
+ r.upsertedCount = Array.isArray(r.result.upserted) && r.result.upserted.length ? r.result.upserted.length : 0;
+ if(callback) callback(null, r);
+ });
+}
+
+define.classMethod('updateMany', {callback: true, promise:true});
+
+var updateDocuments = function(self, selector, document, options, callback) {
+ if('function' === typeof options) callback = options, options = null;
+ if(options == null) options = {};
+ if(!('function' === typeof callback)) callback = null;
+
+ // If we are not providing a selector or document throw
+ if(selector == null || typeof selector != 'object') return callback(toError("selector must be a valid JavaScript object"));
+ if(document == null || typeof document != 'object') return callback(toError("document must be a valid JavaScript object"));
+
+ // Get the write concern options
+ var finalOptions = writeConcern(shallowClone(options), self.s.db, self, options);
+
+ // Either use the serializeFunctions override passed in the options,
+ // or fall back to the default on the collection level
+ finalOptions['serializeFunctions'] = options['serializeFunctions'] || self.s.serializeFunctions;
+
+ // Execute the operation
+ var op = {q: selector, u: document};
+ op.upsert = typeof options.upsert == 'boolean' ? options.upsert : false;
+ op.multi = typeof options.multi == 'boolean' ? options.multi : false;
+
+ // Have we specified collation
+ decorateWithCollation(finalOptions, self, options);
+
+ // Update options
+ self.s.topology.update(self.s.namespace, [op], finalOptions, function(err, result) {
+ if(callback == null) return;
+ if(err) return handleCallback(callback, err, null);
+ if(result == null) return handleCallback(callback, null, null);
+ if(result.result.code) return handleCallback(callback, toError(result.result));
+ if(result.result.writeErrors) return handleCallback(callback, toError(result.result.writeErrors[0]));
+ // Return the results
+ handleCallback(callback, null, result);
+ });
+}
+
+/**
+ * Updates documents.
+ * @method
+ * @param {object} selector The selector for the update operation.
+ * @param {object} document The update document.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.upsert=false] Update operation is an upsert.
+ * @param {boolean} [options.multi=false] Update one/all documents with operation.
+ * @param {boolean} [options.bypassDocumentValidation=false] Allow driver to bypass schema validation in MongoDB 3.2 or higher.
+ * @param {object} [options.collation=null] Specify collation (MongoDB 3.4 or higher) settings for the update operation (see 3.4 documentation for available fields).
+ * @param {Collection~writeOpCallback} [callback] The command result callback
+ * @throws {MongoError}
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated use updateOne, updateMany or bulkWrite
+ */
+Collection.prototype.update = function(selector, document, options, callback) {
+ var self = this;
+
+ // Add ignoreUndefined
+ if(this.s.options.ignoreUndefined) {
+ options = shallowClone(options);
+ options.ignoreUndefined = this.s.options.ignoreUndefined;
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') return updateDocuments(self, selector, document, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ updateDocuments(self, selector, document, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('update', {callback: true, promise:true});
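Every method in this file repeats the same callback-or-promise dual interface: run the operation directly when a callback is supplied, otherwise wrap it in `this.s.promiseLibrary`. The pattern can be sketched generically; `maybePromise` below is an illustrative helper, not part of the driver API:

```javascript
// Generic sketch: invoke the operation with the callback when one is given,
// otherwise wrap it in the configured promise library.
function maybePromise(promiseLibrary, callback, operation) {
  if (typeof callback === 'function') return operation(callback);
  return new promiseLibrary(function(resolve, reject) {
    operation(function(err, r) {
      if (err) return reject(err);
      resolve(r);
    });
  });
}

// Callback style: the stub operation completes synchronously, so this logs at once.
maybePromise(Promise, function(err, r) { console.log(r); },
  function(cb) { cb(null, 42); }); // 42
```

With no callback, the same call returns a promise from whatever library was configured (here the global `Promise`).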
+
+/**
+ * @typedef {Object} Collection~deleteWriteOpResult
+ * @property {Object} result The raw result returned from MongoDB, field will vary depending on server version.
+ * @property {Number} result.ok Is 1 if the command executed correctly.
+ * @property {Number} result.n The total count of documents deleted.
+ * @property {Object} connection The connection object used for the operation.
+ * @property {Number} deletedCount The number of documents deleted.
+ */
+
+/**
+ * The callback format for deletes
+ * @callback Collection~deleteWriteOpCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Collection~deleteWriteOpResult} result The result object if the command was executed successfully.
+ */
+
+/**
+ * Delete a document on MongoDB
+ * @method
+ * @param {object} filter The Filter used to select the document to remove
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {Collection~deleteWriteOpCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.deleteOne = function(filter, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = shallowClone(options);
+
+ // Add ignoreUndefined
+ if(this.s.options.ignoreUndefined) {
+ options = shallowClone(options);
+ options.ignoreUndefined = this.s.options.ignoreUndefined;
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') return deleteOne(self, filter, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ deleteOne(self, filter, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var deleteOne = function(self, filter, options, callback) {
+ options.single = true;
+ removeDocuments(self, filter, options, function(err, r) {
+ if(callback == null) return;
+ if(err && callback) return callback(err);
+ if(r == null) return callback(null, {result: {ok:1}});
+ r.deletedCount = r.result.n;
+ if(callback) callback(null, r);
+ });
+}
+
+define.classMethod('deleteOne', {callback: true, promise:true});
+
+Collection.prototype.removeOne = Collection.prototype.deleteOne;
+
+define.classMethod('removeOne', {callback: true, promise:true});
+
+/**
+ * Delete multiple documents on MongoDB
+ * @method
+ * @param {object} filter The Filter used to select the documents to remove
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {Collection~deleteWriteOpCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.deleteMany = function(filter, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = shallowClone(options);
+
+ // Add ignoreUndefined
+ if(this.s.options.ignoreUndefined) {
+ options = shallowClone(options);
+ options.ignoreUndefined = this.s.options.ignoreUndefined;
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') return deleteMany(self, filter, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ deleteMany(self, filter, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var deleteMany = function(self, filter, options, callback) {
+ options.single = false;
+
+ removeDocuments(self, filter, options, function(err, r) {
+ if(callback == null) return;
+ if(err && callback) return callback(err);
+ if(r == null) return callback(null, {result: {ok:1}});
+ r.deletedCount = r.result.n;
+ if(callback) callback(null, r);
+ });
+}
+
+var removeDocuments = function(self, selector, options, callback) {
+ if(typeof options == 'function') {
+ callback = options, options = {};
+ } else if (typeof selector === 'function') {
+ callback = selector;
+ options = {};
+ selector = {};
+ }
+
+ // Create an empty options object if the provided one is null
+ options = options || {};
+
+ // Get the write concern options
+ var finalOptions = writeConcern(shallowClone(options), self.s.db, self, options);
+
+ // If selector is null set empty
+ if(selector == null) selector = {};
+
+ // Build the op
+ var op = {q: selector, limit: 0};
+ if(options.single) op.limit = 1;
+
+ // Have we specified collation
+ decorateWithCollation(finalOptions, self, options);
+
+ // Execute the remove
+ self.s.topology.remove(self.s.namespace, [op], finalOptions, function(err, result) {
+ if(callback == null) return;
+ if(err) return handleCallback(callback, err, null);
+ if(result == null) return handleCallback(callback, null, null);
+ if(result.result.code) return handleCallback(callback, toError(result.result));
+ if(result.result.writeErrors) return handleCallback(callback, toError(result.result.writeErrors[0]));
+ // Return the results
+ handleCallback(callback, null, result);
+ });
+}
+
+define.classMethod('deleteMany', {callback: true, promise:true});
+
+Collection.prototype.removeMany = Collection.prototype.deleteMany;
+
+define.classMethod('removeMany', {callback: true, promise:true});
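Server aside, the remove path above reduces to two pure pieces: building a delete op whose `limit` is 1 for `deleteOne` and 0 (no limit) for `deleteMany`, and copying the server's `n` into `deletedCount`. A sketch with a hypothetical raw reply:

```javascript
// Build the delete op the way removeDocuments does above:
// limit 1 removes a single document, limit 0 removes all matches.
function buildRemoveOp(selector, options) {
  var op = {q: selector == null ? {} : selector, limit: 0};
  if (options.single) op.limit = 1;
  return op;
}

// Copy the server's n into deletedCount, as the deleteOne/deleteMany helpers do.
function decorateDeleteResult(r) {
  r.deletedCount = r.result.n;
  return r;
}

console.log(buildRemoveOp({status: 'stale'}, {single: true})); // { q: { status: 'stale' }, limit: 1 }
console.log(decorateDeleteResult({result: {ok: 1, n: 5}}).deletedCount); // 5
```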
+
+/**
+ * Remove documents.
+ * @method
+ * @param {object} selector The selector for the remove operation.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.single=false] Removes the first document found.
+ * @param {Collection~writeOpCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated use deleteOne, deleteMany or bulkWrite
+ */
+Collection.prototype.remove = function(selector, options, callback) {
+ var self = this;
+
+ // Add ignoreUndefined
+ if(this.s.options.ignoreUndefined) {
+ options = shallowClone(options);
+ options.ignoreUndefined = this.s.options.ignoreUndefined;
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') return removeDocuments(self, selector, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ removeDocuments(self, selector, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('remove', {callback: true, promise:true});
+
+/**
+ * Save a document. Simple full document replacement function. Not recommended for efficiency; use atomic
+ * operators and update instead for more efficient operations.
+ * @method
+ * @param {object} doc Document to save
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {Collection~writeOpCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated use insertOne, insertMany, updateOne or updateMany
+ */
+Collection.prototype.save = function(doc, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+
+ // Add ignoreUndefined
+ if(this.s.options.ignoreUndefined) {
+ options = shallowClone(options);
+ options.ignoreUndefined = this.s.options.ignoreUndefined;
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') return save(self, doc, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ save(self, doc, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var save = function(self, doc, options, callback) {
+ // Get the write concern options
+ var finalOptions = writeConcern(shallowClone(options), self.s.db, self, options);
+ // Establish if we need to perform an insert or update
+ if(doc._id != null) {
+ finalOptions.upsert = true;
+ return updateDocuments(self, {_id: doc._id}, doc, finalOptions, callback);
+ }
+
+ // Insert the document
+ insertDocuments(self, [doc], options, function(err, r) {
+ if(callback == null) return;
+ if(doc == null) return handleCallback(callback, null, null);
+ if(err) return handleCallback(callback, err, null);
+ handleCallback(callback, null, r);
+ });
+}
+
+define.classMethod('save', {callback: true, promise:true});
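The branch in the `save` helper above is the whole of its routing logic: a document carrying an `_id` becomes an upsert keyed on that `_id`, anything else becomes a plain insert. A server-free sketch of that decision:

```javascript
// Sketch of save()'s routing above: an _id means an upsert on that _id,
// otherwise the document is inserted.
function saveRoute(doc) {
  if (doc._id != null) {
    return {kind: 'update', filter: {_id: doc._id}, upsert: true};
  }
  return {kind: 'insert'};
}

console.log(saveRoute({_id: 1, name: 'a'}).kind); // update
console.log(saveRoute({name: 'b'}).kind); // insert
```

This full-document replacement is why `save` is deprecated in favour of `insertOne`/`updateOne` with atomic operators.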
+
+/**
+ * The callback format for results
+ * @callback Collection~resultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {object} result The result object if the command was executed successfully.
+ */
+
+/**
+ * Fetches the first document that matches the query
+ * @method
+ * @param {object} query Query for find Operation
+ * @param {object} [options=null] Optional settings.
+ * @param {number} [options.limit=0] Sets the limit of documents returned in the query.
+ * @param {(array|object)} [options.sort=null] Set to sort the documents coming back from the query. Array of indexes, [['a', 1]] etc.
+ * @param {object} [options.fields=null] The fields to return in the query. Object of fields to include or exclude (not both), {'a':1}
+ * @param {number} [options.skip=0] Set to skip N documents ahead in your query (useful for pagination).
+ * @param {Object} [options.hint=null] Tell the query to use specific indexes in the query. Object of indexes to use, {'_id':1}
+ * @param {boolean} [options.explain=false] Explain the query instead of returning the data.
+ * @param {boolean} [options.snapshot=false] Snapshot query.
+ * @param {boolean} [options.timeout=false] Specify if the cursor can timeout.
+ * @param {boolean} [options.tailable=false] Specify if the cursor is tailable.
+ * @param {number} [options.batchSize=0] Set the batchSize for the getMoreCommand when iterating over the query results.
+ * @param {boolean} [options.returnKey=false] Only return the index key.
+ * @param {number} [options.maxScan=null] Limit the number of items to scan.
+ * @param {number} [options.min=null] Set index bounds.
+ * @param {number} [options.max=null] Set index bounds.
+ * @param {boolean} [options.showDiskLoc=false] Show disk location of results.
+ * @param {string} [options.comment=null] You can put a $comment field on a query to make looking in the profiler logs simpler.
+ * @param {boolean} [options.raw=false] Return document results as raw BSON buffers.
+ * @param {boolean} [options.promoteLongs=true] Promotes Long values to a number if they fit within 53-bit precision.
+ * @param {boolean} [options.promoteValues=true] Promotes BSON values to native types where possible, set to false to only receive wrapper types.
+ * @param {boolean} [options.promoteBuffers=false] Promotes Binary BSON values to native Node Buffers.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {boolean} [options.partial=false] Specify if the cursor should return partial results when querying against a sharded system
+ * @param {number} [options.maxTimeMS=null] Number of milliseconds to wait before aborting the query.
+ * @param {object} [options.collation=null] Specify collation (MongoDB 3.4 or higher) settings for the query (see 3.4 documentation for available fields).
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.findOne = function() {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 0);
+ var callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+
+ // Execute using callback
+ if(typeof callback == 'function') return findOne(self, args, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ findOne(self, args, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var findOne = function(self, args, callback) {
+ var cursor = self.find.apply(self, args).limit(-1).batchSize(1);
+ // Return the item
+ cursor.next(function(err, item) {
+ if(err != null) return handleCallback(callback, toError(err), null);
+ handleCallback(callback, null, item);
+ });
+}
+
+define.classMethod('findOne', {callback: true, promise:true});
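As the helper above shows, `findOne` is nothing more than `find` with `limit(-1)` and `batchSize(1)` plus a single `next` call. The stub collection below (a plain object, not the driver's Cursor class) makes that delegation visible without a server:

```javascript
// Stub collection recording the cursor configuration findOne applies.
var calls = [];
var stubCollection = {
  find: function(query) {
    calls.push(['find', query]);
    return {
      limit: function(n) { calls.push(['limit', n]); return this; },
      batchSize: function(n) { calls.push(['batchSize', n]); return this; },
      next: function(cb) { cb(null, {_id: 1}); }
    };
  }
};

// Same cursor setup as the findOne helper above: limit -1, batch size 1, one next().
var found;
var cursor = stubCollection.find({a: 1}).limit(-1).batchSize(1);
cursor.next(function(err, item) { found = item; });

console.log(calls); // [ [ 'find', { a: 1 } ], [ 'limit', -1 ], [ 'batchSize', 1 ] ]
```

The negative limit tells the server to close the cursor after the first batch, so no `killCursors` round trip is needed.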
+
+/**
+ * The callback format for the collection method, must be used if strict is specified
+ * @callback Collection~collectionResultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Collection} collection The collection instance.
+ */
+
+/**
+ * Rename the collection.
+ *
+ * @method
+ * @param {string} newName New name of the collection.
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.dropTarget=false] Drop the target name collection if it previously exists.
+ * @param {Collection~collectionResultCallback} [callback] The results callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.rename = function(newName, opt, callback) {
+ var self = this;
+ if(typeof opt == 'function') callback = opt, opt = {};
+ opt = assign({}, opt, {readPreference: ReadPreference.PRIMARY});
+
+ // Execute using callback
+ if(typeof callback == 'function') return rename(self, newName, opt, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ rename(self, newName, opt, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var rename = function(self, newName, opt, callback) {
+ // Check the collection name
+ checkCollectionName(newName);
+ // Build the command
+ var renameCollection = f("%s.%s", self.s.dbName, self.s.name);
+ var toCollection = f("%s.%s", self.s.dbName, newName);
+ var dropTarget = typeof opt.dropTarget == 'boolean' ? opt.dropTarget : false;
+ var cmd = {'renameCollection':renameCollection, 'to':toCollection, 'dropTarget':dropTarget};
+
+ // Decorate command with writeConcern if supported
+ decorateWithWriteConcern(cmd, self, opt);
+
+ // Execute against admin
+ self.s.db.admin().command(cmd, opt, function(err, doc) {
+ if(err) return handleCallback(callback, err, null);
+ // We have an error
+ if(doc.errmsg) return handleCallback(callback, toError(doc), null);
+ try {
+ return handleCallback(callback, null, new Collection(self.s.db, self.s.topology, self.s.dbName, newName, self.s.pkFactory, self.s.options));
+ } catch(err) {
+ return handleCallback(callback, toError(err), null);
+ }
+ });
+}
+
+define.classMethod('rename', {callback: true, promise:true});
+
+/**
+ * Drop the collection from the database, removing it permanently. New accesses will create a new collection.
+ *
+ * @method
+ * @param {object} [options=null] Optional settings.
+ * @param {Collection~resultCallback} [callback] The results callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.drop = function(options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+
+ // Execute using callback
+ if(typeof callback == 'function') return self.s.db.dropCollection(self.s.name, options, callback);
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ self.s.db.dropCollection(self.s.name, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('drop', {callback: true, promise:true});
+
+/**
+ * Returns the options of the collection.
+ *
+ * @method
+ * @param {Collection~resultCallback} [callback] The results callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.options = function(callback) {
+ var self = this;
+
+ // Execute using callback
+ if(typeof callback == 'function') return options(self, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ options(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var options = function(self, callback) {
+ self.s.db.listCollections({name: self.s.name}).toArray(function(err, collections) {
+ if(err) return handleCallback(callback, err);
+ if(collections.length == 0) {
+ return handleCallback(callback, MongoError.create({message: f("collection %s not found", self.s.namespace), driver:true }));
+ }
+
+ handleCallback(callback, err, collections[0].options || null);
+ });
+}
+
+define.classMethod('options', {callback: true, promise:true});
+
+/**
+ * Returns whether the collection is a capped collection
+ *
+ * @method
+ * @param {Collection~resultCallback} [callback] The results callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.isCapped = function(callback) {
+ var self = this;
+
+ // Execute using callback
+ if(typeof callback == 'function') return isCapped(self, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ isCapped(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var isCapped = function(self, callback) {
+ self.options(function(err, document) {
+ if(err) return handleCallback(callback, err);
+ handleCallback(callback, null, document && document.capped);
+ });
+}
+
+define.classMethod('isCapped', {callback: true, promise:true});
+
+/**
+ * Creates an index on the db and collection.
+ * @method
+ * @param {(string|object)} fieldOrSpec Defines the index.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.unique=false] Creates a unique index.
+ * @param {boolean} [options.sparse=false] Creates a sparse index.
+ * @param {boolean} [options.background=false] Creates the index in the background, yielding whenever possible.
+ * @param {boolean} [options.dropDups=false] A unique index cannot be created on a key that has pre-existing duplicate values. If you would like to create the index anyway, keeping the first document the database indexes and deleting all subsequent documents that have duplicate values, set this option to true.
+ * @param {number} [options.min=null] For geospatial indexes set the lower bound for the co-ordinates.
+ * @param {number} [options.max=null] For geospatial indexes set the high bound for the co-ordinates.
+ * @param {number} [options.v=null] Specify the format version of the indexes.
+ * @param {number} [options.expireAfterSeconds=null] Allows you to expire data on indexes applied to a collection (MongoDB 2.2 or higher)
+ * @param {number} [options.name=null] Override the autogenerated index name (useful if the resulting name is larger than 128 bytes)
+ * @param {object} [options.collation=null] Specify collation (MongoDB 3.4 or higher) settings for the index (see 3.4 documentation for available fields).
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.createIndex = function(fieldOrSpec, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 1);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ options = args.length ? args.shift() || {} : {};
+ options = typeof callback === 'function' ? options : callback;
+ options = options == null ? {} : options;
+
+ // Execute using callback
+ if(typeof callback == 'function') return createIndex(self, fieldOrSpec, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ createIndex(self, fieldOrSpec, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var createIndex = function(self, fieldOrSpec, options, callback) {
+ self.s.db.createIndex(self.s.name, fieldOrSpec, options, callback);
+}
+
+define.classMethod('createIndex', {callback: true, promise:true});
+
+/**
+ * Creates multiple indexes in the collection; this method is only supported for
+ * MongoDB 2.6 or higher. Earlier versions of MongoDB will throw a command not supported
+ * error. Index specifications are defined at http://docs.mongodb.org/manual/reference/command/createIndexes/.
+ * @method
+ * @param {array} indexSpecs An array of index specifications to be created
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.createIndexes = function(indexSpecs, callback) {
+ var self = this;
+
+ // Execute using callback
+ if(typeof callback == 'function') return createIndexes(self, indexSpecs, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ createIndexes(self, indexSpecs, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var createIndexes = function(self, indexSpecs, callback) {
+ var capabilities = self.s.topology.capabilities();
+
+ // Ensure we generate the correct name if the parameter is not set
+ for(var i = 0; i < indexSpecs.length; i++) {
+ if(indexSpecs[i].name == null) {
+ var keys = [];
+
+ // Did the user pass in a collation, check if our write server supports it
+ if(indexSpecs[i].collation && capabilities && !capabilities.commandsTakeCollation) {
+ return callback(new MongoError(f('server/primary/mongos does not support collation')));
+ }
+
+ for(var name in indexSpecs[i].key) {
+ keys.push(f('%s_%s', name, indexSpecs[i].key[name]));
+ }
+
+ // Set the name
+ indexSpecs[i].name = keys.join('_');
+ }
+ }
+
+ // Execute the index
+ self.s.db.command({
+ createIndexes: self.s.name, indexes: indexSpecs
+ }, { readPreference: ReadPreference.PRIMARY }, callback);
+}
+
+define.classMethod('createIndexes', {callback: true, promise:true});
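The name autogenerated in `createIndexes` above is simply each key and its direction joined by underscores. That naming rule, isolated as a sketch:

```javascript
// Generate the default index name the way createIndexes does above:
// each key and its direction, joined by underscores.
function defaultIndexName(key) {
  var keys = [];
  for (var name in key) {
    keys.push(name + '_' + key[name]);
  }
  return keys.join('_');
}

console.log(defaultIndexName({age: 1})); // age_1
console.log(defaultIndexName({lastName: 1, dob: -1})); // lastName_1_dob_-1
```

This matches the names the server itself would generate, which is why the loop only fills in `name` when the caller has not set one.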
+
+/**
+ * Drops an index from this collection.
+ * @method
+ * @param {string} indexName Name of the index to drop.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.dropIndex = function(indexName, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 1);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ options = args.length ? args.shift() || {} : {};
+ // Run only against primary
+ options.readPreference = ReadPreference.PRIMARY;
+
+ // Execute using callback
+ if(typeof callback == 'function') return dropIndex(self, indexName, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ dropIndex(self, indexName, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var dropIndex = function(self, indexName, options, callback) {
+ // Delete index command
+ var cmd = {'dropIndexes':self.s.name, 'index':indexName};
+
+ // Decorate command with writeConcern if supported
+ decorateWithWriteConcern(cmd, self, options);
+
+ // Execute command
+ self.s.db.command(cmd, options, function(err, result) {
+ if(typeof callback != 'function') return;
+ if(err) return handleCallback(callback, err, null);
+ handleCallback(callback, null, result);
+ });
+}
+
+define.classMethod('dropIndex', {callback: true, promise:true});
+
+/**
+ * Drops all indexes from this collection.
+ * @method
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.dropIndexes = function(options, callback) {
+ var self = this;
+
+ // Do we have options
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+
+ // Execute using callback
+ if(typeof callback == 'function') return dropIndexes(self, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ dropIndexes(self, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var dropIndexes = function(self, options, callback) {
+ self.dropIndex('*', options, function(err, result) {
+ if(err) return handleCallback(callback, err, false);
+ handleCallback(callback, null, true);
+ });
+}
+
+define.classMethod('dropIndexes', {callback: true, promise:true});
+
+/**
+ * Drops all indexes from this collection.
+ * @method
+ * @deprecated use dropIndexes
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.dropAllIndexes = Collection.prototype.dropIndexes;
+
+define.classMethod('dropAllIndexes', {callback: true, promise:true});
+
+/**
+ * Reindex all indexes on the collection
+ * Warning: reIndex is a blocking operation (indexes are rebuilt in the foreground) and will be slow for large collections.
+ * @method
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.reIndex = function(options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+
+ // Execute using callback
+ if(typeof callback == 'function') return reIndex(self, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ reIndex(self, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var reIndex = function(self, options, callback) {
+ // Reindex
+ var cmd = {'reIndex':self.s.name};
+
+ // Decorate command with writeConcern if supported
+ decorateWithWriteConcern(cmd, self, options);
+
+ // Execute the command
+ self.s.db.command(cmd, options, function(err, result) {
+ if(callback == null) return;
+ if(err) return handleCallback(callback, err, null);
+ handleCallback(callback, null, result.ok ? true : false);
+ });
+}
+
+define.classMethod('reIndex', {callback: true, promise:true});
+
+/**
+ * Get the list of all indexes information for the collection.
+ *
+ * @method
+ * @param {object} [options=null] Optional settings.
+ * @param {number} [options.batchSize=null] The batchSize for the returned command cursor, or the batch size used when querying the system indexes collection on pre-2.8 servers
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @return {CommandCursor}
+ */
+Collection.prototype.listIndexes = function(options) {
+ options = options || {};
+ // Clone the options
+ options = shallowClone(options);
+ // Determine the read preference in the options.
+ options = getReadPreference(this, options, this.s.db, this);
+ // Set the CommandCursor constructor
+ options.cursorFactory = CommandCursor;
+ // Set the promiseLibrary
+ options.promiseLibrary = this.s.promiseLibrary;
+
+ if(!this.s.topology.capabilities()) {
+ throw new MongoError('cannot connect to server');
+ }
+
+ // We have a list collections command
+ if(this.s.topology.capabilities().hasListIndexesCommand) {
+ // Cursor options
+ var cursorOptions = options.batchSize ? {batchSize: options.batchSize} : {};
+ // Build the command
+ var command = { listIndexes: this.s.name, cursor: cursorOptions };
+ // Execute the cursor
+ var cursor = this.s.topology.cursor(f('%s.$cmd', this.s.dbName), command, options);
+ // Do we have a readPreference, apply it
+ if(options.readPreference) cursor.setReadPreference(options.readPreference);
+ // Return the cursor
+ return cursor;
+ }
+
+ // Get the namespace
+ var ns = f('%s.system.indexes', this.s.dbName);
+ // Get the query
+ var cursor = this.s.topology.cursor(ns, {find: ns, query: {ns: this.s.namespace}}, options);
+ // Do we have a readPreference, apply it
+ if(options.readPreference) cursor.setReadPreference(options.readPreference);
+ // Set the passed in batch size if one was provided
+ if(options.batchSize) cursor = cursor.batchSize(options.batchSize);
+ // Return the cursor
+ return cursor;
+};
+
+define.classMethod('listIndexes', {callback: false, promise:false, returns: [CommandCursor]});
+
+/**
+ * Ensures that an index exists, if it does not it creates it
+ * @method
+ * @deprecated use createIndexes instead
+ * @param {(string|object)} fieldOrSpec Defines the index.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.unique=false] Creates a unique index.
+ * @param {boolean} [options.sparse=false] Creates a sparse index.
+ * @param {boolean} [options.background=false] Creates the index in the background, yielding whenever possible.
+ * @param {boolean} [options.dropDups=false] A unique index cannot be created on a key that has pre-existing duplicate values. Set this option to true to create the index anyway, keeping the first document the database indexes and deleting all subsequent documents that have duplicate values.
+ * @param {number} [options.min=null] For geospatial indexes set the lower bound for the co-ordinates.
+ * @param {number} [options.max=null] For geospatial indexes set the high bound for the co-ordinates.
+ * @param {number} [options.v=null] Specify the format version of the indexes.
+ * @param {number} [options.expireAfterSeconds=null] Creates a TTL index that expires documents after the given number of seconds (MongoDB 2.2 or higher).
+ * @param {number} [options.name=null] Override the autogenerated index name (useful if the resulting name is larger than 128 bytes)
+ * @param {object} [options.collation=null] Specify collation (MongoDB 3.4 or higher) settings for update operation (see 3.4 documentation for available fields).
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.ensureIndex = function(fieldOrSpec, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+
+ // Execute using callback
+ if(typeof callback == 'function') return ensureIndex(self, fieldOrSpec, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ ensureIndex(self, fieldOrSpec, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var ensureIndex = function(self, fieldOrSpec, options, callback) {
+ self.s.db.ensureIndex(self.s.name, fieldOrSpec, options, callback);
+}
+
+define.classMethod('ensureIndex', {callback: true, promise:true});
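The `ensureIndex` wrapper above follows the driver's recurring dual-mode convention: run through the callback when one is supplied, otherwise adapt the callback-style helper into a promise via the configured promise library. A minimal standalone sketch of that convention (the `executeOperation` name is illustrative, not part of the driver's API):

```javascript
// Generic dual-mode executor: invoke the callback-style operation directly
// when a callback is given, otherwise wrap it in a promise.
function executeOperation(promiseLibrary, operation, callback) {
  // Callback path: delegate directly
  if (typeof callback === 'function') return operation(callback);

  // Promise path: adapt the (err, result) callback into resolve/reject
  return new promiseLibrary(function(resolve, reject) {
    operation(function(err, r) {
      if (err) return reject(err);
      resolve(r);
    });
  });
}
```

With the built-in `Promise` as the promise library, `executeOperation(Promise, op)` returns a thenable, while `executeOperation(Promise, op, cb)` calls `cb` and returns whatever the operation returns.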
+
+/**
+ * Checks if one or more indexes exist on the collection, fails on first non-existing index
+ * @method
+ * @param {(string|array)} indexes One or more index names to check.
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.indexExists = function(indexes, callback) {
+ var self = this;
+
+ // Execute using callback
+ if(typeof callback == 'function') return indexExists(self, indexes, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ indexExists(self, indexes, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var indexExists = function(self, indexes, callback) {
+ self.indexInformation(function(err, indexInformation) {
+ // If we have an error return
+ if(err != null) return handleCallback(callback, err, null);
+ // Let's check for the index names
+ if(!Array.isArray(indexes)) return handleCallback(callback, null, indexInformation[indexes] != null);
+ // Check in list of indexes
+ for(var i = 0; i < indexes.length; i++) {
+ if(indexInformation[indexes[i]] == null) {
+ return handleCallback(callback, null, false);
+ }
+ }
+
+ // All keys found return true
+ return handleCallback(callback, null, true);
+ });
+}
+
+define.classMethod('indexExists', {callback: true, promise:true});
+
+/**
+ * Retrieves this collections index info.
+ * @method
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.full=false] Returns the full raw index information.
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.indexInformation = function(options, callback) {
+ var self = this;
+ // Unpack calls
+ var args = Array.prototype.slice.call(arguments, 0);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ options = args.length ? args.shift() || {} : {};
+
+ // Execute using callback
+ if(typeof callback == 'function') return indexInformation(self, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ indexInformation(self, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var indexInformation = function(self, options, callback) {
+ self.s.db.indexInformation(self.s.name, options, callback);
+}
+
+define.classMethod('indexInformation', {callback: true, promise:true});
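Methods like `indexInformation`, `count`, and `stats` normalize their `(options?, callback?)` tails with the same pop-from-the-end pattern seen above. A standalone sketch of that unpacking (the `unpackArgs` helper is hypothetical, written only to isolate the idiom):

```javascript
// Normalize a trailing (options?, callback?) argument pair, mirroring the
// slice/pop unpacking used by indexInformation and similar methods.
function unpackArgs(args) {
  args = Array.prototype.slice.call(args, 0);
  // The last argument is the callback only if it is a function
  var callback = args.pop();
  if (typeof callback !== 'function') {
    args.push(callback); // not a callback; put it back
    callback = null;
  }
  // Whatever remains at the front is the options object
  var options = args.length ? args.shift() || {} : {};
  return { options: options, callback: callback };
}
```

This is why the driver methods work both as `coll.indexInformation(cb)` and as `coll.indexInformation({full: true}, cb)`: the options object is optional and the callback is detected by type.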
+
+/**
+ * The callback format for results
+ * @callback Collection~countCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {number} result The count of documents that matched the query.
+ */
+
+/**
+ * Count number of matching documents in the db to a query.
+ * @method
+ * @param {object} query The query for the count.
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.limit=null] The limit of documents to count.
+ * @param {boolean} [options.skip=null] The number of documents to skip for the count.
+ * @param {string} [options.hint=null] An index name hint for the query.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {number} [options.maxTimeMS=null] Number of milliseconds to wait before aborting the query.
+ * @param {Collection~countCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.count = function(query, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 0);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ var queryOption = args.length ? args.shift() || {} : {};
+ var optionsOption = args.length ? args.shift() || {} : {};
+
+ // Execute using callback
+ if(typeof callback == 'function') return count(self, queryOption, optionsOption, callback);
+
+ // Check if query is empty
+ query = query || {};
+ options = options || {};
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ count(self, query, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+var count = function(self, query, options, callback) {
+ var skip = options.skip;
+ var limit = options.limit;
+ var hint = options.hint;
+ var maxTimeMS = options.maxTimeMS;
+
+ // Final query
+ var cmd = {
+ 'count': self.s.name, 'query': query
+ };
+
+ // Add limit, skip and maxTimeMS if defined
+ if(typeof skip == 'number') cmd.skip = skip;
+ if(typeof limit == 'number') cmd.limit = limit;
+ if(typeof maxTimeMS == 'number') cmd.maxTimeMS = maxTimeMS;
+ if(hint) cmd.hint = hint;
+
+ options = shallowClone(options);
+ // Ensure we have the right read preference inheritance
+ options = getReadPreference(self, options, self.s.db, self);
+
+ // Do we have a readConcern specified
+ if(self.s.readConcern) {
+ cmd.readConcern = self.s.readConcern;
+ }
+
+ // Have we specified collation
+ decorateWithCollation(cmd, self, options);
+
+ // Execute command
+ self.s.db.command(cmd, options, function(err, result) {
+ if(err) return handleCallback(callback, err);
+ handleCallback(callback, null, result.n);
+ });
+}
+
+define.classMethod('count', {callback: true, promise:true});
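The `count` helper above builds a `count` command document, copying only the numeric `skip`/`limit`/`maxTimeMS` options and any hint. A standalone sketch of that translation (the `buildCountCommand` name is hypothetical; read preference, read concern, and collation decoration are omitted):

```javascript
// Build a MongoDB count command document from a query and options,
// copying only the typed option fields the way count() does.
function buildCountCommand(collectionName, query, options) {
  var cmd = { count: collectionName, query: query || {} };
  if (typeof options.skip === 'number') cmd.skip = options.skip;
  if (typeof options.limit === 'number') cmd.limit = options.limit;
  if (typeof options.maxTimeMS === 'number') cmd.maxTimeMS = options.maxTimeMS;
  if (options.hint) cmd.hint = options.hint;
  return cmd;
}
```

The `typeof ... === 'number'` guards mean that string or boolean values for `skip`/`limit` are silently ignored rather than sent to the server.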
+
+/**
+ * The distinct command returns a list of distinct values for the given key across a collection.
+ * @method
+ * @param {string} key Field of the document to find distinct values for.
+ * @param {object} query The query for filtering the set of documents to which we apply the distinct filter.
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {number} [options.maxTimeMS=null] Number of milliseconds to wait before aborting the query.
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.distinct = function(key, query, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 1);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ var queryOption = args.length ? args.shift() || {} : {};
+ var optionsOption = args.length ? args.shift() || {} : {};
+
+ // Execute using callback
+ if(typeof callback == 'function') return distinct(self, key, queryOption, optionsOption, callback);
+
+ // Ensure the query and options are set
+ query = query || {};
+ options = options || {};
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ distinct(self, key, query, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+var distinct = function(self, key, query, options, callback) {
+ // maxTimeMS option
+ var maxTimeMS = options.maxTimeMS;
+
+ // Distinct command
+ var cmd = {
+ 'distinct': self.s.name, 'key': key, 'query': query
+ };
+
+ options = shallowClone(options);
+ // Ensure we have the right read preference inheritance
+ options = getReadPreference(self, options, self.s.db, self);
+
+ // Add maxTimeMS if defined
+ if(typeof maxTimeMS == 'number')
+ cmd.maxTimeMS = maxTimeMS;
+
+ // Do we have a readConcern specified
+ if(self.s.readConcern) {
+ cmd.readConcern = self.s.readConcern;
+ }
+
+ // Have we specified collation
+ decorateWithCollation(cmd, self, options);
+
+ // Execute the command
+ self.s.db.command(cmd, options, function(err, result) {
+ if(err) return handleCallback(callback, err);
+ handleCallback(callback, null, result.values);
+ });
+}
+
+define.classMethod('distinct', {callback: true, promise:true});
+
+/**
+ * Retrieve all the indexes on the collection.
+ * @method
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.indexes = function(callback) {
+ var self = this;
+ // Execute using callback
+ if(typeof callback == 'function') return indexes(self, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ indexes(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var indexes = function(self, callback) {
+ self.s.db.indexInformation(self.s.name, {full:true}, callback);
+}
+
+define.classMethod('indexes', {callback: true, promise:true});
+
+/**
+ * Get all the collection statistics.
+ *
+ * @method
+ * @param {object} [options=null] Optional settings.
+ * @param {number} [options.scale=null] Divide the returned sizes by scale value.
+ * @param {Collection~resultCallback} [callback] The collection result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.stats = function(options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 0);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ // Fetch all commands
+ options = args.length ? args.shift() || {} : {};
+
+ // Execute using callback
+ if(typeof callback == 'function') return stats(self, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ stats(self, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var stats = function(self, options, callback) {
+ // Build command object
+ var commandObject = {
+ collStats:self.s.name
+ }
+
+ // Check if we have the scale value
+ if(options['scale'] != null) commandObject['scale'] = options['scale'];
+
+ options = shallowClone(options);
+ // Ensure we have the right read preference inheritance
+ options = getReadPreference(self, options, self.s.db, self);
+
+ // Execute the command
+ self.s.db.command(commandObject, options, callback);
+}
+
+define.classMethod('stats', {callback: true, promise:true});
+
+/**
+ * @typedef {Object} Collection~findAndModifyWriteOpResult
+ * @property {object} value Document returned from findAndModify command.
+ * @property {object} lastErrorObject The raw lastErrorObject returned from the command.
+ * @property {Number} ok Is 1 if the command executed correctly.
+ */
+
+/**
+ * The callback format for findAndModify operations
+ * @callback Collection~findAndModifyCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Collection~findAndModifyWriteOpResult} result The result object if the command was executed successfully.
+ */
+
+/**
+ * Find a document and delete it in one atomic operation, requires a write lock for the duration of the operation.
+ *
+ * @method
+ * @param {object} filter Document selection filter.
+ * @param {object} [options=null] Optional settings.
+ * @param {object} [options.projection=null] Limits the fields to return for all matching documents.
+ * @param {object} [options.sort=null] Determines which document the operation modifies if the query selects multiple documents.
+ * @param {number} [options.maxTimeMS=null] The maximum amount of time to allow the query to run.
+ * @param {Collection~findAndModifyCallback} [callback] The collection result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.findOneAndDelete = function(filter, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+
+ // Basic validation
+ if(filter == null || typeof filter != 'object') throw toError('filter parameter must be an object');
+
+ // Execute using callback
+ if(typeof callback == 'function') return findOneAndDelete(self, filter, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ options = options || {};
+
+ findOneAndDelete(self, filter, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var findOneAndDelete = function(self, filter, options, callback) {
+ // Final options
+ var finalOptions = shallowClone(options);
+ finalOptions['fields'] = options.projection;
+ finalOptions['remove'] = true;
+ // Execute find and Modify
+ self.findAndModify(
+ filter
+ , options.sort
+ , null
+ , finalOptions
+ , callback
+ );
+}
+
+define.classMethod('findOneAndDelete', {callback: true, promise:true});
+
+/**
+ * Find a document and replace it in one atomic operation, requires a write lock for the duration of the operation.
+ *
+ * @method
+ * @param {object} filter Document selection filter.
+ * @param {object} replacement Document replacing the matching document.
+ * @param {object} [options=null] Optional settings.
+ * @param {object} [options.projection=null] Limits the fields to return for all matching documents.
+ * @param {object} [options.sort=null] Determines which document the operation modifies if the query selects multiple documents.
+ * @param {number} [options.maxTimeMS=null] The maximum amount of time to allow the query to run.
+ * @param {boolean} [options.upsert=false] Upsert the document if it does not exist.
+ * @param {boolean} [options.returnOriginal=true] When false, returns the updated document rather than the original. The default is true.
+ * @param {Collection~findAndModifyCallback} [callback] The collection result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.findOneAndReplace = function(filter, replacement, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+
+ // Basic validation
+ if(filter == null || typeof filter != 'object') throw toError('filter parameter must be an object');
+ if(replacement == null || typeof replacement != 'object') throw toError('replacement parameter must be an object');
+
+ // Execute using callback
+ if(typeof callback == 'function') return findOneAndReplace(self, filter, replacement, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ options = options || {};
+
+ findOneAndReplace(self, filter, replacement, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var findOneAndReplace = function(self, filter, replacement, options, callback) {
+ // Final options
+ var finalOptions = shallowClone(options);
+ finalOptions['fields'] = options.projection;
+ finalOptions['update'] = true;
+ finalOptions['new'] = typeof options.returnOriginal == 'boolean' ? !options.returnOriginal : false;
+ finalOptions['upsert'] = typeof options.upsert == 'boolean' ? options.upsert : false;
+
+ // Execute findAndModify
+ self.findAndModify(
+ filter
+ , options.sort
+ , replacement
+ , finalOptions
+ , callback
+ );
+}
+
+define.classMethod('findOneAndReplace', {callback: true, promise:true});
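`findOneAndReplace` (and `findOneAndUpdate` below) translate the CRUD-spec options into the legacy `findAndModify` vocabulary: `projection` becomes `fields`, and `returnOriginal` (default true) is inverted into the server's `new` flag. A standalone sketch of that mapping (the `toFindAndModifyOptions` name is hypothetical):

```javascript
// Translate CRUD-spec findOneAnd* options into legacy findAndModify options.
function toFindAndModifyOptions(options) {
  var finalOptions = {};
  // Shallow-copy the caller's options
  for (var key in options) finalOptions[key] = options[key];
  finalOptions.fields = options.projection; // projection -> fields
  finalOptions.update = true;
  // returnOriginal defaults to true, so `new` defaults to false
  finalOptions.new = typeof options.returnOriginal === 'boolean'
    ? !options.returnOriginal : false;
  finalOptions.upsert = typeof options.upsert === 'boolean'
    ? options.upsert : false;
  return finalOptions;
}
```

The inversion is the part that most often surprises users: passing `returnOriginal: false` is what makes the server return the post-update document.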
+
+/**
+ * Find a document and update it in one atomic operation, requires a write lock for the duration of the operation.
+ *
+ * @method
+ * @param {object} filter Document selection filter.
+ * @param {object} update Update operations to be performed on the document
+ * @param {object} [options=null] Optional settings.
+ * @param {object} [options.projection=null] Limits the fields to return for all matching documents.
+ * @param {object} [options.sort=null] Determines which document the operation modifies if the query selects multiple documents.
+ * @param {number} [options.maxTimeMS=null] The maximum amount of time to allow the query to run.
+ * @param {boolean} [options.upsert=false] Upsert the document if it does not exist.
+ * @param {boolean} [options.returnOriginal=true] When false, returns the updated document rather than the original. The default is true.
+ * @param {Collection~findAndModifyCallback} [callback] The collection result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.findOneAndUpdate = function(filter, update, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+
+ // Basic validation
+ if(filter == null || typeof filter != 'object') throw toError('filter parameter must be an object');
+ if(update == null || typeof update != 'object') throw toError('update parameter must be an object');
+
+ // Execute using callback
+ if(typeof callback == 'function') return findOneAndUpdate(self, filter, update, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ options = options || {};
+
+ findOneAndUpdate(self, filter, update, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var findOneAndUpdate = function(self, filter, update, options, callback) {
+ // Final options
+ var finalOptions = shallowClone(options);
+ finalOptions['fields'] = options.projection;
+ finalOptions['update'] = true;
+ finalOptions['new'] = typeof options.returnOriginal == 'boolean' ? !options.returnOriginal : false;
+ finalOptions['upsert'] = typeof options.upsert == 'boolean' ? options.upsert : false;
+
+ // Execute findAndModify
+ self.findAndModify(
+ filter
+ , options.sort
+ , update
+ , finalOptions
+ , callback
+ );
+}
+
+define.classMethod('findOneAndUpdate', {callback: true, promise:true});
+
+/**
+ * Find and update a document.
+ * @method
+ * @param {object} query Query object to locate the object to modify.
+ * @param {array} sort If multiple docs match, choose the first one in the specified sort order as the object to manipulate.
+ * @param {object} doc The fields/vals to be updated.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.remove=false] Set to true to remove the object before returning.
+ * @param {boolean} [options.upsert=false] Perform an upsert operation.
+ * @param {boolean} [options.new=false] Set to true if you want to return the modified object rather than the original. Ignored for remove.
+ * @param {object} [options.fields=null] Object containing the field projection for the result returned from the operation.
+ * @param {Collection~findAndModifyCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated use findOneAndUpdate, findOneAndReplace or findOneAndDelete instead
+ */
+Collection.prototype.findAndModify = function(query, sort, doc, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 1);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ sort = args.length ? args.shift() || [] : [];
+ doc = args.length ? args.shift() : null;
+ options = args.length ? args.shift() || {} : {};
+
+ // Clone options
+ var options = shallowClone(options);
+ // Force read preference primary
+ options.readPreference = ReadPreference.PRIMARY;
+
+ // Execute using callback
+ if(typeof callback == 'function') return findAndModify(self, query, sort, doc, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ options = options || {};
+
+ findAndModify(self, query, sort, doc, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var findAndModify = function(self, query, sort, doc, options, callback) {
+ // Create findAndModify command object
+ var queryObject = {
+ 'findandmodify': self.s.name
+ , 'query': query
+ };
+
+ sort = formattedOrderClause(sort);
+ if(sort) {
+ queryObject.sort = sort;
+ }
+
+ queryObject.new = options.new ? true : false;
+ queryObject.remove = options.remove ? true : false;
+ queryObject.upsert = options.upsert ? true : false;
+
+ if(options.fields) {
+ queryObject.fields = options.fields;
+ }
+
+ if(doc && !options.remove) {
+ queryObject.update = doc;
+ }
+
+ if(options.maxTimeMS)
+ queryObject.maxTimeMS = options.maxTimeMS;
+
+ // Either use the override on the call, or fall back to the collection or db
+ // level default
+ if(options['serializeFunctions'] == null) {
+ options['serializeFunctions'] = self.s.serializeFunctions;
+ }
+
+ // No check on the documents
+ options.checkKeys = false;
+
+ // Get the write concern settings
+ var finalOptions = writeConcern(options, self.s.db, self, options);
+
+ // Decorate the findAndModify command with the write Concern
+ if(finalOptions.writeConcern) {
+ queryObject.writeConcern = finalOptions.writeConcern;
+ }
+
+ // Have we specified bypassDocumentValidation
+ if(typeof finalOptions.bypassDocumentValidation == 'boolean') {
+ queryObject.bypassDocumentValidation = finalOptions.bypassDocumentValidation;
+ }
+
+ // Have we specified collation
+ decorateWithCollation(queryObject, self, options);
+
+ // Execute the command
+ self.s.db.command(queryObject
+ , options, function(err, result) {
+ if(err) return handleCallback(callback, err, null);
+ return handleCallback(callback, null, result);
+ });
+}
+
+define.classMethod('findAndModify', {callback: true, promise:true});
+
+/**
+ * Find and remove a document.
+ * @method
+ * @param {object} query Query object to locate the object to modify.
+ * @param {array} sort If multiple docs match, choose the first one in the specified sort order as the object to manipulate.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated use findOneAndDelete instead
+ */
+Collection.prototype.findAndRemove = function(query, sort, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 1);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ sort = args.length ? args.shift() || [] : [];
+ options = args.length ? args.shift() || {} : {};
+
+ // Execute using callback
+ if(typeof callback == 'function') return findAndRemove(self, query, sort, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ findAndRemove(self, query, sort, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var findAndRemove = function(self, query, sort, options, callback) {
+ // Add the remove option
+ options['remove'] = true;
+ // Execute the callback
+ self.findAndModify(query, sort, null, options, callback);
+}
+
+define.classMethod('findAndRemove', {callback: true, promise:true});
+
+function decorateWithWriteConcern(command, self, options) {
+ // Get the server capabilities
+ var capabilities = self.s.topology.capabilities();
+ // Do we support write concerns on commands (3.4 and higher)
+ if(capabilities && capabilities.commandsTakeWriteConcern) {
+ // Get the write concern settings
+ var finalOptions = writeConcern(shallowClone(options), self.s.db, self, options);
+ // Add the write concern to the command
+ if(finalOptions.writeConcern) {
+ command.writeConcern = finalOptions.writeConcern;
+ }
+ }
+}
+
+function decorateWithCollation(command, self, options) {
+ // Get the server capabilities
+ var capabilities = self.s.topology.capabilities();
+ // Do we support collation (3.4 and higher)
+ if(capabilities && capabilities.commandsTakeCollation) {
+ if(options.collation && typeof options.collation == 'object') {
+ command.collation = options.collation;
+ }
+ }
+}
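Both helpers above gate on the topology's reported capabilities so that older servers are never sent fields they would reject. A generalized standalone sketch of that pattern, with the capabilities object stubbed (the `decorateIfCapable` helper is illustrative, not driver API):

```javascript
// Add a field to a command document only when the (stubbed) topology
// capabilities report the matching server feature, mirroring
// decorateWithCollation / decorateWithWriteConcern.
function decorateIfCapable(command, capabilities, flag, field, value) {
  if (capabilities && capabilities[flag] && value != null) {
    command[field] = value;
  }
  return command;
}
```

Guarding on `capabilities` being truthy also covers the not-yet-connected case, where `topology.capabilities()` can return nothing.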
+
+/**
+ * Execute an aggregation framework pipeline against the collection, needs MongoDB >= 2.2
+ * @method
+ * @param {object} pipeline Array containing all the aggregation framework commands for the execution.
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {object} [options.cursor=null] Return the query as a cursor; on MongoDB 2.6 or higher this is a real cursor, on earlier versions an emulated one.
+ * @param {number} [options.cursor.batchSize=null] The batchSize for the cursor
+ * @param {boolean} [options.explain=false] Explain returns the aggregation execution plan (requires MongoDB 2.6 or higher).
+ * @param {boolean} [options.allowDiskUse=false] allowDiskUse lets the server know if it can use disk to store temporary results for the aggregation (requires MongoDB 2.6 or higher).
+ * @param {number} [options.maxTimeMS=null] maxTimeMS specifies a cumulative time limit in milliseconds for processing operations on the cursor. MongoDB interrupts the operation at the earliest following interrupt point.
+ * @param {boolean} [options.bypassDocumentValidation=false] Allow driver to bypass schema validation in MongoDB 3.2 or higher.
+ * @param {boolean} [options.raw=false] Return document results as raw BSON buffers.
+ * @param {boolean} [options.promoteLongs=true] Promotes Long values to number if they fit inside the 53 bits resolution.
+ * @param {boolean} [options.promoteValues=true] Promotes BSON values to native types where possible, set to false to only receive wrapper types.
+ * @param {boolean} [options.promoteBuffers=false] Promotes Binary BSON values to native Node Buffers.
+ * @param {object} [options.collation=null] Specify collation (MongoDB 3.4 or higher) settings for update operation (see 3.4 documentation for available fields).
+ * @param {Collection~resultCallback} callback The command result callback
+ * @return {(null|AggregationCursor)}
+ */
+Collection.prototype.aggregate = function(pipeline, options, callback) {
+ var self = this;
+
+ if(Array.isArray(pipeline)) {
+ // Set up callback if one is provided
+ if(typeof options == 'function') {
+ callback = options;
+ options = {};
+ }
+
+ // If we have no options or callback we are doing
+ // a cursor based aggregation
+ if(options == null && callback == null) {
+ options = {};
+ }
+ } else {
+ // Aggregation pipeline passed as arguments on the method
+ var args = Array.prototype.slice.call(arguments, 0);
+ // Get the callback
+ callback = args.pop();
+ // Get the possible options object
+ var opts = args[args.length - 1];
+ // If it contains any of the admissible options, pop it off the args
+ options = opts && (opts.readPreference
+ || opts.explain || opts.cursor || opts.out
+ || opts.maxTimeMS || opts.allowDiskUse) ? args.pop() : {};
+ // The leftover arguments form the pipeline
+ pipeline = args;
+ }
+
+ // Ignore readConcern option
+ var ignoreReadConcern = false;
+
+ // Build the command
+ var command = { aggregate : this.s.name, pipeline : pipeline};
+
+ // If out was specified
+ if(typeof options.out == 'string') {
+ pipeline.push({$out: options.out});
+ // Ignore read concern
+ ignoreReadConcern = true;
+ } else if(pipeline.length > 0 && pipeline[pipeline.length - 1]['$out']) {
+ ignoreReadConcern = true;
+ }
+
+ // Decorate command with writeConcern if out has been specified
+ if(pipeline.length > 0 && pipeline[pipeline.length - 1]['$out']) {
+ decorateWithWriteConcern(command, self, options);
+ }
+
+ // Have we specified collation
+ decorateWithCollation(command, self, options);
+
+ // If we have bypassDocumentValidation set
+ if(typeof options.bypassDocumentValidation == 'boolean') {
+ command.bypassDocumentValidation = options.bypassDocumentValidation;
+ }
+
+ // Do we have a readConcern specified
+ if(!ignoreReadConcern && this.s.readConcern) {
+ command.readConcern = this.s.readConcern;
+ }
+
+ // If we have allowDiskUse defined
+ if(options.allowDiskUse) command.allowDiskUse = options.allowDiskUse;
+ if(typeof options.maxTimeMS == 'number') command.maxTimeMS = options.maxTimeMS;
+
+ options = shallowClone(options);
+ // Ensure we have the right read preference inheritance
+ options = getReadPreference(this, options, this.s.db, this);
+
+ // If explain has been specified add it
+ if(options.explain) command.explain = options.explain;
+
+ // Validate that cursor options is valid
+ if(options.cursor != null && typeof options.cursor != 'object') {
+ throw toError('cursor options must be an object');
+ }
+
+ // promiseLibrary
+ options.promiseLibrary = this.s.promiseLibrary;
+
+ // Set the AggregationCursor constructor
+ options.cursorFactory = AggregationCursor;
+ if(typeof callback != 'function') {
+ if(!this.s.topology.capabilities()) {
+ throw new MongoError('cannot connect to server');
+ }
+
+ if(this.s.topology.capabilities().hasAggregationCursor) {
+ options.cursor = options.cursor || { batchSize : 1000 };
+ command.cursor = options.cursor;
+ }
+
+ // Allow disk usage command
+ if(typeof options.allowDiskUse == 'boolean') command.allowDiskUse = options.allowDiskUse;
+ if(typeof options.maxTimeMS == 'number') command.maxTimeMS = options.maxTimeMS;
+
+ // Execute the cursor
+ return this.s.topology.cursor(this.s.namespace, command, options);
+ }
+
+ var cursor = null;
+ // If a cursor option was requested, return a cursor directly
+ if(options.cursor) {
+ return this.s.topology.cursor(this.s.namespace, command, options);
+ }
+
+ // Execute the command
+ this.s.db.command(command, options, function(err, result) {
+ if(err) {
+ handleCallback(callback, err);
+ } else if(result['err'] || result['errmsg']) {
+ handleCallback(callback, toError(result));
+ } else if(typeof result == 'object' && result['serverPipeline']) {
+ handleCallback(callback, null, result['serverPipeline']);
+ } else if(typeof result == 'object' && result['stages']) {
+ handleCallback(callback, null, result['stages']);
+ } else {
+ handleCallback(callback, null, result.result);
+ }
+ });
+}
+
+define.classMethod('aggregate', {callback: true, promise:false});
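The varargs handling at the top of `aggregate` can be hard to follow: stages may be passed as a single array or as individual arguments, the trailing callback is popped first, and the options object is detected by probing for admissible option keys. A minimal standalone sketch of that parsing (the helper name `parseAggregateArgs` is illustrative, not part of the driver API):

```javascript
// Sketch of aggregate()'s varargs parsing: pop the callback, then treat the
// last remaining argument as options only if it carries an admissible key.
function parseAggregateArgs() {
  var args = Array.prototype.slice.call(arguments, 0);
  // Get the callback
  var callback = args.pop();
  // Get the possible options object
  var opts = args[args.length - 1];
  // If it contains any of the admissible options pop it off the args
  var options = opts && (opts.readPreference
    || opts.explain || opts.cursor || opts.out
    || opts.maxTimeMS || opts.allowDiskUse) ? args.pop() : {};
  // Left over arguments are the pipeline stages
  return { pipeline: args, options: options, callback: callback };
}

// Varargs form: two stages, then an options object, then a callback
var parsed = parseAggregateArgs(
  { $match: { status: 'A' } },
  { $group: { _id: '$cust_id', total: { $sum: '$amount' } } },
  { allowDiskUse: true },
  function() {}
);
// parsed.pipeline has the two stages; parsed.options.allowDiskUse === true
```

Note that a plain document such as `{ $match: ... }` contains none of the admissible keys, so it stays in the pipeline rather than being mistaken for options.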
+
+/**
+ * The callback format for results
+ * @callback Collection~parallelCollectionScanCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Cursor[]} cursors A list of cursors returned allowing for parallel reading of collection.
+ */
+
+/**
+ * Return N parallel cursors for a collection to allow parallel reading of the entire collection. There are
+ * no ordering guarantees for returned results.
+ * @method
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {number} [options.batchSize=null] Set the batchSize for the getMoreCommand when iterating over the query results.
+ * @param {number} [options.numCursors=1] The maximum number of parallel command cursors to return (the number of returned cursors will be in the range 1 to numCursors)
+ * @param {boolean} [options.raw=false] Return all BSON documents as Raw Buffer documents.
+ * @param {Collection~parallelCollectionScanCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.parallelCollectionScan = function(options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {numCursors: 1};
+ // Set number of cursors to 1
+ options.numCursors = options.numCursors || 1;
+ options.batchSize = options.batchSize || 1000;
+
+ options = shallowClone(options);
+ // Ensure we have the right read preference inheritance
+ options = getReadPreference(this, options, this.s.db, this);
+
+ // Add a promiseLibrary
+ options.promiseLibrary = this.s.promiseLibrary;
+
+ // Execute using callback
+ if(typeof callback == 'function') return parallelCollectionScan(self, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ parallelCollectionScan(self, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var parallelCollectionScan = function(self, options, callback) {
+ // Create command object
+ var commandObject = {
+ parallelCollectionScan: self.s.name
+ , numCursors: options.numCursors
+ }
+
+ // Do we have a readConcern specified
+ if(self.s.readConcern) {
+ commandObject.readConcern = self.s.readConcern;
+ }
+
+ // Store the raw value
+ var raw = options.raw;
+ delete options['raw'];
+
+ // Execute the command
+ self.s.db.command(commandObject, options, function(err, result) {
+ if(err) return handleCallback(callback, err, null);
+ if(result == null) return handleCallback(callback, new Error("no result returned for parallelCollectionScan"), null);
+
+ var cursors = [];
+ // Add the raw back to the option
+ if(raw) options.raw = raw;
+ // Create command cursors for each item
+ for(var i = 0; i < result.cursors.length; i++) {
+ var rawId = result.cursors[i].cursor.id;
+ // Convert cursorId to Long if needed
+ var cursorId = typeof rawId == 'number' ? Long.fromNumber(rawId) : rawId;
+
+ // Command cursor options
+ var cmd = {
+ batchSize: options.batchSize
+ , cursorId: cursorId
+ , items: result.cursors[i].cursor.firstBatch
+ }
+
+ // Add a command cursor
+ cursors.push(self.s.topology.cursor(self.s.namespace, cursorId, options));
+ }
+
+ handleCallback(callback, null, cursors);
+ });
+}
+
+define.classMethod('parallelCollectionScan', {callback: true, promise:true});
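The option defaulting and command shape used by `parallelCollectionScan` can be summarized in a few lines. A sketch under the defaults shown above (`buildScanCommand` is an illustrative name, not a driver function):

```javascript
// Sketch of how parallelCollectionScan defaults its options and builds the
// server command: numCursors defaults to 1, batchSize to 1000, and only the
// collection name and numCursors go into the command itself.
function buildScanCommand(name, options) {
  options = options || {};
  options.numCursors = options.numCursors || 1;
  options.batchSize = options.batchSize || 1000;
  return { parallelCollectionScan: name, numCursors: options.numCursors };
}

var opts = {};
var cmd = buildScanCommand('logs', opts);
// cmd => { parallelCollectionScan: 'logs', numCursors: 1 }; opts.batchSize => 1000
```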
+
+/**
+ * Execute the geoNear command to search for items in the collection
+ *
+ * @method
+ * @param {number} x Point to search on the x axis, ensure the indexes are ordered in the same order.
+ * @param {number} y Point to search on the y axis, ensure the indexes are ordered in the same order.
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {number} [options.num=null] Max number of results to return.
+ * @param {number} [options.minDistance=null] Include results starting at minDistance from a point (2.6 or higher)
+ * @param {number} [options.maxDistance=null] Include results up to maxDistance from the point.
+ * @param {number} [options.distanceMultiplier=null] Include a value to multiply the distances with allowing for range conversions.
+ * @param {object} [options.query=null] Filter the results by a query.
+ * @param {boolean} [options.spherical=false] Perform query using a spherical model.
+ * @param {boolean} [options.uniqueDocs=false] The closest location in a document to the center of the search region will always be returned (MongoDB 2.x or higher).
+ * @param {boolean} [options.includeLocs=false] Include the location data fields in the top level of the results (MongoDB 2.x or higher).
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.geoNear = function(x, y, options, callback) {
+ var self = this;
+ var point = typeof(x) == 'object' && x
+ , args = Array.prototype.slice.call(arguments, point?1:2);
+
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ // Fetch all commands
+ options = args.length ? args.shift() || {} : {};
+
+ // Execute using callback
+ if(typeof callback == 'function') return geoNear(self, x, y, point, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ geoNear(self, x, y, point, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var geoNear = function(self, x, y, point, options, callback) {
+ // Build command object
+ var commandObject = {
+ geoNear:self.s.name,
+ near: point || [x, y]
+ }
+
+ options = shallowClone(options);
+ // Ensure we have the right read preference inheritance
+ options = getReadPreference(self, options, self.s.db, self);
+
+ // Exclude readPreference and existing options to prevent user from
+ // shooting themselves in the foot
+ var exclude = {
+ readPreference: true,
+ geoNear: true,
+ near: true
+ };
+
+ // Filter out any excluded objects
+ commandObject = decorateCommand(commandObject, options, exclude);
+
+ // Do we have a readConcern specified
+ if(self.s.readConcern) {
+ commandObject.readConcern = self.s.readConcern;
+ }
+
+ // Have we specified collation
+ decorateWithCollation(commandObject, self, options);
+
+ // Execute the command
+ self.s.db.command(commandObject, options, function (err, res) {
+ if(err) return handleCallback(callback, err);
+ if(res.err || res.errmsg) return handleCallback(callback, toError(res));
+ // should we only be returning res.results here? Not sure if the user
+ // should see the other return information
+ handleCallback(callback, null, res);
+ });
+}
+
+define.classMethod('geoNear', {callback: true, promise:true});
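`geoNear` copies user options onto the command via `decorateCommand`, with an exclusion map so users cannot clobber the command name or the coordinates. A standalone sketch of that pattern (an illustrative reimplementation, not the driver's own `decorateCommand`):

```javascript
// Copy each option onto the command unless its key is in the exclusion map,
// so 'near', 'geoNear' and 'readPreference' from options cannot override
// the values the driver already set.
function decorateCommandSketch(command, options, exclude) {
  for (var name in options) {
    if (!exclude[name]) command[name] = options[name];
  }
  return command;
}

var command = decorateCommandSketch(
  { geoNear: 'places', near: [-73.9667, 40.78] },
  { spherical: true, near: [0, 0], readPreference: 'secondary' },
  { readPreference: true, geoNear: true, near: true }
);
// 'spherical' is copied through; 'near' and 'readPreference' are filtered out
```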
+
+/**
+ * Execute a geo search using a geo haystack index on a collection.
+ *
+ * @method
+ * @param {number} x Point to search on the x axis, ensure the indexes are ordered in the same order.
+ * @param {number} y Point to search on the y axis, ensure the indexes are ordered in the same order.
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {number} [options.maxDistance=null] Include results up to maxDistance from the point.
+ * @param {object} [options.search=null] Filter the results by a query.
+ * @param {number} [options.limit=false] Max number of results to return.
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.geoHaystackSearch = function(x, y, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 2);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ // Fetch all commands
+ options = args.length ? args.shift() || {} : {};
+
+ // Execute using callback
+ if(typeof callback == 'function') return geoHaystackSearch(self, x, y, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ geoHaystackSearch(self, x, y, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var geoHaystackSearch = function(self, x, y, options, callback) {
+ // Build command object
+ var commandObject = {
+ geoSearch: self.s.name,
+ near: [x, y]
+ }
+
+ // Remove read preference from hash if it exists
+ commandObject = decorateCommand(commandObject, options, {readPreference: true});
+
+ options = shallowClone(options);
+ // Ensure we have the right read preference inheritance
+ options = getReadPreference(self, options, self.s.db, self);
+
+ // Do we have a readConcern specified
+ if(self.s.readConcern) {
+ commandObject.readConcern = self.s.readConcern;
+ }
+
+ // Execute the command
+ self.s.db.command(commandObject, options, function (err, res) {
+ if(err) return handleCallback(callback, err);
+ if(res.err || res.errmsg) handleCallback(callback, utils.toError(res));
+ // should we only be returning res.results here? Not sure if the user
+ // should see the other return information
+ handleCallback(callback, null, res);
+ });
+}
+
+define.classMethod('geoHaystackSearch', {callback: true, promise:true});
+
+/**
+ * Group function helper
+ * @ignore
+ */
+// var groupFunction = function () {
+// var c = db[ns].find(condition);
+// var map = new Map();
+// var reduce_function = reduce;
+//
+// while (c.hasNext()) {
+// var obj = c.next();
+// var key = {};
+//
+// for (var i = 0, len = keys.length; i < len; ++i) {
+// var k = keys[i];
+// key[k] = obj[k];
+// }
+//
+// var aggObj = map.get(key);
+//
+// if (aggObj == null) {
+// var newObj = Object.extend({}, key);
+// aggObj = Object.extend(newObj, initial);
+// map.put(key, aggObj);
+// }
+//
+// reduce_function(obj, aggObj);
+// }
+//
+// return { "result": map.values() };
+// }.toString();
+var groupFunction = 'function () {\nvar c = db[ns].find(condition);\nvar map = new Map();\nvar reduce_function = reduce;\n\nwhile (c.hasNext()) {\nvar obj = c.next();\nvar key = {};\n\nfor (var i = 0, len = keys.length; i < len; ++i) {\nvar k = keys[i];\nkey[k] = obj[k];\n}\n\nvar aggObj = map.get(key);\n\nif (aggObj == null) {\nvar newObj = Object.extend({}, key);\naggObj = Object.extend(newObj, initial);\nmap.put(key, aggObj);\n}\n\nreduce_function(obj, aggObj);\n}\n\nreturn { "result": map.values() };\n}';
+
+/**
+ * Run a group command across a collection
+ *
+ * @method
+ * @param {(object|array|function|code)} keys An object, array or function expressing the keys to group by.
+ * @param {object} condition An optional condition that must be true for a row to be considered.
+ * @param {object} initial Initial value of the aggregation counter object.
+ * @param {(function|Code)} reduce The reduce function aggregates (reduces) the objects iterated
+ * @param {(function|Code)} finalize An optional function to be run on each item in the result set just before the item is returned.
+ * @param {boolean} command Specify if you wish to run using the internal group command or using eval; default is true.
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.group = function(keys, condition, initial, reduce, finalize, command, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 3);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ // Fetch all commands
+ reduce = args.length ? args.shift() : null;
+ finalize = args.length ? args.shift() : null;
+ command = args.length ? args.shift() : null;
+ options = args.length ? args.shift() || {} : {};
+
+ // Make sure we are backward compatible
+ if(!(typeof finalize == 'function')) {
+ command = finalize;
+ finalize = null;
+ }
+
+ if (!Array.isArray(keys) && keys instanceof Object && typeof(keys) !== 'function' && !(keys instanceof Code)) {
+ keys = Object.keys(keys);
+ }
+
+ if(typeof reduce === 'function') {
+ reduce = reduce.toString();
+ }
+
+ if(typeof finalize === 'function') {
+ finalize = finalize.toString();
+ }
+
+ // Set up the command as default
+ command = command == null ? true : command;
+
+ // Execute using callback
+ if(typeof callback == 'function') return group(self, keys, condition, initial, reduce, finalize, command, options, callback);
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ group(self, keys, condition, initial, reduce, finalize, command, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var group = function(self, keys, condition, initial, reduce, finalize, command, options, callback) {
+ // Execute using the command
+ if(command) {
+ var reduceFunction = reduce instanceof Code
+ ? reduce
+ : new Code(reduce);
+
+ var selector = {
+ group: {
+ 'ns': self.s.name
+ , '$reduce': reduceFunction
+ , 'cond': condition
+ , 'initial': initial
+ , 'out': "inline"
+ }
+ };
+
+ // if finalize is defined
+ if(finalize != null) selector.group['finalize'] = finalize;
+ // Set up group selector
+ if ('function' === typeof keys || keys instanceof Code) {
+ selector.group.$keyf = keys instanceof Code
+ ? keys
+ : new Code(keys);
+ } else {
+ var hash = {};
+ keys.forEach(function (key) {
+ hash[key] = 1;
+ });
+ selector.group.key = hash;
+ }
+
+ options = shallowClone(options);
+ // Ensure we have the right read preference inheritance
+ options = getReadPreference(self, options, self.s.db, self);
+
+ // Do we have a readConcern specified
+ if(self.s.readConcern) {
+ selector.readConcern = self.s.readConcern;
+ }
+
+ // Have we specified collation
+ decorateWithCollation(selector, self, options);
+
+ // Execute command
+ self.s.db.command(selector, options, function(err, result) {
+ if(err) return handleCallback(callback, err, null);
+ handleCallback(callback, null, result.retval);
+ });
+ } else {
+ // Create execution scope
+ var scope = reduce != null && reduce instanceof Code
+ ? reduce.scope
+ : {};
+
+ scope.ns = self.s.name;
+ scope.keys = keys;
+ scope.condition = condition;
+ scope.initial = initial;
+
+ // Pass in the function text to execute within mongodb.
+ var groupfn = groupFunction.replace(/ reduce;/, reduce.toString() + ';');
+
+ self.s.db.eval(new Code(groupfn, scope), function (err, results) {
+ if (err) return handleCallback(callback, err, null);
+ handleCallback(callback, null, results.result || results);
+ });
+ }
+}
+
+define.classMethod('group', {callback: true, promise:true});
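The key selector built inside `group` takes one of two shapes: a function (or Code) becomes the `$keyf` field, while an array of key names becomes a `{key: 1, ...}` hash. A standalone sketch of that branch (`buildGroupKeySelector` is an illustrative name):

```javascript
// Sketch of group()'s key handling: functions are serialized into $keyf,
// arrays of field names are turned into a {field: 1} projection-style hash.
function buildGroupKeySelector(keys) {
  if (typeof keys === 'function') {
    return { $keyf: keys.toString() };
  }
  var hash = {};
  keys.forEach(function(key) { hash[key] = 1; });
  return { key: hash };
}

var byFields = buildGroupKeySelector(['cust_id', 'status']);
// => { key: { cust_id: 1, status: 1 } }
var byFunction = buildGroupKeySelector(function(doc) { return { d: doc.day }; });
// => { $keyf: <source text of the function> }
```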
+
+/**
+ * Functions that are passed as scope args must
+ * be converted to Code instances.
+ * @ignore
+ */
+function processScope (scope) {
+ if(!isObject(scope) || scope instanceof ObjectID) {
+ return scope;
+ }
+
+ var keys = Object.keys(scope);
+ var i = keys.length;
+ var key;
+ var new_scope = {};
+
+ while (i--) {
+ key = keys[i];
+ if ('function' == typeof scope[key]) {
+ new_scope[key] = new Code(String(scope[key]));
+ } else {
+ new_scope[key] = processScope(scope[key]);
+ }
+ }
+
+ return new_scope;
+}
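`processScope` above wraps any function found in a scope document in a `Code` instance, recursing into nested objects while leaving plain values alone. The behavior can be shown with a minimal stand-in `Code` class (used here only so the sketch runs without the bson package):

```javascript
// Minimal stand-in for bson's Code type, for illustration only.
function Code(code) { this.code = code; }

// Sketch of processScope: functions become Code instances, nested objects
// are processed recursively, primitives pass through unchanged.
function processScopeSketch(scope) {
  if (scope === null || typeof scope !== 'object') return scope;
  var out = {};
  Object.keys(scope).forEach(function(key) {
    out[key] = typeof scope[key] === 'function'
      ? new Code(String(scope[key]))
      : processScopeSketch(scope[key]);
  });
  return out;
}

var processed = processScopeSketch({
  multiplier: 2,
  helper: function(x) { return x * 2; },
  nested: { f: function() {} }
});
// multiplier stays 2; helper and nested.f become Code instances
```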
+
+/**
+ * Run Map Reduce across a collection. Be aware that the inline option for out will return an array of results, not a collection.
+ *
+ * @method
+ * @param {(function|string)} map The mapping function.
+ * @param {(function|string)} reduce The reduce function.
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {object} [options.out=null] Sets the output target for the map reduce job. *{inline:1} | {replace:'collectionName'} | {merge:'collectionName'} | {reduce:'collectionName'}*
+ * @param {object} [options.query=null] Query filter object.
+ * @param {object} [options.sort=null] Sorts the input objects using this key. Useful for optimization, like sorting by the emit key for fewer reduces.
+ * @param {number} [options.limit=null] Number of objects to return from collection.
+ * @param {boolean} [options.keeptemp=false] Keep temporary data.
+ * @param {(function|string)} [options.finalize=null] Finalize function.
+ * @param {object} [options.scope=null] Can pass in variables that can be access from map/reduce/finalize.
+ * @param {boolean} [options.jsMode=false] It is possible to make the execution stay in JS. Provided in MongoDB > 2.0.X.
+ * @param {boolean} [options.verbose=false] Provide statistics on job execution time.
+ * @param {boolean} [options.bypassDocumentValidation=false] Allow driver to bypass schema validation in MongoDB 3.2 or higher.
+ * @param {Collection~resultCallback} [callback] The command result callback
+ * @throws {MongoError}
+ * @return {Promise} returns Promise if no callback passed
+ */
+Collection.prototype.mapReduce = function(map, reduce, options, callback) {
+ var self = this;
+ if('function' === typeof options) callback = options, options = {};
+ // Out must always be defined (make sure we don't break weirdly on pre-1.8 servers)
+ if(null == options.out) {
+ throw new Error("the out option parameter must be defined, see mongodb docs for possible values");
+ }
+
+ if('function' === typeof map) {
+ map = map.toString();
+ }
+
+ if('function' === typeof reduce) {
+ reduce = reduce.toString();
+ }
+
+ if('function' === typeof options.finalize) {
+ options.finalize = options.finalize.toString();
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') return mapReduce(self, map, reduce, options, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ mapReduce(self, map, reduce, options, function(err, r, r1) {
+ if(err) return reject(err);
+ if(!r1) return resolve(r);
+ resolve({results: r, stats: r1});
+ });
+ });
+}
+
+var mapReduce = function(self, map, reduce, options, callback) {
+ var mapCommandHash = {
+ mapreduce: self.s.name
+ , map: map
+ , reduce: reduce
+ };
+
+ // Add any other options passed in
+ for(var n in options) {
+ if('scope' == n) {
+ mapCommandHash[n] = processScope(options[n]);
+ } else {
+ mapCommandHash[n] = options[n];
+ }
+ }
+
+ options = shallowClone(options);
+ // Ensure we have the right read preference inheritance
+ options = getReadPreference(self, options, self.s.db, self);
+
+ // If we have a read preference and inline is not set as output fail hard
+ if((options.readPreference != false && options.readPreference != 'primary')
+ && options['out'] && (options['out'].inline != 1 && options['out'] != 'inline')) {
+ // Force readPreference to primary
+ options.readPreference = 'primary';
+ // Decorate command with writeConcern if supported
+ decorateWithWriteConcern(mapCommandHash, self, options);
+ } else if(self.s.readConcern) {
+ mapCommandHash.readConcern = self.s.readConcern;
+ }
+
+ // Is bypassDocumentValidation specified
+ if(typeof options.bypassDocumentValidation == 'boolean') {
+ mapCommandHash.bypassDocumentValidation = options.bypassDocumentValidation;
+ }
+
+ // Have we specified collation
+ decorateWithCollation(mapCommandHash, self, options);
+
+ // Execute command
+ self.s.db.command(mapCommandHash, {readPreference:options.readPreference}, function (err, result) {
+ if(err) return handleCallback(callback, err);
+ // Check if we have an error
+ if(1 != result.ok || result.err || result.errmsg) {
+ return handleCallback(callback, toError(result));
+ }
+
+ // Create statistics value
+ var stats = {};
+ if(result.timeMillis) stats['processtime'] = result.timeMillis;
+ if(result.counts) stats['counts'] = result.counts;
+ if(result.timing) stats['timing'] = result.timing;
+
+ // invoked with inline?
+ if(result.results) {
+ // If we wish for no verbosity
+ if(options['verbose'] == null || !options['verbose']) {
+ return handleCallback(callback, null, result.results);
+ }
+
+ return handleCallback(callback, null, result.results, stats);
+ }
+
+ // The returned collection
+ var collection = null;
+
+ // If we have an object it's a different db
+ if(result.result != null && typeof result.result == 'object') {
+ var doc = result.result;
+ collection = self.s.db.db(doc.db).collection(doc.collection);
+ } else {
+ // Create a collection object that wraps the result collection
+ collection = self.s.db.collection(result.result)
+ }
+
+ // If we wish for no verbosity
+ if(options['verbose'] == null || !options['verbose']) {
+ return handleCallback(callback, err, collection);
+ }
+
+ // Return stats as third set of values
+ handleCallback(callback, err, collection, stats);
+ });
+}
+
+define.classMethod('mapReduce', {callback: true, promise:true});
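The read preference handling inside `mapReduce` encodes one rule: a non-primary read preference is only honored when the output is inline, because any other `out` target writes to a collection and must run on the primary. A sketch of that predicate (`effectiveMapReduceReadPreference` is an illustrative name):

```javascript
// Mirror of the check in mapReduce above: if out is not inline, a secondary
// read preference is forced back to 'primary'.
function effectiveMapReduceReadPreference(readPreference, out) {
  var inline = out && (out.inline === 1 || out === 'inline');
  if (readPreference && readPreference !== 'primary' && !inline) {
    return 'primary'; // non-inline output writes data, so it must run on the primary
  }
  return readPreference;
}

effectiveMapReduceReadPreference('secondary', { replace: 'results' }); // => 'primary'
effectiveMapReduceReadPreference('secondary', { inline: 1 });          // => 'secondary'
```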
+
+/**
+ * Initiate an out of order batch write operation. All operations will be buffered into insert/update/remove commands executed out of order.
+ *
+ * @method
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @return {UnorderedBulkOperation}
+ */
+Collection.prototype.initializeUnorderedBulkOp = function(options) {
+ options = options || {};
+ options.promiseLibrary = this.s.promiseLibrary;
+ return unordered(this.s.topology, this, options);
+}
+
+define.classMethod('initializeUnorderedBulkOp', {callback: false, promise:false, returns: [ordered.UnorderedBulkOperation]});
+
+/**
+ * Initiate an in order bulk write operation; operations will be serially executed in the order they are added, creating a new operation for each switch in types.
+ *
+ * @method
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @return {OrderedBulkOperation}
+ */
+Collection.prototype.initializeOrderedBulkOp = function(options) {
+ options = options || {};
+ options.promiseLibrary = this.s.promiseLibrary;
+ return ordered(this.s.topology, this, options);
+}
+
+define.classMethod('initializeOrderedBulkOp', {callback: false, promise:false, returns: [ordered.OrderedBulkOperation]});
+
+// Get write concern
+var writeConcern = function(target, db, col, options) {
+ if(options.w != null || options.j != null || options.fsync != null) {
+ var opts = {};
+ if(options.w != null) opts.w = options.w;
+ if(options.wtimeout != null) opts.wtimeout = options.wtimeout;
+ if(options.j != null) opts.j = options.j;
+ if(options.fsync != null) opts.fsync = options.fsync;
+ target.writeConcern = opts;
+ } else if(col.writeConcern.w != null || col.writeConcern.j != null || col.writeConcern.fsync != null) {
+ target.writeConcern = col.writeConcern;
+ } else if(db.writeConcern.w != null || db.writeConcern.j != null || db.writeConcern.fsync != null) {
+ target.writeConcern = db.writeConcern;
+ }
+
+ return target
+}
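The `writeConcern` helper above resolves the write concern with a fixed precedence: per-operation options first, then collection-level defaults, then db-level defaults. A standalone sketch of the same precedence, taking the collection and db write concerns as plain objects (`resolveWriteConcern` is an illustrative name):

```javascript
// Precedence sketch: operation options > collection write concern > db
// write concern. Returns undefined when no level defines a write concern.
function resolveWriteConcern(options, collWC, dbWC) {
  if (options.w != null || options.j != null || options.fsync != null) {
    var opts = {};
    if (options.w != null) opts.w = options.w;
    if (options.wtimeout != null) opts.wtimeout = options.wtimeout;
    if (options.j != null) opts.j = options.j;
    if (options.fsync != null) opts.fsync = options.fsync;
    return opts;
  }
  if (collWC.w != null || collWC.j != null || collWC.fsync != null) return collWC;
  if (dbWC.w != null || dbWC.j != null || dbWC.fsync != null) return dbWC;
  return undefined; // no write concern at any level: leave the command untouched
}

var wc = resolveWriteConcern({ w: 'majority' }, { w: 1 }, { w: 0 });
// => { w: 'majority' } — operation options win over collection and db defaults
```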
+
+// Figure out the read preference
+var getReadPreference = function(self, options, db, coll) {
+ var r = null
+ if(options.readPreference) {
+ r = options.readPreference
+ } else if(self.s.readPreference) {
+ r = self.s.readPreference
+ } else if(db.s.readPreference) {
+ r = db.s.readPreference;
+ }
+
+ if(r instanceof ReadPreference) {
+ options.readPreference = new CoreReadPreference(r.mode, r.tags, {maxStalenessMS: r.maxStalenessMS});
+ } else if(typeof r == 'string') {
+ options.readPreference = new CoreReadPreference(r);
+ } else if(r && !(r instanceof ReadPreference) && typeof r == 'object') {
+ var mode = r.mode || r.preference;
+ if (mode && typeof mode == 'string') {
+ options.readPreference = new CoreReadPreference(mode, r.tags, {maxStalenessMS: r.maxStalenessMS});
+ }
+ }
+
+ return options;
+}
+
+var testForFields = {
+ limit: 1, sort: 1, fields:1, skip: 1, hint: 1, explain: 1, snapshot: 1, timeout: 1, tailable: 1, tailableRetryInterval: 1
+ , numberOfRetries: 1, awaitdata: 1, awaitData: 1, exhaust: 1, batchSize: 1, returnKey: 1, maxScan: 1, min: 1, max: 1, showDiskLoc: 1
+ , comment: 1, raw: 1, readPreference: 1, partial: 1, read: 1, dbName: 1, oplogReplay: 1, connection: 1, maxTimeMS: 1, transforms: 1
+ , collation: 1
+}
+
+module.exports = Collection;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/command_cursor.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/command_cursor.js
new file mode 100644
index 0000000..c0d86ca
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/command_cursor.js
@@ -0,0 +1,320 @@
+"use strict";
+
+var inherits = require('util').inherits
+ , f = require('util').format
+ , toError = require('./utils').toError
+ , getSingleProperty = require('./utils').getSingleProperty
+ , formattedOrderClause = require('./utils').formattedOrderClause
+ , handleCallback = require('./utils').handleCallback
+ , Logger = require('mongodb-core').Logger
+ , EventEmitter = require('events').EventEmitter
+ , ReadPreference = require('./read_preference')
+ , MongoError = require('mongodb-core').MongoError
+ , Readable = require('stream').Readable || require('readable-stream').Readable
+ , Define = require('./metadata')
+ , CoreCursor = require('./cursor')
+ , Query = require('mongodb-core').Query
+ , CoreReadPreference = require('mongodb-core').ReadPreference;
+
+/**
+ * @fileOverview The **CommandCursor** class is an internal class that embodies a
+ * generalized cursor based on a MongoDB command, allowing iteration over the
+ * results returned. It supports one-by-one document iteration, conversion to an
+ * array, and iteration as a Node 0.10.x or higher stream.
+ *
+ * **CommandCursor cannot be instantiated directly**
+ * @example
+ * var MongoClient = require('mongodb').MongoClient,
+ * test = require('assert');
+ * // Connection url
+ * var url = 'mongodb://localhost:27017/test';
+ * // Connect using MongoClient
+ * MongoClient.connect(url, function(err, db) {
+ * // Create a collection we want to drop later
+ * var col = db.collection('listCollectionsExample1');
+ * // Insert a bunch of documents
+ * col.insert([{a:1, b:1}
+ * , {a:2, b:2}, {a:3, b:3}
+ * , {a:4, b:4}], {w:1}, function(err, result) {
+ * test.equal(null, err);
+ *
+ * // List the database collections available
+ * db.listCollections().toArray(function(err, items) {
+ * test.equal(null, err);
+ * db.close();
+ * });
+ * });
+ * });
+ */
+
+/**
+ * Namespace provided by the stream module.
+ * @external Readable
+ */
+
+/**
+ * Creates a new Command Cursor instance (INTERNAL TYPE, do not instantiate directly)
+ * @class CommandCursor
+ * @extends external:Readable
+ * @fires CommandCursor#data
+ * @fires CommandCursor#end
+ * @fires CommandCursor#close
+ * @fires CommandCursor#readable
+ * @return {CommandCursor} a CommandCursor instance.
+ */
+var CommandCursor = function(bson, ns, cmd, options, topology, topologyOptions) {
+ CoreCursor.apply(this, Array.prototype.slice.call(arguments, 0));
+ var self = this;
+ var state = CommandCursor.INIT;
+ var streamOptions = {};
+
+ // MaxTimeMS
+ var maxTimeMS = null;
+
+ // Get the promiseLibrary
+ var promiseLibrary = options.promiseLibrary;
+
+ // No promise library selected, fall back to the global Promise or es6-promise
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // Set up
+ Readable.call(this, {objectMode: true});
+
+ // Internal state
+ this.s = {
+ // MaxTimeMS
+ maxTimeMS: maxTimeMS
+ // State
+ , state: state
+ // Stream options
+ , streamOptions: streamOptions
+ // BSON
+ , bson: bson
+ // Namespace
+ , ns: ns
+ // Command
+ , cmd: cmd
+ // Options
+ , options: options
+ // Topology
+ , topology: topology
+ // Topology Options
+ , topologyOptions: topologyOptions
+ // Promise library
+ , promiseLibrary: promiseLibrary
+ }
+}
+
+/**
+ * CommandCursor stream data event, fired for each document in the cursor.
+ *
+ * @event CommandCursor#data
+ * @type {object}
+ */
+
+/**
+ * CommandCursor stream end event
+ *
+ * @event CommandCursor#end
+ * @type {null}
+ */
+
+/**
+ * CommandCursor stream close event
+ *
+ * @event CommandCursor#close
+ * @type {null}
+ */
+
+/**
+ * CommandCursor stream readable event
+ *
+ * @event CommandCursor#readable
+ * @type {null}
+ */
+
+// Inherit from Readable
+inherits(CommandCursor, Readable);
+
+// Set the methods to inherit from prototype
+var methodsToInherit = ['_next', 'next', 'each', 'forEach', 'toArray'
+ , 'rewind', 'bufferedCount', 'readBufferedDocuments', 'close', 'isClosed', 'kill'
+ , '_find', '_getmore', '_killcursor', 'isDead', 'explain', 'isNotified', 'isKilled'];
+
+// Only inherit the types we need
+for(var i = 0; i < methodsToInherit.length; i++) {
+ CommandCursor.prototype[methodsToInherit[i]] = CoreCursor.prototype[methodsToInherit[i]];
+}
+
+var define = CommandCursor.define = new Define('CommandCursor', CommandCursor, true);
+
+/**
+ * Set the ReadPreference for the cursor.
+ * @method
+ * @param {(string|ReadPreference)} readPreference The new read preference for the cursor.
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+CommandCursor.prototype.setReadPreference = function(r) {
+ if(this.s.state == CommandCursor.CLOSED || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ if(this.s.state != CommandCursor.INIT) throw MongoError.create({message: 'cannot change cursor readPreference after cursor has been accessed', driver:true});
+
+ if(r instanceof ReadPreference) {
+ this.s.options.readPreference = new CoreReadPreference(r.mode, r.tags, {maxStalenessMS: r.maxStalenessMS});
+ } else if(typeof r == 'string') {
+ this.s.options.readPreference = new CoreReadPreference(r);
+ } else if(r instanceof CoreReadPreference) {
+ this.s.options.readPreference = r;
+ }
+
+ return this;
+}
+
+define.classMethod('setReadPreference', {callback: false, promise:false, returns: [CommandCursor]});
+
+/**
+ * Set the batch size for the cursor.
+ * @method
+ * @param {number} value The batchSize for the cursor.
+ * @throws {MongoError}
+ * @return {CommandCursor}
+ */
+CommandCursor.prototype.batchSize = function(value) {
+ if(this.s.state == CommandCursor.CLOSED || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ if(typeof value != 'number') throw MongoError.create({message: "batchSize requires an integer", driver:true});
+ if(this.s.cmd.cursor) this.s.cmd.cursor.batchSize = value;
+ this.setCursorBatchSize(value);
+ return this;
+}
+
+define.classMethod('batchSize', {callback: false, promise:false, returns: [CommandCursor]});
+
+/**
+ * Add a maxTimeMS stage to the aggregation pipeline
+ * @method
+ * @param {number} value The maxTimeMS value.
+ * @return {CommandCursor}
+ */
+CommandCursor.prototype.maxTimeMS = function(value) {
+ // maxTimeMS is only honored when the server's wire protocol supports it
+ if(this.s.topology.lastIsMaster().maxWireVersion > 2) {
+ this.s.cmd.maxTimeMS = value;
+ }
+ return this;
+}
+
+define.classMethod('maxTimeMS', {callback: false, promise:false, returns: [CommandCursor]});
+
+CommandCursor.prototype.get = CommandCursor.prototype.toArray;
+
+define.classMethod('get', {callback: true, promise:false});
+
+// Inherited methods
+define.classMethod('toArray', {callback: true, promise:true});
+define.classMethod('each', {callback: true, promise:false});
+define.classMethod('forEach', {callback: true, promise:false});
+define.classMethod('next', {callback: true, promise:true});
+define.classMethod('close', {callback: true, promise:true});
+define.classMethod('isClosed', {callback: false, promise:false, returns: [Boolean]});
+define.classMethod('rewind', {callback: false, promise:false});
+define.classMethod('bufferedCount', {callback: false, promise:false, returns: [Number]});
+define.classMethod('readBufferedDocuments', {callback: false, promise:false, returns: [Array]});
+
+/**
+ * Get the next available document from the cursor, returns null if no more documents are available.
+ * @function CommandCursor.prototype.next
+ * @param {CommandCursor~resultCallback} [callback] The result callback.
+ * @throws {MongoError}
+ * @return {Promise} returns Promise if no callback passed
+ */
+
+/**
+ * The callback format for results
+ * @callback CommandCursor~toArrayResultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {object[]} documents All the documents that satisfy the cursor query.
+ */
+
+/**
+ * Returns an array of documents. The caller is responsible for making sure that there
+ * is enough memory to store the results. Note that the array will only contain partial
+ * results if this cursor has previously been accessed.
+ * @method CommandCursor.prototype.toArray
+ * @param {CommandCursor~toArrayResultCallback} [callback] The result callback.
+ * @throws {MongoError}
+ * @return {Promise} returns Promise if no callback passed
+ */
+
+/**
+ * The callback format for results
+ * @callback CommandCursor~resultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {(object|null)} result The result object if the command was executed successfully.
+ */
+
+/**
+ * Iterates over all the documents for this cursor. As with **{cursor.toArray}**,
+ * not all of the elements will be iterated if this cursor has previously been accessed.
+ * In that case, **{cursor.rewind}** can be used to reset the cursor. However, unlike
+ * **{cursor.toArray}**, the cursor will only hold a maximum of batch size elements
+ * at any given time if batch size is specified. Otherwise, the caller is responsible
+ * for making sure that the entire result fits in memory.
+ * @method CommandCursor.prototype.each
+ * @param {CommandCursor~resultCallback} callback The result callback.
+ * @throws {MongoError}
+ * @return {null}
+ */
+
+/**
+ * Close the cursor, sending a KillCursor command and emitting close.
+ * @method CommandCursor.prototype.close
+ * @param {CommandCursor~resultCallback} [callback] The result callback.
+ * @return {Promise} returns Promise if no callback passed
+ */
+
+/**
+ * Is the cursor closed
+ * @method CommandCursor.prototype.isClosed
+ * @return {boolean}
+ */
+
+/**
+ * Clone the cursor
+ * @function CommandCursor.prototype.clone
+ * @return {CommandCursor}
+ */
+
+/**
+ * Resets the cursor
+ * @function CommandCursor.prototype.rewind
+ * @return {CommandCursor}
+ */
+
+/**
+ * The callback format for the forEach iterator method
+ * @callback CommandCursor~iteratorCallback
+ * @param {Object} doc An emitted document for the iterator
+ */
+
+/**
+ * The callback error format for the forEach iterator method
+ * @callback CommandCursor~endCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ */
+
+/**
+ * Iterates over all the documents for this cursor using the iterator, callback pattern.
+ * @method CommandCursor.prototype.forEach
+ * @param {CommandCursor~iteratorCallback} iterator The iteration callback.
+ * @param {CommandCursor~endCallback} callback The end callback.
+ * @throws {MongoError}
+ * @return {null}
+ */
+
+CommandCursor.INIT = 0;
+CommandCursor.OPEN = 1;
+CommandCursor.CLOSED = 2;
+
+module.exports = CommandCursor;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/cursor.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/cursor.js
new file mode 100644
index 0000000..44431c1
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/cursor.js
@@ -0,0 +1,1182 @@
+"use strict";
+
+var inherits = require('util').inherits
+ , f = require('util').format
+ , formattedOrderClause = require('./utils').formattedOrderClause
+ , handleCallback = require('./utils').handleCallback
+ , ReadPreference = require('./read_preference')
+ , MongoError = require('mongodb-core').MongoError
+ , Readable = require('stream').Readable || require('readable-stream').Readable
+ , Define = require('./metadata')
+ , CoreCursor = require('mongodb-core').Cursor
+ , Map = require('mongodb-core').BSON.Map
+ , Query = require('mongodb-core').Query
+ , CoreReadPreference = require('mongodb-core').ReadPreference;
+
+/**
+ * @fileOverview The **Cursor** class is an internal class that embodies a cursor on MongoDB
+ * allowing for iteration over the results returned from the underlying query. It supports
+ * one-by-one document iteration, conversion to an array, or iteration as a Node 0.10.x
+ * or higher stream.
+ *
+ * **Cursors cannot be instantiated directly.**
+ * @example
+ * var MongoClient = require('mongodb').MongoClient,
+ * test = require('assert');
+ * // Connection url
+ * var url = 'mongodb://localhost:27017/test';
+ * // Connect using MongoClient
+ * MongoClient.connect(url, function(err, db) {
+ * // Create a collection we want to drop later
+ * var col = db.collection('createIndexExample1');
+ * // Insert a bunch of documents
+ * col.insert([{a:1, b:1}
+ * , {a:2, b:2}, {a:3, b:3}
+ * , {a:4, b:4}], {w:1}, function(err, result) {
+ * test.equal(null, err);
+ *
+ * // Show that duplicate records got dropped
+ * col.find({}).toArray(function(err, items) {
+ * test.equal(null, err);
+ * test.equal(4, items.length);
+ * db.close();
+ * });
+ * });
+ * });
+ */
+
+/**
+ * Namespaces provided by mongodb-core and node.js
+ * @external CoreCursor
+ * @external Readable
+ */
+
+// Flags allowed for cursor
+var flags = ['tailable', 'oplogReplay', 'noCursorTimeout', 'awaitData', 'exhaust', 'partial'];
+var fields = ['numberOfRetries', 'tailableRetryInterval'];
+var push = Array.prototype.push;
+
+/**
+ * Creates a new Cursor instance (INTERNAL TYPE, do not instantiate directly)
+ * @class Cursor
+ * @extends external:CoreCursor
+ * @extends external:Readable
+ * @property {string} sortValue Cursor query sort setting.
+ * @property {boolean} timeout Is Cursor able to time out.
+ * @property {ReadPreference} readPreference Get cursor ReadPreference.
+ * @fires Cursor#data
+ * @fires Cursor#end
+ * @fires Cursor#close
+ * @fires Cursor#readable
+ * @return {Cursor} a Cursor instance.
+ * @example
+ * Cursor cursor options.
+ *
+ * collection.find({}).project({a:1}) // Create a projection of field a
+ * collection.find({}).skip(1).limit(10) // Skip 1 and limit 10
+ * collection.find({}).batchSize(5) // Set batchSize on cursor to 5
+ * collection.find({}).filter({a:1}) // Set query on the cursor
+ * collection.find({}).comment('add a comment') // Add a comment to the query, allowing to correlate queries
+ * collection.find({}).addCursorFlag('tailable', true) // Set cursor as tailable
+ * collection.find({}).addCursorFlag('oplogReplay', true) // Set cursor as oplogReplay
+ * collection.find({}).addCursorFlag('noCursorTimeout', true) // Set cursor as noCursorTimeout
+ * collection.find({}).addCursorFlag('awaitData', true) // Set cursor as awaitData
+ * collection.find({}).addCursorFlag('partial', true) // Set cursor as partial
+ * collection.find({}).addQueryModifier('$orderby', {a:1}) // Set $orderby {a:1}
+ * collection.find({}).max(10) // Set the cursor maxScan
+ * collection.find({}).maxScan(10) // Set the cursor maxScan
+ * collection.find({}).maxTimeMS(1000) // Set the cursor maxTimeMS
+ * collection.find({}).min(100) // Set the cursor min
+ * collection.find({}).returnKey(10) // Set the cursor returnKey
+ * collection.find({}).setReadPreference(ReadPreference.PRIMARY) // Set the cursor readPreference
+ * collection.find({}).showRecordId(true) // Set the cursor showRecordId
+ * collection.find({}).snapshot(true) // Set the cursor snapshot
+ * collection.find({}).sort([['a', 1]]) // Sets the sort order of the cursor query
+ * collection.find({}).hint('a_1') // Set the cursor hint
+ *
+ * All options are chainable, so one can do the following.
+ *
+ * collection.find({}).maxTimeMS(1000).maxScan(100).skip(1).toArray(..)
+ */
+var Cursor = function(bson, ns, cmd, options, topology, topologyOptions) {
+ CoreCursor.apply(this, Array.prototype.slice.call(arguments, 0));
+ var self = this;
+ var state = Cursor.INIT;
+ var streamOptions = {};
+
+ // Tailable cursor options
+ var numberOfRetries = options.numberOfRetries || 5;
+ var tailableRetryInterval = options.tailableRetryInterval || 500;
+ var currentNumberOfRetries = numberOfRetries;
+
+ // Get the promiseLibrary
+ var promiseLibrary = options.promiseLibrary;
+
+ // No promise library selected, fall back to global.Promise or es6-promise
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // Set up
+ Readable.call(this, {objectMode: true});
+
+ // Internal cursor state
+ this.s = {
+ // Tailable cursor options
+ numberOfRetries: numberOfRetries
+ , tailableRetryInterval: tailableRetryInterval
+ , currentNumberOfRetries: currentNumberOfRetries
+ // State
+ , state: state
+ // Stream options
+ , streamOptions: streamOptions
+ // BSON
+ , bson: bson
+ // Namespace
+ , ns: ns
+ // Command
+ , cmd: cmd
+ // Options
+ , options: options
+ // Topology
+ , topology: topology
+ // Topology options
+ , topologyOptions: topologyOptions
+ // Promise library
+ , promiseLibrary: promiseLibrary
+ // Current doc
+ , currentDoc: null
+ }
+
+ // Translate correctly
+ if(self.s.options.noCursorTimeout == true) {
+ self.addCursorFlag('noCursorTimeout', true);
+ }
+
+ // Set the sort value
+ this.sortValue = self.s.cmd.sort;
+}
+
+/**
+ * Cursor stream data event, fired for each document in the cursor.
+ *
+ * @event Cursor#data
+ * @type {object}
+ */
+
+/**
+ * Cursor stream end event
+ *
+ * @event Cursor#end
+ * @type {null}
+ */
+
+/**
+ * Cursor stream close event
+ *
+ * @event Cursor#close
+ * @type {null}
+ */
+
+/**
+ * Cursor stream readable event
+ *
+ * @event Cursor#readable
+ * @type {null}
+ */
+
+// Inherit from Readable
+inherits(Cursor, Readable);
+
+// Map core cursor _next method so we can apply mapping
+CoreCursor.prototype._next = CoreCursor.prototype.next;
+
+for(var name in CoreCursor.prototype) {
+ Cursor.prototype[name] = CoreCursor.prototype[name];
+}
+
+var define = Cursor.define = new Define('Cursor', Cursor, true);
+
+/**
+ * Check if there is any document still available in the cursor
+ * @method
+ * @param {Cursor~resultCallback} [callback] The result callback.
+ * @throws {MongoError}
+ * @return {Promise} returns Promise if no callback passed
+ */
+Cursor.prototype.hasNext = function(callback) {
+ var self = this;
+
+ // Execute using callback
+ if(typeof callback == 'function') {
+ if(self.s.currentDoc){
+ return callback(null, true);
+ } else {
+ return nextObject(self, function(err, doc) {
+ if(!doc) return callback(null, false);
+ self.s.currentDoc = doc;
+ callback(null, true);
+ });
+ }
+ }
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ if(self.s.currentDoc){
+ resolve(true);
+ } else {
+ nextObject(self, function(err, doc) {
+ if(self.s.state == Cursor.CLOSED || self.isDead()) return resolve(false);
+ if(err) return reject(err);
+ if(!doc) return resolve(false);
+ self.s.currentDoc = doc;
+ resolve(true);
+ });
+ }
+ });
+}
+
+define.classMethod('hasNext', {callback: true, promise:true});
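The `hasNext`/`next` pair above shares a single looked-ahead document through `this.s.currentDoc`. The following standalone sketch (hypothetical names, not the driver's API) shows the same one-document lookahead pattern in isolation:

```javascript
// Illustrative sketch (not the driver API): a fetched but not-yet-consumed
// document is parked in `currentDoc`, so hasNext() can answer truthfully
// without losing data, and next() drains the parked document first.
function LookaheadCursor(docs) {
  this.docs = docs.slice();
  this.currentDoc = null;
}

LookaheadCursor.prototype._fetch = function() {
  // Stand-in for the driver's internal nextObject(); null when drained.
  return this.docs.length > 0 ? this.docs.shift() : null;
};

LookaheadCursor.prototype.hasNext = function() {
  if (this.currentDoc) return true;   // already looked ahead
  var doc = this._fetch();
  if (doc == null) return false;
  this.currentDoc = doc;              // park the document for next()
  return true;
};

LookaheadCursor.prototype.next = function() {
  if (this.currentDoc) {              // hand back the parked document first
    var doc = this.currentDoc;
    this.currentDoc = null;
    return doc;
  }
  return this._fetch();
};
```

Calling `hasNext()` repeatedly fetches at most one document; interleaving it with `next()` never skips or duplicates a document.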
+
+/**
+ * Get the next available document from the cursor, returns null if no more documents are available.
+ * @method
+ * @param {Cursor~resultCallback} [callback] The result callback.
+ * @throws {MongoError}
+ * @return {Promise} returns Promise if no callback passed
+ */
+Cursor.prototype.next = function(callback) {
+ var self = this;
+
+ // Execute using callback
+ if(typeof callback == 'function') {
+ // Return the currentDoc if someone called hasNext first
+ if(self.s.currentDoc) {
+ var doc = self.s.currentDoc;
+ self.s.currentDoc = null;
+ return callback(null, doc);
+ }
+
+ // Return the next object
+ return nextObject(self, callback)
+ };
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ // Return the currentDoc if someone called hasNext first
+ if(self.s.currentDoc) {
+ var doc = self.s.currentDoc;
+ self.s.currentDoc = null;
+ return resolve(doc);
+ }
+
+ nextObject(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('next', {callback: true, promise:true});
+
+/**
+ * Set the cursor query
+ * @method
+ * @param {object} filter The filter object used for the cursor.
+ * @return {Cursor}
+ */
+Cursor.prototype.filter = function(filter) {
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ this.s.cmd.query = filter;
+ return this;
+}
+
+define.classMethod('filter', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Set the cursor maxScan
+ * @method
+ * @param {object} maxScan Constrains the query to only scan the specified number of documents when fulfilling the query
+ * @return {Cursor}
+ */
+Cursor.prototype.maxScan = function(maxScan) {
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ this.s.cmd.maxScan = maxScan;
+ return this;
+}
+
+define.classMethod('maxScan', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Set the cursor hint
+ * @method
+ * @param {object} hint If specified, then the query system will only consider plans using the hinted index.
+ * @return {Cursor}
+ */
+Cursor.prototype.hint = function(hint) {
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ this.s.cmd.hint = hint;
+ return this;
+}
+
+define.classMethod('hint', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Set the cursor min
+ * @method
+ * @param {object} min Specify a $min value to specify the inclusive lower bound for a specific index in order to constrain the results of find(). The $min specifies the lower bound for all keys of a specific index in order.
+ * @return {Cursor}
+ */
+Cursor.prototype.min = function(min) {
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ this.s.cmd.min = min;
+ return this;
+}
+
+define.classMethod('min', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Set the cursor max
+ * @method
+ * @param {object} max Specify a $max value to specify the exclusive upper bound for a specific index in order to constrain the results of find(). The $max specifies the upper bound for all keys of a specific index in order.
+ * @return {Cursor}
+ */
+Cursor.prototype.max = function(max) {
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ this.s.cmd.max = max;
+ return this;
+}
+
+define.classMethod('max', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Set the cursor returnKey
+ * @method
+ * @param {boolean} value If true, only return the index field or fields for the results of the query. If $returnKey is set to true and the query does not use an index to perform the read operation, the returned documents will not contain any fields.
+ * @return {Cursor}
+ */
+Cursor.prototype.returnKey = function(value) {
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ this.s.cmd.returnKey = value;
+ return this;
+}
+
+define.classMethod('returnKey', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Set the cursor showRecordId
+ * @method
+ * @param {boolean} value The $showDiskLoc option has now been deprecated and replaced with the showRecordId field. $showDiskLoc will still be accepted for OP_QUERY style find.
+ * @return {Cursor}
+ */
+Cursor.prototype.showRecordId = function(value) {
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ this.s.cmd.showDiskLoc = value;
+ return this;
+}
+
+define.classMethod('showRecordId', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Set the cursor snapshot
+ * @method
+ * @param {object} snapshot The $snapshot operator prevents the cursor from returning a document more than once because an intervening write operation results in a move of the document.
+ * @return {Cursor}
+ */
+Cursor.prototype.snapshot = function(value) {
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ this.s.cmd.snapshot = value;
+ return this;
+}
+
+define.classMethod('snapshot', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Set a node.js specific cursor option
+ * @method
+ * @param {string} field The cursor option to set ['numberOfRetries', 'tailableRetryInterval'].
+ * @param {object} value The field value.
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+Cursor.prototype.setCursorOption = function(field, value) {
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ if(fields.indexOf(field) == -1) throw MongoError.create({message: f("option %s not a supported option %s", field, fields), driver:true });
+ this.s[field] = value;
+ if(field == 'numberOfRetries')
+ this.s.currentNumberOfRetries = value;
+ return this;
+}
+
+define.classMethod('setCursorOption', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Add a cursor flag to the cursor
+ * @method
+ * @param {string} flag The flag to set, must be one of following ['tailable', 'oplogReplay', 'noCursorTimeout', 'awaitData', 'partial'].
+ * @param {boolean} value The flag boolean value.
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+Cursor.prototype.addCursorFlag = function(flag, value) {
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ if(flags.indexOf(flag) == -1) throw MongoError.create({message: f("flag %s not a supported flag %s", flag, flags), driver:true });
+ if(typeof value != 'boolean') throw MongoError.create({message: f("flag %s must be a boolean value", flag), driver:true});
+ this.s.cmd[flag] = value;
+ return this;
+}
+
+define.classMethod('addCursorFlag', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Add a query modifier to the cursor query
+ * @method
+ * @param {string} name The query modifier (must start with $, such as $orderby etc)
+ * @param {boolean} value The flag boolean value.
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+Cursor.prototype.addQueryModifier = function(name, value) {
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ if(name[0] != '$') throw MongoError.create({message: f("%s is not a valid query modifier", name), driver:true});
+ // Strip off the leading $
+ var field = name.substr(1);
+ // Set on the command
+ this.s.cmd[field] = value;
+ // Deal with the special case for sort
+ if(field == 'orderby') this.s.cmd.sort = this.s.cmd[field];
+ return this;
+}
+
+define.classMethod('addQueryModifier', {callback: false, promise:false, returns: [Cursor]});
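A minimal standalone sketch (hypothetical helper, not part of the driver) of what `addQueryModifier` does to the underlying command object: strip the leading `$`, copy the value across, and special-case `$orderby` as the sort specification:

```javascript
// Hypothetical helper mirroring addQueryModifier above.
function applyQueryModifier(cmd, name, value) {
  if (name[0] !== '$') throw new Error(name + ' is not a valid query modifier');
  // Strip off the leading $
  var field = name.substr(1);
  // Set on the command object
  cmd[field] = value;
  // $orderby doubles as the sort specification
  if (field === 'orderby') cmd.sort = value;
  return cmd;
}
```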
+
+/**
+ * Add a comment to the cursor query allowing for tracking the comment in the log.
+ * @method
+ * @param {string} value The comment attached to this query.
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+Cursor.prototype.comment = function(value) {
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ this.s.cmd.comment = value;
+ return this;
+}
+
+define.classMethod('comment', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Set a maxAwaitTimeMS on a tailable cursor query, allowing customization of the timeout for the awaitData option (only supported on MongoDB 3.2 or higher, ignored otherwise)
+ * @method
+ * @param {number} value Number of milliseconds to wait before aborting the tailed query.
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+Cursor.prototype.maxAwaitTimeMS = function(value) {
+ if(typeof value != 'number') throw MongoError.create({message: "maxAwaitTimeMS must be a number", driver:true});
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ this.s.cmd.maxAwaitTimeMS = value;
+ return this;
+}
+
+define.classMethod('maxAwaitTimeMS', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Set a maxTimeMS on the cursor query, allowing for hard timeout limits on queries (Only supported on MongoDB 2.6 or higher)
+ * @method
+ * @param {number} value Number of milliseconds to wait before aborting the query.
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+Cursor.prototype.maxTimeMS = function(value) {
+ if(typeof value != 'number') throw MongoError.create({message: "maxTimeMS must be a number", driver:true});
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ this.s.cmd.maxTimeMS = value;
+ return this;
+}
+
+define.classMethod('maxTimeMS', {callback: false, promise:false, returns: [Cursor]});
+
+Cursor.prototype.maxTimeMs = Cursor.prototype.maxTimeMS;
+
+define.classMethod('maxTimeMs', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Sets a field projection for the query.
+ * @method
+ * @param {object} value The field projection object.
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+Cursor.prototype.project = function(value) {
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ this.s.cmd.fields = value;
+ return this;
+}
+
+define.classMethod('project', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Sets the sort order of the cursor query.
+ * @method
+ * @param {(string|array|object)} keyOrList The key or keys set for the sort.
+ * @param {number} [direction] The direction of the sorting (1 or -1).
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+Cursor.prototype.sort = function(keyOrList, direction) {
+ if(this.s.options.tailable) throw MongoError.create({message: "Tailable cursor doesn't support sorting", driver:true});
+ if(this.s.state == Cursor.CLOSED || this.s.state == Cursor.OPEN || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ var order = keyOrList;
+
+ // We have an array of arrays; we need to preserve the order of the sort
+ // keys, so we will use a Map
+ if(Array.isArray(order) && Array.isArray(order[0])) {
+ order = new Map(order.map(function(x) {
+ var value = [x[0], null];
+ if(x[1] == 'asc') {
+ value[1] = 1;
+ } else if(x[1] == 'desc') {
+ value[1] = -1;
+ } else if(x[1] == 1 || x[1] == -1) {
+ value[1] = x[1];
+ } else {
+ throw new MongoError("Illegal sort clause, must be of the form [['field1', '(ascending|descending)'], ['field2', '(ascending|descending)']]");
+ }
+
+ return value;
+ }));
+ }
+
+ if(direction != null) {
+ order = [[keyOrList, direction]];
+ }
+
+ this.s.cmd.sort = order;
+ this.sortValue = order;
+ return this;
+}
+
+define.classMethod('sort', {callback: false, promise:false, returns: [Cursor]});
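The array-of-arrays branch of `sort` above converts the pairs into a Map so that key order survives serialization (plain object key order was not reliable across engines). A standalone sketch using the ES2015 `Map` as a stand-in for the BSON `Map` from mongodb-core (the helper name is illustrative):

```javascript
// Illustrative sketch: normalize [['field', dir], ...] sort pairs into
// an ordered Map, accepting 'asc'/'desc' strings or 1/-1 numbers.
function normalizeSortOrder(pairs) {
  return new Map(pairs.map(function(x) {
    var direction;
    if (x[1] === 'asc') direction = 1;
    else if (x[1] === 'desc') direction = -1;
    else if (x[1] === 1 || x[1] === -1) direction = x[1];
    else throw new Error('Illegal sort clause: ' + JSON.stringify(x));
    return [x[0], direction];
  }));
}
```

The resulting Map iterates its keys in insertion order, so `[['a', 'asc'], ['b', 'desc']]` keeps `a` before `b` when the sort document is serialized.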
+
+/**
+ * Set the batch size for the cursor.
+ * @method
+ * @param {number} value The batchSize for the cursor.
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+Cursor.prototype.batchSize = function(value) {
+ if(this.s.options.tailable) throw MongoError.create({message: "Tailable cursor doesn't support batchSize", driver:true});
+ if(this.s.state == Cursor.CLOSED || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ if(typeof value != 'number') throw MongoError.create({message: "batchSize requires an integer", driver:true});
+ this.s.cmd.batchSize = value;
+ this.setCursorBatchSize(value);
+ return this;
+}
+
+define.classMethod('batchSize', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Set the collation options for the cursor.
+ * @method
+ * @param {object} value The cursor collation options (MongoDB 3.4 or higher) settings for update operation (see 3.4 documentation for available fields).
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+Cursor.prototype.collation = function(value) {
+ this.s.cmd.collation = value;
+ return this;
+}
+
+define.classMethod('collation', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Set the limit for the cursor.
+ * @method
+ * @param {number} value The limit for the cursor query.
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+Cursor.prototype.limit = function(value) {
+ if(this.s.options.tailable) throw MongoError.create({message: "Tailable cursor doesn't support limit", driver:true});
+ if(this.s.state == Cursor.OPEN || this.s.state == Cursor.CLOSED || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ if(typeof value != 'number') throw MongoError.create({message: "limit requires an integer", driver:true});
+ this.s.cmd.limit = value;
+ this.setCursorLimit(value);
+ return this;
+}
+
+define.classMethod('limit', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Set the skip for the cursor.
+ * @method
+ * @param {number} value The skip for the cursor query.
+ * @throws {MongoError}
+ * @return {Cursor}
+ */
+Cursor.prototype.skip = function(value) {
+ if(this.s.options.tailable) throw MongoError.create({message: "Tailable cursor doesn't support skip", driver:true});
+ if(this.s.state == Cursor.OPEN || this.s.state == Cursor.CLOSED || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
+ if(typeof value != 'number') throw MongoError.create({message: "skip requires an integer", driver:true});
+ this.s.cmd.skip = value;
+ this.setCursorSkip(value);
+ return this;
+}
+
+define.classMethod('skip', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * The callback format for results
+ * @callback Cursor~resultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {(object|null|boolean)} result The result object if the command was executed successfully.
+ */
+
+/**
+ * Clone the cursor
+ * @function external:CoreCursor#clone
+ * @return {Cursor}
+ */
+
+/**
+ * Resets the cursor
+ * @function external:CoreCursor#rewind
+ * @return {null}
+ */
+
+/**
+ * Get the next available document from the cursor, returns null if no more documents are available.
+ * @method
+ * @param {Cursor~resultCallback} [callback] The result callback.
+ * @throws {MongoError}
+ * @deprecated
+ * @return {Promise} returns Promise if no callback passed
+ */
+Cursor.prototype.nextObject = Cursor.prototype.next;
+
+var nextObject = function(self, callback) {
+ if(self.s.state == Cursor.CLOSED || self.isDead && self.isDead()) return handleCallback(callback, MongoError.create({message: "Cursor is closed", driver:true}));
+ if(self.s.state == Cursor.INIT && self.s.cmd.sort) {
+ try {
+ self.s.cmd.sort = formattedOrderClause(self.s.cmd.sort);
+ } catch(err) {
+ return handleCallback(callback, err);
+ }
+ }
+
+ // Get the next object
+ self._next(function(err, doc) {
+ self.s.state = Cursor.OPEN;
+ if(err) return handleCallback(callback, err);
+ handleCallback(callback, null, doc);
+ });
+}
+
+define.classMethod('nextObject', {callback: true, promise:true});
+
+// Trampoline emptying the number of retrieved items
+// without incurring a nextTick operation
+var loop = function(self, callback) {
+ // No more items we are done
+ if(self.bufferedCount() == 0) return;
+ // Get the next document
+ self._next(callback);
+ // Loop
+ return loop;
+}
+
+Cursor.prototype.next = Cursor.prototype.nextObject;
+
+define.classMethod('next', {callback: true, promise:true});
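The `loop` helper above is a trampoline: it returns itself while buffered documents remain, so `_each` can drain the buffer in a flat `while` loop instead of recursing once per document. A standalone sketch of the pattern (hypothetical names):

```javascript
// Illustrative trampoline sketch: the worker returns itself while work
// remains and null when done; the caller bounces in a flat loop, so
// stack depth stays constant no matter how many items are buffered.
function drainBuffered(buffer, onItem) {
  var step = function() {
    if (buffer.length === 0) return null; // nothing left, stop bouncing
    onItem(buffer.shift());               // process one buffered item
    return step;                          // hand control back to the loop
  };
  var fn = step;
  while (fn) fn = fn();
}
```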
+
+/**
+ * Iterates over all the documents for this cursor. As with **{cursor.toArray}**,
+ * not all of the elements will be iterated if this cursor has previously been accessed.
+ * In that case, **{cursor.rewind}** can be used to reset the cursor. However, unlike
+ * **{cursor.toArray}**, the cursor will only hold a maximum of batch size elements
+ * at any given time if batch size is specified. Otherwise, the caller is responsible
+ * for making sure that the entire result fits in memory.
+ * @method
+ * @deprecated
+ * @param {Cursor~resultCallback} callback The result callback.
+ * @throws {MongoError}
+ * @return {null}
+ */
+Cursor.prototype.each = function(callback) {
+ // Rewind cursor state
+ this.rewind();
+ // Set current cursor to INIT
+ this.s.state = Cursor.INIT;
+ // Run the query
+ _each(this, callback);
+};
+
+define.classMethod('each', {callback: true, promise:false});
+
+// Run the each loop
+var _each = function(self, callback) {
+ if(!callback) throw MongoError.create({message: 'callback is mandatory', driver:true});
+ if(self.isNotified()) return;
+ if(self.s.state == Cursor.CLOSED || self.isDead()) {
+ return handleCallback(callback, MongoError.create({message: "Cursor is closed", driver:true}));
+ }
+
+ if(self.s.state == Cursor.INIT) self.s.state = Cursor.OPEN;
+
+ // Define function to avoid global scope escape
+ var fn = null;
+ // Trampoline all the entries
+ if(self.bufferedCount() > 0) {
+ while(fn = loop(self, callback)) fn(self, callback);
+ _each(self, callback);
+ } else {
+ self.next(function(err, item) {
+ if(err) return handleCallback(callback, err);
+ if(item == null) {
+ self.s.state = Cursor.CLOSED;
+ return handleCallback(callback, null, null);
+ }
+
+ if(handleCallback(callback, null, item) == false) return;
+ _each(self, callback);
+ })
+ }
+}
+
+/**
+ * The callback format for the forEach iterator method
+ * @callback Cursor~iteratorCallback
+ * @param {Object} doc An emitted document for the iterator
+ */
+
+/**
+ * The callback error format for the forEach iterator method
+ * @callback Cursor~endCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ */
+
+/**
+ * Iterates over all the documents for this cursor using the iterator, callback pattern.
+ * @method
+ * @param {Cursor~iteratorCallback} iterator The iteration callback.
+ * @param {Cursor~endCallback} callback The end callback.
+ * @throws {MongoError}
+ * @return {null}
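+ * @example
+ * // Illustrative sketch; assumes `col` is a Collection obtained from a connected Db
+ * col.find({}).forEach(function(doc) {
+ *   console.log(doc._id);
+ * }, function(err) {
+ *   if(err) console.error(err);
+ * });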
+ */
+Cursor.prototype.forEach = function(iterator, callback) {
+ this.each(function(err, doc){
+ if(err) { callback(err); return false; }
+ if(doc != null) { iterator(doc); return true; }
+ if(doc == null && callback) {
+ var internalCallback = callback;
+ callback = null;
+ internalCallback(null);
+ return false;
+ }
+ });
+}
+
+define.classMethod('forEach', {callback: true, promise:false});
+
+/**
+ * Set the ReadPreference for the cursor.
+ * @method
+ * @param {(string|ReadPreference)} readPreference The new read preference for the cursor.
+ * @throws {MongoError}
+ * @return {Cursor}
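+ * @example
+ * // Illustrative sketch; assumes `col` is a Collection obtained from a connected
+ * // Db; must be called before the cursor is first accessed
+ * var cursor = col.find({}).setReadPreference('secondaryPreferred');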
+ */
+Cursor.prototype.setReadPreference = function(r) {
+ if(this.s.state != Cursor.INIT) throw MongoError.create({message: 'cannot change cursor readPreference after cursor has been accessed', driver:true});
+ if(r instanceof ReadPreference) {
+ this.s.options.readPreference = new CoreReadPreference(r.mode, r.tags, {maxStalenessMS: r.maxStalenessMS});
+ } else if(typeof r == 'string'){
+ this.s.options.readPreference = new CoreReadPreference(r);
+ } else if(r instanceof CoreReadPreference) {
+ this.s.options.readPreference = r;
+ }
+
+ return this;
+}
+
+define.classMethod('setReadPreference', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * The callback format for results
+ * @callback Cursor~toArrayResultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {object[]} documents All the documents that satisfy the cursor.
+ */
+
+/**
+ * Returns an array of documents. The caller is responsible for making sure that there
+ * is enough memory to store the results. Note that the array only contains partial
+ * results when this cursor has been previously accessed. In that case,
+ * cursor.rewind() can be used to reset the cursor.
+ * @method
+ * @param {Cursor~toArrayResultCallback} [callback] The result callback.
+ * @throws {MongoError}
+ * @return {Promise} returns Promise if no callback passed
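+ * @example
+ * // Illustrative sketch; assumes `col` is a Collection obtained from a connected Db
+ * col.find({}).toArray(function(err, docs) {
+ *   if(err) return console.error(err);
+ *   console.log('fetched ' + docs.length + ' documents');
+ * });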
+ */
+Cursor.prototype.toArray = function(callback) {
+ var self = this;
+ if(self.s.options.tailable) throw MongoError.create({message: 'Tailable cursor cannot be converted to array', driver:true});
+
+ // Execute using callback
+ if(typeof callback == 'function') return toArray(self, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ toArray(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+var toArray = function(self, callback) {
+ var items = [];
+
+ // Reset cursor
+ self.rewind();
+ self.s.state = Cursor.INIT;
+
+ // Fetch all the documents
+ var fetchDocs = function() {
+ self._next(function(err, doc) {
+ if(err) return handleCallback(callback, err);
+ if(doc == null) {
+ self.s.state = Cursor.CLOSED;
+ return handleCallback(callback, null, items);
+ }
+
+ // Add doc to items
+ items.push(doc)
+
+ // Get all buffered objects
+ if(self.bufferedCount() > 0) {
+ var docs = self.readBufferedDocuments(self.bufferedCount())
+
+ // Transform the doc if transform method added
+ if(self.s.transforms && typeof self.s.transforms.doc == 'function') {
+ docs = docs.map(self.s.transforms.doc);
+ }
+
+ push.apply(items, docs);
+ }
+
+ // Attempt a fetch
+ fetchDocs();
+ })
+ }
+
+ fetchDocs();
+}
+
+define.classMethod('toArray', {callback: true, promise:true});
+
+/**
+ * The callback format for results
+ * @callback Cursor~countResultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {number} count The count of documents.
+ */
+
+/**
+ * Get the count of documents for this cursor
+ * @method
+ * @param {boolean} [applySkipLimit=true] Should the count command apply the limit and skip settings from the cursor, or use those provided in the passed-in options.
+ * @param {object} [options=null] Optional settings.
+ * @param {number} [options.skip=null] The number of documents to skip.
+ * @param {number} [options.limit=null] The maximum number of documents to count.
+ * @param {number} [options.maxTimeMS=null] Number of milliseconds to wait before aborting the query.
+ * @param {string} [options.hint=null] An index name hint for the query.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {Cursor~countResultCallback} [callback] The result callback.
+ * @return {Promise} returns Promise if no callback passed
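+ * @example
+ * // Illustrative sketch; assumes `col` is a Collection obtained from a connected Db
+ * col.find({}).count(true, {maxTimeMS: 1000}, function(err, n) {
+ *   if(err) return console.error(err);
+ *   console.log('matched ' + n + ' documents');
+ * });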
+ */
+Cursor.prototype.count = function(applySkipLimit, opts, callback) {
+ var self = this;
+ if(self.s.cmd.query == null) throw MongoError.create({message: "count can only be used with find command", driver:true});
+ if(typeof opts == 'function') callback = opts, opts = {};
+ opts = opts || {};
+
+ // Execute using callback
+ if(typeof callback == 'function') return count(self, applySkipLimit, opts, callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ count(self, applySkipLimit, opts, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+var count = function(self, applySkipLimit, opts, callback) {
+ if(typeof applySkipLimit == 'function') {
+ callback = applySkipLimit;
+ applySkipLimit = true;
+ }
+
+ if(applySkipLimit) {
+ if(typeof self.cursorSkip() == 'number') opts.skip = self.cursorSkip();
+ if(typeof self.cursorLimit() == 'number') opts.limit = self.cursorLimit();
+ }
+
+ // Command
+ var delimiter = self.s.ns.indexOf('.');
+
+ var command = {
+ 'count': self.s.ns.substr(delimiter+1), 'query': self.s.cmd.query
+ }
+
+ if(typeof opts.maxTimeMS == 'number') {
+ command.maxTimeMS = opts.maxTimeMS;
+ } else if(self.s.cmd && typeof self.s.cmd.maxTimeMS == 'number') {
+ command.maxTimeMS = self.s.cmd.maxTimeMS;
+ }
+
+ // Merge in any options
+ if(opts.skip) command.skip = opts.skip;
+ if(opts.limit) command.limit = opts.limit;
+ if(self.s.options.hint) command.hint = self.s.options.hint;
+
+ // Execute the command
+ self.topology.command(f("%s.$cmd", self.s.ns.substr(0, delimiter))
+ , command, function(err, result) {
+ callback(err, result ? result.result.n : null)
+ });
+}
+
+define.classMethod('count', {callback: true, promise:true});
+
+/**
+ * Close the cursor, sending a KillCursor command and emitting close.
+ * @method
+ * @param {Cursor~resultCallback} [callback] The result callback.
+ * @return {Promise} returns Promise if no callback passed
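+ * @example
+ * // Illustrative sketch; assumes `cursor` was obtained from col.find()
+ * cursor.close(function(err, result) {
+ *   if(err) return console.error(err);
+ *   // the server-side cursor has been killed and 'close' has been emitted
+ * });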
+ */
+Cursor.prototype.close = function(callback) {
+ this.s.state = Cursor.CLOSED;
+ // Kill the cursor
+ this.kill();
+ // Emit the close event for the cursor
+ this.emit('close');
+ // Callback if provided
+ if(typeof callback == 'function') return handleCallback(callback, null, this);
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ resolve();
+ });
+}
+
+define.classMethod('close', {callback: true, promise:true});
+
+/**
+ * Map all documents using the provided function
+ * @method
+ * @param {function} [transform] The mapping transformation method.
+ * @return {Cursor}
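+ * @example
+ * // Illustrative sketch; assumes `col` is a Collection obtained from a connected Db
+ * col.find({}).map(function(doc) { return doc._id; }).toArray(function(err, ids) {
+ *   if(err) return console.error(err);
+ *   console.log(ids);
+ * });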
+ */
+Cursor.prototype.map = function(transform) {
+ this.cursorState.transforms = { doc: transform };
+ return this;
+}
+
+define.classMethod('map', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Is the cursor closed
+ * @method
+ * @return {boolean}
+ */
+Cursor.prototype.isClosed = function() {
+ return this.isDead();
+}
+
+define.classMethod('isClosed', {callback: false, promise:false, returns: [Boolean]});
+
+Cursor.prototype.destroy = function(err) {
+ if(err) this.emit('error', err);
+ this.pause();
+ this.close();
+}
+
+define.classMethod('destroy', {callback: false, promise:false});
+
+/**
+ * Return a modified Readable stream including a possible transform method.
+ * @method
+ * @param {object} [options=null] Optional settings.
+ * @param {function} [options.transform=null] A transformation method applied to each document emitted by the stream.
+ * @return {Cursor}
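+ * @example
+ * // Illustrative sketch; assumes `col` is a Collection obtained from a connected Db
+ * col.find({}).stream({transform: function(doc) { return JSON.stringify(doc); }})
+ *   .on('data', function(doc) { console.log(doc); })
+ *   .on('end', function() { console.log('stream ended'); });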
+ */
+Cursor.prototype.stream = function(options) {
+ this.s.streamOptions = options || {};
+ return this;
+}
+
+define.classMethod('stream', {callback: false, promise:false, returns: [Cursor]});
+
+/**
+ * Execute the explain for the cursor
+ * @method
+ * @param {Cursor~resultCallback} [callback] The result callback.
+ * @return {Promise} returns Promise if no callback passed
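+ * @example
+ * // Illustrative sketch; assumes `col` is a Collection obtained from a connected Db
+ * col.find({a: 1}).explain(function(err, explanation) {
+ *   if(err) return console.error(err);
+ *   console.log(explanation);
+ * });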
+ */
+Cursor.prototype.explain = function(callback) {
+ var self = this;
+ this.s.cmd.explain = true;
+
+ // Do we have a readConcern
+ if(this.s.cmd.readConcern) {
+ delete this.s.cmd['readConcern'];
+ }
+
+ // Execute using callback
+ if(typeof callback == 'function') return this._next(callback);
+
+ // Return a Promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ self._next(function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('explain', {callback: true, promise:true});
+
+Cursor.prototype._read = function(n) {
+ var self = this;
+ if(self.s.state == Cursor.CLOSED || self.isDead()) {
+ return self.push(null);
+ }
+
+ // Get the next item
+ self.nextObject(function(err, result) {
+ if(err) {
+ if(self.listeners('error') && self.listeners('error').length > 0) {
+ self.emit('error', err);
+ }
+ if(!self.isDead()) self.close();
+
+ // Emit end event
+ self.emit('end');
+ return self.emit('finish');
+ }
+
+ // If we provided a transformation method
+ if(typeof self.s.streamOptions.transform == 'function' && result != null) {
+ return self.push(self.s.streamOptions.transform(result));
+ }
+
+ // If we provided a map function
+ if(self.cursorState.transforms && typeof self.cursorState.transforms.doc == 'function' && result != null) {
+ return self.push(self.cursorState.transforms.doc(result));
+ }
+
+ // Return the result
+ self.push(result);
+ });
+}
+
+Object.defineProperty(Cursor.prototype, 'readPreference', {
+ enumerable:true,
+ get: function() {
+ if (!this || !this.s) {
+ return null;
+ }
+
+ return this.s.options.readPreference;
+ }
+});
+
+Object.defineProperty(Cursor.prototype, 'namespace', {
+ enumerable: true,
+ get: function() {
+ if (!this || !this.s) {
+ return null;
+ }
+
+ // TODO: refactor this logic into core
+ var ns = this.s.ns || '';
+ var firstDot = ns.indexOf('.');
+ if (firstDot < 0) {
+ return {
+ database: this.s.ns,
+ collection: ''
+ };
+ }
+ return {
+ database: ns.substr(0, firstDot),
+ collection: ns.substr(firstDot + 1)
+ };
+ }
+});
+
+/**
+ * The read() method pulls some data out of the internal buffer and returns it. If there is no data available, then it will return null.
+ * @function external:Readable#read
+ * @param {number} size Optional argument to specify how much data to read.
+ * @return {(String | Buffer | null)}
+ */
+
+/**
+ * Call this function to cause the stream to return strings of the specified encoding instead of Buffer objects.
+ * @function external:Readable#setEncoding
+ * @param {string} encoding The encoding to use.
+ * @return {null}
+ */
+
+/**
+ * This method will cause the readable stream to resume emitting data events.
+ * @function external:Readable#resume
+ * @return {null}
+ */
+
+/**
+ * This method will cause a stream in flowing-mode to stop emitting data events. Any data that becomes available will remain in the internal buffer.
+ * @function external:Readable#pause
+ * @return {null}
+ */
+
+/**
+ * This method pulls all the data out of a readable stream, and writes it to the supplied destination, automatically managing the flow so that the destination is not overwhelmed by a fast readable stream.
+ * @function external:Readable#pipe
+ * @param {Writable} destination The destination for writing data
+ * @param {object} [options] Pipe options
+ * @return {null}
+ */
+
+/**
+ * This method will remove the hooks set up for a previous pipe() call.
+ * @function external:Readable#unpipe
+ * @param {Writable} [destination] The destination for writing data
+ * @return {null}
+ */
+
+/**
+ * This is useful in certain cases where a stream is being consumed by a parser, which needs to "un-consume" some data that it has optimistically pulled out of the source, so that the stream can be passed on to some other party.
+ * @function external:Readable#unshift
+ * @param {(Buffer|string)} chunk Chunk of data to unshift onto the read queue.
+ * @return {null}
+ */
+
+/**
+ * Versions of Node prior to v0.10 had streams that did not implement the entire Streams API as it is today. (See "Compatibility" below for more information.)
+ * @function external:Readable#wrap
+ * @param {Stream} stream An "old style" readable stream.
+ * @return {null}
+ */
+
+Cursor.INIT = 0;
+Cursor.OPEN = 1;
+Cursor.CLOSED = 2;
+Cursor.GET_MORE = 3;
+
+module.exports = Cursor;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/db.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/db.js
new file mode 100644
index 0000000..ca7c486
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/db.js
@@ -0,0 +1,1886 @@
+"use strict";
+
+var EventEmitter = require('events').EventEmitter
+ , inherits = require('util').inherits
+ , getSingleProperty = require('./utils').getSingleProperty
+ , shallowClone = require('./utils').shallowClone
+ , parseIndexOptions = require('./utils').parseIndexOptions
+ , debugOptions = require('./utils').debugOptions
+ , CommandCursor = require('./command_cursor')
+ , handleCallback = require('./utils').handleCallback
+ , filterOptions = require('./utils').filterOptions
+ , toError = require('./utils').toError
+ , ReadPreference = require('./read_preference')
+ , f = require('util').format
+ , Admin = require('./admin')
+ , Code = require('mongodb-core').BSON.Code
+ , CoreReadPreference = require('mongodb-core').ReadPreference
+ , MongoError = require('mongodb-core').MongoError
+ , ObjectID = require('mongodb-core').ObjectID
+ , Define = require('./metadata')
+ , Logger = require('mongodb-core').Logger
+ , Collection = require('./collection')
+ , crypto = require('crypto')
+ , assign = require('./utils').assign;
+
+var debugFields = ['authSource', 'w', 'wtimeout', 'j', 'native_parser', 'forceServerObjectId'
+ , 'serializeFunctions', 'raw', 'promoteLongs', 'promoteValues', 'promoteBuffers', 'bufferMaxEntries', 'numberOfRetries', 'retryMiliSeconds'
+ , 'readPreference', 'pkFactory', 'parentDb', 'promiseLibrary', 'noListener'];
+
+/**
+ * @fileOverview The **Db** class is a class that represents a MongoDB Database.
+ *
+ * @example
+ * var MongoClient = require('mongodb').MongoClient,
+ * test = require('assert');
+ * // Connection url
+ * var url = 'mongodb://localhost:27017/test';
+ * // Connect using MongoClient
+ * MongoClient.connect(url, function(err, db) {
+ * // Get an additional db
+ * var testDb = db.db('test');
+ * db.close();
+ * });
+ */
+
+// Allowed parameters
+var legalOptionNames = ['w', 'wtimeout', 'fsync', 'j', 'readPreference', 'readPreferenceTags', 'native_parser'
+ , 'forceServerObjectId', 'pkFactory', 'serializeFunctions', 'raw', 'bufferMaxEntries', 'authSource'
+ , 'ignoreUndefined', 'promoteLongs', 'promiseLibrary', 'readConcern', 'retryMiliSeconds', 'numberOfRetries'
+ , 'parentDb', 'noListener', 'loggerLevel', 'logger', 'promoteBuffers', 'promoteLongs', 'promoteValues'];
+
+/**
+ * Creates a new Db instance
+ * @class
+ * @param {string} databaseName The name of the database this instance represents.
+ * @param {(Server|ReplSet|Mongos)} topology The server topology for the database.
+ * @param {object} [options=null] Optional settings.
+ * @param {string} [options.authSource=null] If the database authentication is dependent on another databaseName.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.forceServerObjectId=false] Force server to assign _id values instead of driver.
+ * @param {boolean} [options.serializeFunctions=false] Serialize functions on any object.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {boolean} [options.raw=false] Return document results as raw BSON buffers.
+ * @param {boolean} [options.promoteLongs=true] Promotes Long values to number if they fit inside the 53 bits resolution.
+ * @param {boolean} [options.promoteBuffers=false] Promotes Binary BSON values to native Node Buffers.
+ * @param {boolean} [options.promoteValues=true] Promotes BSON values to native types where possible, set to false to only receive wrapper types.
+ * @param {number} [options.bufferMaxEntries=-1] Sets a cap on how many operations the driver will buffer up before giving up on getting a working connection, default is -1 which is unlimited.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {object} [options.pkFactory=null] A primary key factory object for generation of custom _id keys.
+ * @param {object} [options.promiseLibrary=null] A Promise library class the application wishes to use such as Bluebird, must be ES6 compatible
+ * @param {object} [options.readConcern=null] Specify a read concern for the collection. (only MongoDB 3.2 or higher supported)
+ * @param {object} [options.readConcern.level='local'] Specify a read concern level for the collection operations, one of [local|majority]. (only MongoDB 3.2 or higher supported)
+ * @property {(Server|ReplSet|Mongos)} serverConfig Get the current db topology.
+ * @property {number} bufferMaxEntries Current bufferMaxEntries value for the database
+ * @property {string} databaseName The name of the database this instance represents.
+ * @property {object} options The options associated with the db instance.
+ * @property {boolean} native_parser The current value of the parameter native_parser.
+ * @property {boolean} slaveOk The current slaveOk value for the db instance.
+ * @property {object} writeConcern The current write concern values.
+ * @property {object} topology Access the topology object (single server, replicaset or mongos).
+ * @fires Db#close
+ * @fires Db#authenticated
+ * @fires Db#reconnect
+ * @fires Db#error
+ * @fires Db#timeout
+ * @fires Db#parseError
+ * @fires Db#fullsetup
+ * @return {Db} a Db instance.
+ */
+var Db = function(databaseName, topology, options) {
+ options = options || {};
+ if(!(this instanceof Db)) return new Db(databaseName, topology, options);
+ EventEmitter.call(this);
+ var self = this;
+
+ // Get the promiseLibrary
+ var promiseLibrary = options.promiseLibrary;
+
+ // No promise library selected fall back
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // Filter the options
+ options = filterOptions(options, legalOptionNames);
+
+ // Ensure we put the promiseLib in the options
+ options.promiseLibrary = promiseLibrary;
+
+ // Internal state of the db object
+ this.s = {
+ // Database name
+ databaseName: databaseName
+ // DbCache
+ , dbCache: {}
+ // Children db's
+ , children: []
+ // Topology
+ , topology: topology
+ // Options
+ , options: options
+ // Logger instance
+ , logger: Logger('Db', options)
+ // Get the bson parser
+ , bson: topology ? topology.bson : null
+ // Authsource if any
+ , authSource: options.authSource
+ // Unpack read preference
+ , readPreference: options.readPreference
+ // Set buffermaxEntries
+ , bufferMaxEntries: typeof options.bufferMaxEntries == 'number' ? options.bufferMaxEntries : -1
+ // Parent db (if chained)
+ , parentDb: options.parentDb || null
+ // Set up the primary key factory or fallback to ObjectID
+ , pkFactory: options.pkFactory || ObjectID
+ // Get native parser
+ , nativeParser: options.nativeParser || options.native_parser
+ // Promise library
+ , promiseLibrary: promiseLibrary
+ // No listener
+ , noListener: typeof options.noListener == 'boolean' ? options.noListener : false
+ // ReadConcern
+ , readConcern: options.readConcern
+ }
+
+ // Ensure we have a valid db name
+ validateDatabaseName(self.s.databaseName);
+
+ // Add a read Only property
+ getSingleProperty(this, 'serverConfig', self.s.topology);
+ getSingleProperty(this, 'bufferMaxEntries', self.s.bufferMaxEntries);
+ getSingleProperty(this, 'databaseName', self.s.databaseName);
+
+ // This is a child db, do not register any listeners
+ if(options.parentDb) return;
+ if(this.s.noListener) return;
+
+ // Add listeners
+ topology.on('error', createListener(self, 'error', self));
+ topology.on('timeout', createListener(self, 'timeout', self));
+ topology.on('close', createListener(self, 'close', self));
+ topology.on('parseError', createListener(self, 'parseError', self));
+ topology.once('open', createListener(self, 'open', self));
+ topology.once('fullsetup', createListener(self, 'fullsetup', self));
+ topology.once('all', createListener(self, 'all', self));
+ topology.on('reconnect', createListener(self, 'reconnect', self));
+}
+
+inherits(Db, EventEmitter);
+
+var define = Db.define = new Define('Db', Db, false);
+
+// Topology
+Object.defineProperty(Db.prototype, 'topology', {
+ enumerable:true,
+ get: function() { return this.s.topology; }
+});
+
+// Options
+Object.defineProperty(Db.prototype, 'options', {
+ enumerable:true,
+ get: function() { return this.s.options; }
+});
+
+// slaveOk specified
+Object.defineProperty(Db.prototype, 'slaveOk', {
+ enumerable:true,
+ get: function() {
+ if(this.s.options.readPreference != null
+ && (this.s.options.readPreference != 'primary' && this.s.options.readPreference.mode != 'primary')) {
+ return true;
+ }
+ return false;
+ }
+});
+
+// get the write Concern
+Object.defineProperty(Db.prototype, 'writeConcern', {
+ enumerable:true,
+ get: function() {
+ var ops = {};
+ if(this.s.options.w != null) ops.w = this.s.options.w;
+ if(this.s.options.j != null) ops.j = this.s.options.j;
+ if(this.s.options.fsync != null) ops.fsync = this.s.options.fsync;
+ if(this.s.options.wtimeout != null) ops.wtimeout = this.s.options.wtimeout;
+ return ops;
+ }
+});
+
+/**
+ * The callback format for the Db.open method
+ * @callback Db~openCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Db} db The Db instance if the open method was successful.
+ */
+
+// Internal method
+var open = function(self, callback) {
+ self.s.topology.connect(self, self.s.options, function(err, topology) {
+ if(callback == null) return;
+ var internalCallback = callback;
+ callback = null;
+
+ if(err) {
+ self.close();
+ return internalCallback(err);
+ }
+
+ internalCallback(null, self);
+ });
+}
+
+/**
+ * Open the database
+ * @method
+ * @param {Db~openCallback} [callback] Callback
+ * @return {Promise} returns Promise if no callback passed
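+ * @example
+ * // Illustrative sketch; assumes `db` is a Db instance constructed with a topology
+ * db.open(function(err, db) {
+ *   if(err) return console.error(err);
+ *   // db is now connected and ready for use
+ * });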
+ */
+Db.prototype.open = function(callback) {
+ var self = this;
+ // We provided a callback leg
+ if(typeof callback == 'function') return open(self, callback);
+ // Return promise
+ return new self.s.promiseLibrary(function(resolve, reject) {
+ open(self, function(err, db) {
+ if(err) return reject(err);
+ resolve(db);
+ })
+ });
+}
+
+define.classMethod('open', {callback: true, promise:true});
+
+/**
+ * Converts provided read preference to CoreReadPreference
+ * @param {(ReadPreference|string|object)} readPreference the user provided read preference
+ * @return {CoreReadPreference}
+ */
+var convertReadPreference = function(readPreference) {
+ if(readPreference && typeof readPreference == 'string') {
+ return new CoreReadPreference(readPreference);
+ } else if(readPreference instanceof ReadPreference) {
+ return new CoreReadPreference(readPreference.mode, readPreference.tags, {maxStalenessMS: readPreference.maxStalenessMS});
+ } else if(readPreference && typeof readPreference == 'object') {
+ var mode = readPreference.mode || readPreference.preference;
+ if (mode && typeof mode == 'string') {
+ readPreference = new CoreReadPreference(mode, readPreference.tags, {maxStalenessMS: readPreference.maxStalenessMS});
+ }
+ }
+ return readPreference;
+}
+
+/**
+ * The callback format for results
+ * @callback Db~resultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {object} result The result object if the command was executed successfully.
+ */
+
+var executeCommand = function(self, command, options, callback) {
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return callback(new MongoError('topology was destroyed'));
+ // Get the db name we are executing against
+ var dbName = options.dbName || options.authdb || self.s.databaseName;
+
+ // If we have a readPreference set
+ if(options.readPreference == null && self.s.readPreference) {
+ options.readPreference = self.s.readPreference;
+ }
+
+ // Convert the readPreference if its not a write
+ if(options.readPreference) {
+ options.readPreference = convertReadPreference(options.readPreference);
+ } else {
+ options.readPreference = CoreReadPreference.primary;
+ }
+
+ // Debug information
+ if(self.s.logger.isDebug()) self.s.logger.debug(f('executing command %s against %s with options [%s]'
+ , JSON.stringify(command), f('%s.$cmd', dbName), JSON.stringify(debugOptions(debugFields, options))));
+
+ // Execute command
+ self.s.topology.command(f('%s.$cmd', dbName), command, options, function(err, result) {
+ if(err) return handleCallback(callback, err);
+ if(options.full) return handleCallback(callback, null, result);
+ handleCallback(callback, null, result.result);
+ });
+}
+
+/**
+ * Execute a command
+ * @method
+ * @param {object} command The command hash
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {Db~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
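+ * @example
+ * // Illustrative sketch; assumes `db` is a connected Db instance
+ * db.command({ping: 1}, function(err, result) {
+ *   if(err) return console.error(err);
+ *   console.log(result.ok); // 1 on success
+ * });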
+ */
+Db.prototype.command = function(command, options, callback) {
+ var self = this;
+ // Change the callback
+ if(typeof options == 'function') callback = options, options = {};
+ // Clone the options
+ options = shallowClone(options);
+
+ // Do we have a callback
+ if(typeof callback == 'function') return executeCommand(self, command, options, callback);
+ // Return a promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ executeCommand(self, command, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('command', {callback: true, promise:true});
+
+/**
+ * The callback format for results
+ * @callback Db~noResultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {null} result Is not set to a value
+ */
+
+/**
+ * Close the db and its underlying connections
+ * @method
+ * @param {boolean} force Force close, emitting no events
+ * @param {Db~noResultCallback} [callback] The result callback
+ * @return {Promise} returns Promise if no callback passed
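+ * @example
+ * // Illustrative sketch; assumes `db` is a connected Db instance
+ * db.close(false, function(err) {
+ *   if(err) console.error(err);
+ * });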
+ */
+Db.prototype.close = function(force, callback) {
+ if(typeof force == 'function') callback = force, force = false;
+ this.s.topology.close(force);
+ var self = this;
+
+ // Fire close event if any listeners
+ if(this.listeners('close').length > 0) {
+ this.emit('close');
+
+ // If it's the top level db emit close on all children
+ if(this.s.parentDb == null) {
+ // Fire close on all children
+ for(var i = 0; i < this.s.children.length; i++) {
+ this.s.children[i].emit('close');
+ }
+ }
+
+ // Remove listeners after emit
+ self.removeAllListeners('close');
+ }
+
+ // Close parent db if set
+ if(this.s.parentDb) this.s.parentDb.close();
+ // Callback after next event loop tick
+ if(typeof callback == 'function') return process.nextTick(function() {
+ handleCallback(callback, null);
+ })
+
+ // Return dummy promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ resolve();
+ });
+}
+
+define.classMethod('close', {callback: true, promise:true});
+
+/**
+ * Return the Admin db instance
+ * @method
+ * @return {Admin} return the new Admin db instance
+ */
+Db.prototype.admin = function() {
+ return new Admin(this, this.s.topology, this.s.promiseLibrary);
+};
+
+define.classMethod('admin', {callback: false, promise:false, returns: [Admin]});
+
+/**
+ * The callback format for the collection method, must be used if strict is specified
+ * @callback Db~collectionResultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Collection} collection The collection instance.
+ */
+
+/**
+ * Fetch a specific collection (containing the actual collection information). If the application does not
+ * use strict mode you can use it without a callback in the following way: `var collection = db.collection('mycollection');`
+ *
+ * @method
+ * @param {string} name the collection name we wish to access.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.raw=false] Return document results as raw BSON buffers.
+ * @param {object} [options.pkFactory=null] A primary key factory object for generation of custom _id keys.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {boolean} [options.serializeFunctions=false] Serialize functions on any object.
+ * @param {boolean} [options.strict=false] Returns an error if the collection does not exist
+ * @param {object} [options.readConcern=null] Specify a read concern for the collection. (only MongoDB 3.2 or higher supported)
+ * @param {object} [options.readConcern.level='local'] Specify a read concern level for the collection operations, one of [local|majority]. (only MongoDB 3.2 or higher supported)
+ * @param {Db~collectionResultCallback} callback The collection result callback
+ * @return {Collection} return the new Collection instance if not in strict mode
+ */
+Db.prototype.collection = function(name, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+ options = shallowClone(options);
+ // Set the promise library
+ options.promiseLibrary = this.s.promiseLibrary;
+
+ // If we have not set a collection level readConcern set the db level one
+ options.readConcern = options.readConcern || this.s.readConcern;
+
+ // Do we have ignoreUndefined set
+ if(this.s.options.ignoreUndefined) {
+ options.ignoreUndefined = this.s.options.ignoreUndefined;
+ }
+
+ // Execute
+ if(options == null || !options.strict) {
+ try {
+ var collection = new Collection(this, this.s.topology, this.s.databaseName, name, this.s.pkFactory, options);
+ if(callback) callback(null, collection);
+ return collection;
+ } catch(err) {
+ if(callback) return callback(err);
+ throw err;
+ }
+ }
+
+ // Strict mode
+ if(typeof callback != 'function') {
+ throw toError(f("A callback is required in strict mode. While getting collection %s.", name));
+ }
+
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) {
+ return callback(new MongoError('topology was destroyed'));
+ }
+
+ // Strict mode
+ this.listCollections({name:name}).toArray(function(err, collections) {
+ if(err != null) return handleCallback(callback, err, null);
+ if(collections.length == 0) return handleCallback(callback, toError(f("Collection %s does not exist. Currently in strict mode.", name)), null);
+
+ try {
+ return handleCallback(callback, null, new Collection(self, self.s.topology, self.s.databaseName, name, self.s.pkFactory, options));
+ } catch(err) {
+ return handleCallback(callback, err, null);
+ }
+ });
+}
+
+define.classMethod('collection', {callback: true, promise:false, returns: [Collection]});
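`Db.prototype.collection` above, like most methods in this file, accepts `options` as an optional middle argument and shifts a function found there into the callback slot. A minimal standalone sketch of that overload pattern (the helper name is hypothetical, not part of the driver):

```javascript
// Sketch of the optional-options overload used throughout db.js: when the
// caller passes a function where options should be, treat it as the
// callback and fall back to an empty options object.
function normalizeArgs(options, callback) {
  if(typeof options == 'function') {
    callback = options;
    options = {};
  }
  options = options || {};
  return {options: options, callback: callback};
}

// db.collection('users', cb) and db.collection('users', {strict: true}, cb)
// both resolve to a well-defined {options, callback} pair.
var a = normalizeArgs(function() {});        // callback in the options slot
var b = normalizeArgs({strict: true}, function() {});
```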
+
+function decorateWithWriteConcern(command, self, options) {
+ // Do commands support write concerns (MongoDB 3.4 and higher)
+ if(self.s.topology.capabilities().commandsTakeWriteConcern) {
+ // Get the write concern settings
+ var finalOptions = writeConcern(shallowClone(options), self, options);
+ // Add the write concern to the command
+ if(finalOptions.writeConcern) {
+ command.writeConcern = finalOptions.writeConcern;
+ }
+ }
+}
+
+var createCollection = function(self, name, options, callback) {
+ // Get the write concern options
+ var finalOptions = writeConcern(shallowClone(options), self, options);
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return callback(new MongoError('topology was destroyed'));
+
+ // Check if we have the name
+ self.listCollections({name: name})
+ .setReadPreference(ReadPreference.PRIMARY)
+ .toArray(function(err, collections) {
+ if(err != null) return handleCallback(callback, err, null);
+ if(collections.length > 0 && finalOptions.strict) {
+ return handleCallback(callback, MongoError.create({message: f("Collection %s already exists. Currently in strict mode.", name), driver:true}), null);
+ } else if (collections.length > 0) {
+ try { return handleCallback(callback, null, new Collection(self, self.s.topology, self.s.databaseName, name, self.s.pkFactory, options)); }
+ catch(err) { return handleCallback(callback, err); }
+ }
+
+ // Create collection command
+ var cmd = {'create':name};
+
+ // Decorate command with writeConcern if supported
+ decorateWithWriteConcern(cmd, self, options);
+
+ // Add all optional parameters
+ for(var n in options) {
+ if(options[n] != null && typeof options[n] != 'function')
+ cmd[n] = options[n];
+ }
+
+ // Force a primary read Preference
+ finalOptions.readPreference = ReadPreference.PRIMARY;
+
+ // Execute command
+ self.command(cmd, finalOptions, function(err, result) {
+ if(err) return handleCallback(callback, err);
+ handleCallback(callback, null, new Collection(self, self.s.topology, self.s.databaseName, name, self.s.pkFactory, options));
+ });
+ });
+}
+
+/**
+ * Create a new collection on a server with the specified options. Use this to create capped collections.
+ *
+ * @method
+ * @param {string} name the collection name we wish to create.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.raw=false] Return document results as raw BSON buffers.
+ * @param {object} [options.pkFactory=null] A primary key factory object for generation of custom _id keys.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {boolean} [options.serializeFunctions=false] Serialize functions on any object.
+ * @param {boolean} [options.strict=false] Returns an error if the collection already exists
+ * @param {boolean} [options.capped=false] Create a capped collection.
+ * @param {number} [options.size=null] The size of the capped collection in bytes.
+ * @param {number} [options.max=null] The maximum number of documents in the capped collection.
+ * @param {boolean} [options.autoIndexId=true] Create an index on the _id field of the document. On by default on MongoDB 2.2 or higher, off for versions below 2.2.
+ * @param {object} [options.collation=null] Specify collation settings (MongoDB 3.4 or higher) for the operation (see the 3.4 documentation for available fields).
+ * @param {Db~collectionResultCallback} [callback] The results callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Db.prototype.createCollection = function(name, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 0);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ name = args.length ? args.shift() : null;
+ options = args.length ? args.shift() || {} : {};
+
+ // Do we have a promiseLibrary
+ options.promiseLibrary = options.promiseLibrary || this.s.promiseLibrary;
+
+ // Check if the callback is in fact a string
+ if(typeof callback == 'string') name = callback;
+
+ // If a callback was provided, execute immediately
+ if(typeof callback == 'function') return createCollection(self, name, options, callback);
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ createCollection(self, name, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+}
+
+define.classMethod('createCollection', {callback: true, promise:true});
+
+/**
+ * Get all the db statistics.
+ *
+ * @method
+ * @param {object} [options=null] Optional settings.
+ * @param {number} [options.scale=null] Divide the returned sizes by scale value.
+ * @param {Db~resultCallback} [callback] The collection result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Db.prototype.stats = function(options, callback) {
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+ // Build command object
+ var commandObject = { dbStats:true };
+ // Check if we have the scale value
+ if(options['scale'] != null) commandObject['scale'] = options['scale'];
+
+ // If we have a readPreference set
+ if(options.readPreference == null && this.s.readPreference) {
+ options.readPreference = this.s.readPreference;
+ }
+
+ // Execute the command
+ return this.command(commandObject, options, callback);
+}
+
+define.classMethod('stats', {callback: true, promise:true});
+
+// Transformation methods for cursor results
+var listCollectionsTranforms = function(databaseName) {
+ var matching = f('%s.', databaseName);
+
+ return {
+ doc: function(doc) {
+ var index = doc.name.indexOf(matching);
+ // Remove database name if available
+ if(doc.name && index == 0) {
+ doc.name = doc.name.substr(index + matching.length);
+ }
+
+ return doc;
+ }
+ }
+}
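The transform above strips the database prefix from namespace names returned by the legacy `system.namespaces` query. A standalone sketch of the same logic (plain string concatenation in place of the driver's `f` helper, and with the `doc.name` guard checked first):

```javascript
// Sketch of the listCollections document transform: strip a leading
// "<db>." prefix from legacy system.namespaces names so callers see
// bare collection names.
function makeDocTransform(databaseName) {
  var matching = databaseName + '.';
  return function(doc) {
    // Only strip when the name actually starts with "<db>."
    if(doc.name && doc.name.indexOf(matching) == 0) {
      doc.name = doc.name.substr(matching.length);
    }
    return doc;
  };
}

makeDocTransform('mydb')({name: 'mydb.users'}).name; // 'users'
```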
+
+/**
+ * Get the list of all collection information for the specified db.
+ *
+ * @method
+ * @param {object} filter Query to filter collections by
+ * @param {object} [options=null] Optional settings.
+ * @param {number} [options.batchSize=null] The batchSize for the returned command cursor, or for the legacy systems collection cursor on pre-2.8 servers.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @return {CommandCursor}
+ */
+Db.prototype.listCollections = function(filter, options) {
+ filter = filter || {};
+ options = options || {};
+
+ // Shallow clone the object
+ options = shallowClone(options);
+ // Set the promise library
+ options.promiseLibrary = this.s.promiseLibrary;
+
+ // Ensure valid readPreference
+ if(options.readPreference) {
+ options.readPreference = convertReadPreference(options.readPreference);
+ }
+
+ // We have a list collections command
+ if(this.serverConfig.capabilities().hasListCollectionsCommand) {
+ // Cursor options
+ var cursorOptions = options.batchSize ? {batchSize: options.batchSize} : {};
+ // Build the command
+ var command = { listCollections : true, filter: filter, cursor: cursorOptions };
+ // Set the CommandCursor constructor
+ options.cursorFactory = CommandCursor;
+ // Create the cursor
+ var cursor = this.s.topology.cursor(f('%s.$cmd', this.s.databaseName), command, options);
+ // Do we have a readPreference, apply it
+ if(options.readPreference) {
+ cursor.setReadPreference(options.readPreference);
+ }
+ // Return the cursor
+ return cursor;
+ }
+
+ // We cannot use the listCollectionsCommand
+ if(!this.serverConfig.capabilities().hasListCollectionsCommand) {
+ // If we have legacy mode and have not provided a full db name filter it
+ if(typeof filter.name == 'string' && !(new RegExp('^' + this.databaseName + '\\.').test(filter.name))) {
+ filter = shallowClone(filter);
+ filter.name = f('%s.%s', this.s.databaseName, filter.name);
+ }
+ }
+
+ // No name filter given, restrict to collections in the current database
+ if(filter.name == null) {
+ filter.name = new RegExp(f('^%s\\.', this.s.databaseName));
+ }
+
+ // Rewrite the filter to use $and to filter out indexes
+ if(filter.name) {
+ filter = {$and: [{name: filter.name}, {name:/^((?!\$).)*$/}]};
+ } else {
+ filter = {name:/^((?!\$).)*$/};
+ }
+
+ // Return options
+ var _options = {transforms: listCollectionsTranforms(this.s.databaseName)}
+ // Get the cursor
+ var cursor = this.collection(Db.SYSTEM_NAMESPACE_COLLECTION).find(filter, _options);
+ // Do we have a readPreference, apply it
+ if(options.readPreference) cursor.setReadPreference(options.readPreference);
+ // Set the passed in batch size if one was provided
+ if(options.batchSize) cursor = cursor.batchSize(options.batchSize);
+ // We have a fallback mode using legacy systems collections
+ return cursor;
+};
+
+define.classMethod('listCollections', {callback: false, promise:false, returns: [CommandCursor]});
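In the legacy fallback above, the name filter is wrapped with the regex `/^((?!\$).)*$/`: a negative lookahead that matches only names containing no `$`, which excludes index namespaces such as `mydb.users.$_id_` from the `system.namespaces` results. A quick standalone check:

```javascript
// The negative-lookahead regex listCollections uses to exclude index
// namespaces (which contain '$') from legacy system.namespaces results.
var noDollar = /^((?!\$).)*$/;

noDollar.test('mydb.users');        // true  (plain collection namespace)
noDollar.test('mydb.users.$_id_');  // false (index namespace, contains '$')
```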
+
+var evaluate = function(self, code, parameters, options, callback) {
+ var finalCode = code;
+ var finalParameters = [];
+
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return callback(new MongoError('topology was destroyed'));
+
+ // If not a code object translate to one
+ if(!(finalCode instanceof Code)) finalCode = new Code(finalCode);
+ // Ensure the parameters are correct
+ if(parameters != null && !Array.isArray(parameters) && typeof parameters !== 'function') {
+ finalParameters = [parameters];
+ } else if(parameters != null && Array.isArray(parameters) && typeof parameters !== 'function') {
+ finalParameters = parameters;
+ }
+
+ // Create execution selector
+ var cmd = {'$eval':finalCode, 'args':finalParameters};
+ // Check if the nolock parameter is passed in
+ if(options['nolock']) {
+ cmd['nolock'] = options['nolock'];
+ }
+
+ // Set primary read preference
+ options.readPreference = new CoreReadPreference(ReadPreference.PRIMARY);
+
+ // Execute the command
+ self.command(cmd, options, function(err, result) {
+ if(err) return handleCallback(callback, err, null);
+ if(result && result.ok == 1) return handleCallback(callback, null, result.retval);
+ if(result) return handleCallback(callback, MongoError.create({message: f("eval failed: %s", result.errmsg), driver:true}), null);
+ handleCallback(callback, err, result);
+ });
+}
+
+/**
+ * Evaluate JavaScript on the server
+ *
+ * @method
+ * @param {Code} code JavaScript to execute on server.
+ * @param {(object|array)} parameters The parameters for the call.
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.nolock=false] Tell MongoDB not to block on the evaluation of the JavaScript.
+ * @param {Db~resultCallback} [callback] The results callback
+ * @deprecated Eval is deprecated on MongoDB 3.2 and forward
+ * @return {Promise} returns Promise if no callback passed
+ */
+Db.prototype.eval = function(code, parameters, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 1);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ parameters = args.length ? args.shift() : parameters;
+ options = args.length ? args.shift() || {} : {};
+
+ // If a callback was provided, execute immediately
+ if(typeof callback == 'function') return evaluate(self, code, parameters, options, callback);
+ // Execute the command
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ evaluate(self, code, parameters, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+define.classMethod('eval', {callback: true, promise:true});
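The `evaluate` helper above normalizes `parameters` so that a single value and an array are both sent as an args array, while `null` or a function (a callback landing in the parameters slot) yields no args. The normalization in isolation:

```javascript
// Sketch of the parameter normalization in evaluate(): wrap a lone value
// in an array, pass an array through, and treat null/functions as "no args".
function normalizeParameters(parameters) {
  var finalParameters = [];
  if(parameters != null && !Array.isArray(parameters) && typeof parameters !== 'function') {
    finalParameters = [parameters];
  } else if(parameters != null && Array.isArray(parameters) && typeof parameters !== 'function') {
    finalParameters = parameters;
  }
  return finalParameters;
}

normalizeParameters(5);      // [5]
normalizeParameters([1, 2]); // [1, 2]
normalizeParameters(null);   // []
```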
+
+/**
+ * Rename a collection.
+ *
+ * @method
+ * @param {string} fromCollection Name of current collection to rename.
+ * @param {string} toCollection New name of the collection.
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.dropTarget=false] Drop the target name collection if it previously exists.
+ * @param {Db~collectionResultCallback} [callback] The results callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Db.prototype.renameCollection = function(fromCollection, toCollection, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+ // Add return new collection
+ options.new_collection = true;
+
+ // If a callback was provided, execute immediately
+ if(typeof callback == 'function') {
+ return this.collection(fromCollection).rename(toCollection, options, callback);
+ }
+
+ // Return a promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ self.collection(fromCollection).rename(toCollection, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+define.classMethod('renameCollection', {callback: true, promise:true});
+
+/**
+ * Drop a collection from the database, removing it permanently. New accesses will create a new collection.
+ *
+ * @method
+ * @param {string} name Name of collection to drop
+ * @param {Db~resultCallback} [callback] The results callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Db.prototype.dropCollection = function(name, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+
+ // Command to execute
+ var cmd = {'drop':name}
+
+ // Decorate with write concern
+ decorateWithWriteConcern(cmd, self, options);
+
+ // options
+ options = assign({}, this.s.options, {readPreference: ReadPreference.PRIMARY});
+
+ // If a callback was provided, execute immediately
+ if(typeof callback == 'function') return this.command(cmd, options, function(err, result) {
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return callback(new MongoError('topology was destroyed'));
+ if(err) return handleCallback(callback, err);
+ if(result.ok) return handleCallback(callback, null, true);
+ handleCallback(callback, null, false);
+ });
+
+ // Execute the command
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ // Execute command
+ self.command(cmd, options, function(err, result) {
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return reject(new MongoError('topology was destroyed'));
+ if(err) return reject(err);
+ if(result.ok) return resolve(true);
+ resolve(false);
+ });
+ });
+};
+
+define.classMethod('dropCollection', {callback: true, promise:true});
+
+/**
+ * Drop a database, removing it permanently from the server.
+ *
+ * @method
+ * @param {Db~resultCallback} [callback] The results callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Db.prototype.dropDatabase = function(options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+ // Drop database command
+ var cmd = {'dropDatabase':1};
+
+ // Decorate with write concern
+ decorateWithWriteConcern(cmd, self, options);
+
+ // Ensure primary only
+ var options = assign({}, this.s.options, {readPreference: ReadPreference.PRIMARY});
+
+ // If a callback was provided, execute immediately
+ if(typeof callback == 'function') return this.command(cmd, options, function(err, result) {
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return callback(new MongoError('topology was destroyed'));
+ if(callback == null) return;
+ if(err) return handleCallback(callback, err, null);
+ handleCallback(callback, null, result.ok ? true : false);
+ });
+
+ // Execute the command
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ // Execute command
+ self.command(cmd, options, function(err, result) {
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return reject(new MongoError('topology was destroyed'));
+ if(err) return reject(err);
+ if(result.ok) return resolve(true);
+ resolve(false);
+ });
+ });
+}
+
+define.classMethod('dropDatabase', {callback: true, promise:true});
+
+/**
+ * The callback format for the collections method.
+ * @callback Db~collectionsResultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Collection[]} collections An array of all the collections objects for the db instance.
+ */
+var collections = function(self, callback) {
+ // Let's get the collection names
+ self.listCollections().toArray(function(err, documents) {
+ if(err != null) return handleCallback(callback, err, null);
+ // Filter collections removing any illegal ones
+ documents = documents.filter(function(doc) {
+ return doc.name.indexOf('$') == -1;
+ });
+
+ // Return the collection objects
+ handleCallback(callback, null, documents.map(function(d) {
+ return new Collection(self, self.s.topology, self.s.databaseName, d.name.replace(self.s.databaseName + ".", ''), self.s.pkFactory, self.s.options);
+ }));
+ });
+}
+
+/**
+ * Fetch all collections for the current db.
+ *
+ * @method
+ * @param {Db~collectionsResultCallback} [callback] The results callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Db.prototype.collections = function(callback) {
+ var self = this;
+
+ // Return the callback
+ if(typeof callback == 'function') return collections(self, callback);
+ // Return the promise
+ return new self.s.promiseLibrary(function(resolve, reject) {
+ collections(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+define.classMethod('collections', {callback: true, promise:true});
+
+/**
+ * Runs a command on the database as admin.
+ * @method
+ * @param {object} selector The command object to execute against the admin database.
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {Db~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Db.prototype.executeDbAdminCommand = function(selector, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+
+ // Return the callback
+ if(typeof callback == 'function') {
+ // Convert read preference
+ if(options.readPreference) {
+ options.readPreference = convertReadPreference(options.readPreference)
+ }
+
+ return self.s.topology.command('admin.$cmd', selector, options, function(err, result) {
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return callback(new MongoError('topology was destroyed'));
+ if(err) return handleCallback(callback, err);
+ handleCallback(callback, null, result.result);
+ });
+ }
+
+ // Return promise
+ return new self.s.promiseLibrary(function(resolve, reject) {
+ self.s.topology.command('admin.$cmd', selector, options, function(err, result) {
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return reject(new MongoError('topology was destroyed'));
+ if(err) return reject(err);
+ resolve(result.result);
+ });
+ });
+};
+
+define.classMethod('executeDbAdminCommand', {callback: true, promise:true});
+
+/**
+ * Creates an index on the specified collection in the db.
+ * @method
+ * @param {string} name Name of the collection to create the index on.
+ * @param {(string|object)} fieldOrSpec Defines the index.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.unique=false] Creates a unique index.
+ * @param {boolean} [options.sparse=false] Creates a sparse index.
+ * @param {boolean} [options.background=false] Creates the index in the background, yielding whenever possible.
+ * @param {boolean} [options.dropDups=false] A unique index cannot be created on a key that has pre-existing duplicate values. Set this option to create the index anyway, keeping the first document the database indexes and deleting all subsequent documents that have a duplicate value.
+ * @param {number} [options.min=null] For geospatial indexes set the lower bound for the co-ordinates.
+ * @param {number} [options.max=null] For geospatial indexes set the high bound for the co-ordinates.
+ * @param {number} [options.v=null] Specify the format version of the indexes.
+ * @param {number} [options.expireAfterSeconds=null] Allows you to expire data on the collection via a TTL index (MongoDB 2.2 or higher)
+ * @param {number} [options.name=null] Override the autogenerated index name (useful if the resulting name is larger than 128 bytes)
+ * @param {object} [options.partialFilterExpression=null] Creates a partial index based on the given filter object (MongoDB 3.2 or higher)
+ * @param {Db~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Db.prototype.createIndex = function(name, fieldOrSpec, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 2);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ options = args.length ? args.shift() || {} : {};
+ options = typeof callback === 'function' ? options : callback;
+ options = options == null ? {} : options;
+ // Shallow clone the options
+ options = shallowClone(options);
+ // Run only against primary
+ options.readPreference = ReadPreference.PRIMARY;
+
+ // If a callback was provided, execute immediately
+ if(typeof callback == 'function') return createIndex(self, name, fieldOrSpec, options, callback);
+ // Return a promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ createIndex(self, name, fieldOrSpec, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+var createIndex = function(self, name, fieldOrSpec, options, callback) {
+ // Get the write concern options
+ var finalOptions = writeConcern({}, self, options);
+ // Ensure we have a callback
+ if(finalOptions.writeConcern && typeof callback != 'function') {
+ throw MongoError.create({message: "Cannot use a writeConcern without a provided callback", driver:true});
+ }
+
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return callback(new MongoError('topology was destroyed'));
+
+ // Attempt to run using createIndexes command
+ createIndexUsingCreateIndexes(self, name, fieldOrSpec, options, function(err, result) {
+ if(err == null) return handleCallback(callback, err, result);
+
+ // 67 = 'CannotCreateIndex' (malformed index options)
+ // 85 = 'IndexOptionsConflict' (index already exists with different options)
+ // 11000 = 'DuplicateKey' (couldn't build unique index because of dupes)
+ // These errors mean that the server recognized `createIndex` as a command
+ // and so we don't need to fallback to an insert.
+ if(err.code === 67 || err.code == 11000 || err.code === 85) {
+ return handleCallback(callback, err, result);
+ }
+
+ // Create command
+ var doc = createCreateIndexCommand(self, name, fieldOrSpec, options);
+ // Set no key checking
+ finalOptions.checkKeys = false;
+ // Insert document
+ self.s.topology.insert(f("%s.%s", self.s.databaseName, Db.SYSTEM_INDEX_COLLECTION), doc, finalOptions, function(err, result) {
+ if(callback == null) return;
+ if(err) return handleCallback(callback, err);
+ if(result == null) return handleCallback(callback, null, null);
+ if(result.result.writeErrors) return handleCallback(callback, MongoError.create(result.result.writeErrors[0]), null);
+ handleCallback(callback, null, doc.name);
+ });
+ });
+}
+
+define.classMethod('createIndex', {callback: true, promise:true});
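`createIndex` above first tries the `createIndexes` command and only falls back to a legacy `system.indexes` insert when the error does not prove the server understood the command. The decision as a standalone predicate (helper name hypothetical):

```javascript
// Error codes that mean the server DID recognize the createIndexes
// command, so falling back to a system.indexes insert would be pointless:
// 67 CannotCreateIndex, 85 IndexOptionsConflict, 11000 DuplicateKey.
function shouldFallbackToInsert(err) {
  if(err == null) return false;  // success: nothing to fall back from
  if(err.code === 67 || err.code === 85 || err.code === 11000) return false;
  return true;                   // e.g. old server that lacks the command
}

shouldFallbackToInsert({code: 59});    // true  (command not found)
shouldFallbackToInsert({code: 11000}); // false (duplicate key, command ran)
```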
+
+/**
+ * Ensures that an index exists, if it does not it creates it
+ * @method
+ * @deprecated since version 2.0
+ * @param {string} name Name of the collection to create the index on.
+ * @param {(string|object)} fieldOrSpec Defines the index.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.unique=false] Creates a unique index.
+ * @param {boolean} [options.sparse=false] Creates a sparse index.
+ * @param {boolean} [options.background=false] Creates the index in the background, yielding whenever possible.
+ * @param {boolean} [options.dropDups=false] A unique index cannot be created on a key that has pre-existing duplicate values. Set this option to create the index anyway, keeping the first document the database indexes and deleting all subsequent documents that have a duplicate value.
+ * @param {number} [options.min=null] For geospatial indexes set the lower bound for the co-ordinates.
+ * @param {number} [options.max=null] For geospatial indexes set the high bound for the co-ordinates.
+ * @param {number} [options.v=null] Specify the format version of the indexes.
+ * @param {number} [options.expireAfterSeconds=null] Allows you to expire data on the collection via a TTL index (MongoDB 2.2 or higher)
+ * @param {number} [options.name=null] Override the autogenerated index name (useful if the resulting name is larger than 128 bytes)
+ * @param {Db~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Db.prototype.ensureIndex = function(name, fieldOrSpec, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+
+ // If a callback was provided, execute immediately
+ if(typeof callback == 'function') return ensureIndex(self, name, fieldOrSpec, options, callback);
+
+ // Return a promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ ensureIndex(self, name, fieldOrSpec, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+var ensureIndex = function(self, name, fieldOrSpec, options, callback) {
+ // Get the write concern options
+ var finalOptions = writeConcern({}, self, options);
+ // Create command
+ var selector = createCreateIndexCommand(self, name, fieldOrSpec, options);
+ var index_name = selector.name;
+
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return callback(new MongoError('topology was destroyed'));
+
+ // Default command options
+ var commandOptions = {};
+ // Check if the index already exists
+ self.indexInformation(name, finalOptions, function(err, indexInformation) {
+ if(err != null && err.code != 26) return handleCallback(callback, err, null);
+ // If the index does not exist, create it
+ if(indexInformation == null || !indexInformation[index_name]) {
+ self.createIndex(name, fieldOrSpec, options, callback);
+ } else {
+ if(typeof callback === 'function') return handleCallback(callback, null, index_name);
+ }
+ });
+}
+
+define.classMethod('ensureIndex', {callback: true, promise:true});
+
+Db.prototype.addChild = function(db) {
+ if(this.s.parentDb) return this.s.parentDb.addChild(db);
+ this.s.children.push(db);
+}
+
+/**
+ * Create a new Db instance sharing the current socket connections. Be aware that the new db instances are
+ * related in a parent-child relationship to the original instance so that events are correctly emitted on child
+ * db instances. Child db instances are cached so performing db('db1') twice will return the same instance.
+ * You can control these behaviors with the options noListener and returnNonCachedInstance.
+ *
+ * @method
+ * @param {string} name The name of the database we want to use.
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.noListener=false] Do not make the db an event listener to the original connection.
+ * @param {boolean} [options.returnNonCachedInstance=false] Control if you want to return a cached instance or have a new one created
+ * @return {Db}
+ */
+Db.prototype.db = function(dbName, options) {
+ options = options || {};
+
+ // Copy the options and add our internal overrides
+ var finalOptions = assign({}, this.options, options);
+
+ // Do we have the db in the cache already
+ if(this.s.dbCache[dbName] && finalOptions.returnNonCachedInstance !== true) {
+ return this.s.dbCache[dbName];
+ }
+
+ // Add current db as parentDb
+ if(finalOptions.noListener == null || finalOptions.noListener == false) {
+ finalOptions.parentDb = this;
+ }
+
+ // Add promiseLibrary
+ finalOptions.promiseLibrary = this.s.promiseLibrary;
+
+ // Return the db object
+ var db = new Db(dbName, this.s.topology, finalOptions)
+
+ // Add as child
+ if(finalOptions.noListener == null || finalOptions.noListener == false) {
+ this.addChild(db);
+ }
+
+ // Add the db to the cache
+ this.s.dbCache[dbName] = db;
+ // Return the database
+ return db;
+};
+
+define.classMethod('db', {callback: false, promise:false, returns: [Db]});
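`Db.prototype.db` above caches child instances by name unless `returnNonCachedInstance` is set. The cache lookup in isolation (function and variable names are hypothetical, not part of the driver):

```javascript
// Sketch of the child-db cache in Db.prototype.db: return the cached
// instance for a name unless the caller opts out with
// returnNonCachedInstance, in which case a fresh instance is created
// and becomes the new cached one.
function lookupDb(cache, dbName, options, create) {
  options = options || {};
  if(cache[dbName] && options.returnNonCachedInstance !== true) {
    return cache[dbName];
  }
  var db = create(dbName);
  cache[dbName] = db;
  return db;
}

var cache = {};
var make = function(name) { return {name: name}; };
var first = lookupDb(cache, 'db1', {}, make);
var second = lookupDb(cache, 'db1', {}, make);
// first === second: the cached instance is reused
```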
+
+var _executeAuthCreateUserCommand = function(self, username, password, options, callback) {
+ // Special case where there is no password ($external users)
+ if(typeof username == 'string'
+ && password != null && typeof password == 'object') {
+ options = password;
+ password = null;
+ }
+
+ // Unpack all options
+ if(typeof options == 'function') {
+ callback = options;
+ options = {};
+ }
+
+ // Error out if the digestPassword option is set
+ if(options.digestPassword != null) {
+ throw toError("The digestPassword option is not supported via add_user. Please use db.command('createUser', ...) instead for this option.");
+ }
+
+ // Get additional values
+ var customData = options.customData != null ? options.customData : {};
+ var roles = Array.isArray(options.roles) ? options.roles : [];
+ var maxTimeMS = typeof options.maxTimeMS == 'number' ? options.maxTimeMS : null;
+
+ // If no roles are defined, print a deprecation message
+ if(roles.length == 0) {
+ console.log("Creating a user without roles is deprecated in MongoDB >= 2.6");
+ }
+
+ // Get the error options
+ var commandOptions = {writeCommand:true};
+ if(options['dbName']) commandOptions.dbName = options['dbName'];
+
+ // Add maxTimeMS to options if set
+ if(maxTimeMS != null) commandOptions.maxTimeMS = maxTimeMS;
+
+ // Check the db name and add roles if needed
+ if((self.databaseName.toLowerCase() == 'admin' || options.dbName == 'admin') && !Array.isArray(options.roles)) {
+ roles = ['root']
+ } else if(!Array.isArray(options.roles)) {
+ roles = ['dbOwner']
+ }
+
+ // Build the command to execute
+ var command = {
+ createUser: username
+ , customData: customData
+ , roles: roles
+ , digestPassword:false
+ }
+
+ // Apply write concern to command
+ command = writeConcern(command, self, options);
+
+ // Use node md5 generator
+ var md5 = crypto.createHash('md5');
+ // Generate keys used for authentication
+ md5.update(username + ":mongo:" + password);
+ var userPassword = md5.digest('hex');
+
+ // Set the hashed password on the command if a password was provided
+ if(typeof password == 'string') {
+ command.pwd = userPassword;
+ }
+
+ // Force write using primary
+ commandOptions.readPreference = ReadPreference.primary;
+
+ // Execute the command
+ self.command(command, commandOptions, function(err, result) {
+ if(err && err.ok == 0 && err.code == undefined) return handleCallback(callback, {code: -5000}, null);
+ if(err) return handleCallback(callback, err, null);
+ handleCallback(callback, !result.ok ? toError(result) : null
+ , result.ok ? [{user: username, pwd: ''}] : null);
+ })
+}
+
+var addUser = function(self, username, password, options, callback) {
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return callback(new MongoError('topology was destroyed'));
+ // Attempt to execute auth command
+ _executeAuthCreateUserCommand(self, username, password, options, function(err, r) {
+ // We need to perform the backward compatible insert operation
+ if(err && err.code == -5000) {
+ var finalOptions = writeConcern(shallowClone(options), self, options);
+ // Use node md5 generator
+ var md5 = crypto.createHash('md5');
+ // Generate keys used for authentication
+ md5.update(username + ":mongo:" + password);
+ var userPassword = md5.digest('hex');
+
+ // If we have another db set
+ var db = options.dbName ? self.db(options.dbName) : self;
+
+ // Fetch a user collection
+ var collection = db.collection(Db.SYSTEM_USER_COLLECTION);
+
+ // Check if we are inserting the first user
+ collection.count({}, function(err, count) {
+ // We got an error (e.g. not authorized)
+ if(err != null) return handleCallback(callback, err, null);
+ // Check if the user exists and update it
+ collection.find({user: username}, {dbName: options['dbName']}).toArray(function(err, documents) {
+ // We got an error (e.g. not authorized)
+ if(err != null) return handleCallback(callback, err, null);
+ // Add command keys
+ finalOptions.upsert = true;
+
+ // We have a user, let's update the password or upsert if not
+ collection.update({user: username},{$set: {user: username, pwd: userPassword}}, finalOptions, function(err, results, full) {
+ if(count == 0 && err) return handleCallback(callback, null, [{user:username, pwd:userPassword}]);
+ if(err) return handleCallback(callback, err, null)
+ handleCallback(callback, null, [{user:username, pwd:userPassword}]);
+ });
+ });
+ });
+
+ return;
+ }
+
+ if(err) return handleCallback(callback, err);
+ handleCallback(callback, err, r);
+ });
+}
+
+/**
+ * Add a user to the database.
+ * @method
+ * @param {string} username The username.
+ * @param {string} password The password.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {object} [options.customData=null] Custom data associated with the user (only MongoDB 2.6 or higher)
+ * @param {object[]} [options.roles=null] Roles associated with the created user (only MongoDB 2.6 or higher)
+ * @param {Db~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
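+ * @example
+ * // Illustrative sketch only; assumes `db` is an open, authorized Db instance
+ * db.addUser('user1', 'pass1', {roles: ['readWrite']}, function(err, result) {
+ *   // result contains the created user information
+ * });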
+ */
+Db.prototype.addUser = function(username, password, options, callback) {
+ // Unpack the parameters
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 2);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ options = args.length ? args.shift() || {} : {};
+
+ // If we have a callback fallback
+ if(typeof callback == 'function') return addUser(self, username, password, options, callback);
+
+ // Return a promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ addUser(self, username, password, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+define.classMethod('addUser', {callback: true, promise:true});
+
+var _executeAuthRemoveUserCommand = function(self, username, options, callback) {
+ if(typeof options == 'function') callback = options, options = {};
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return callback(new MongoError('topology was destroyed'));
+ // Get the error options
+ var commandOptions = {writeCommand:true};
+ if(options['dbName']) commandOptions.dbName = options['dbName'];
+
+ // Get additional values
+ var maxTimeMS = typeof options.maxTimeMS == 'number' ? options.maxTimeMS : null;
+
+ // Add maxTimeMS to options if set
+ if(maxTimeMS != null) commandOptions.maxTimeMS = maxTimeMS;
+
+ // Build the command to execute
+ var command = {
+ dropUser: username
+ }
+
+ // Apply write concern to command
+ command = writeConcern(command, self, options);
+
+ // Force write using primary
+ commandOptions.readPreference = ReadPreference.primary;
+
+ // Execute the command
+ self.command(command, commandOptions, function(err, result) {
+ if(err && !err.ok && err.code == undefined) return handleCallback(callback, {code: -5000});
+ if(err) return handleCallback(callback, err, null);
+ handleCallback(callback, null, result.ok ? true : false);
+ })
+}
+
+var removeUser = function(self, username, options, callback) {
+ // Attempt to execute command
+ _executeAuthRemoveUserCommand(self, username, options, function(err, result) {
+ if(err && err.code == -5000) {
+ var finalOptions = writeConcern(shallowClone(options), self, options);
+ // If we have another db set
+ var db = options.dbName ? self.db(options.dbName) : self;
+
+ // Fetch a user collection
+ var collection = db.collection(Db.SYSTEM_USER_COLLECTION);
+
+ // Locate the user
+ collection.findOne({user: username}, {}, function(err, user) {
+ if(user == null) return handleCallback(callback, err, false);
+ collection.remove({user: username}, finalOptions, function(err, result) {
+ handleCallback(callback, err, true);
+ });
+ });
+
+ return;
+ }
+
+ if(err) return handleCallback(callback, err);
+ handleCallback(callback, err, result);
+ });
+}
+
+define.classMethod('removeUser', {callback: true, promise:true});
+
+/**
+ * Remove a user from a database
+ * @method
+ * @param {string} username The username.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {Db~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+Db.prototype.removeUser = function(username, options, callback) {
+ // Unpack the parameters
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 1);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ options = args.length ? args.shift() || {} : {};
+
+ // If we have a callback fallback
+ if(typeof callback == 'function') return removeUser(self, username, options, callback);
+
+ // Return a promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ removeUser(self, username, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+var authenticate = function(self, username, password, options, callback) {
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return callback(new MongoError('topology was destroyed'));
+
+ // The default db to authenticate against is 'self';
+ // if authenticate is called from a retry context, it may be another one, like admin
+ var authdb = options.authdb ? options.authdb : options.dbName;
+ authdb = options.authSource ? options.authSource : authdb;
+ authdb = authdb ? authdb : self.databaseName;
+
+ // Callback
+ var _callback = function(err, result) {
+ if(self.listeners('authenticated').length > 0) {
+ self.emit('authenticated', err, result);
+ }
+
+ // Return to caller
+ handleCallback(callback, err, result);
+ }
+
+ // authMechanism
+ var authMechanism = options.authMechanism || '';
+ authMechanism = authMechanism.toUpperCase();
+
+ // If classic auth delegate to auth command
+ if(authMechanism == 'MONGODB-CR') {
+ self.s.topology.auth('mongocr', authdb, username, password, function(err, result) {
+ if(err) return handleCallback(callback, err, false);
+ _callback(null, true);
+ });
+ } else if(authMechanism == 'PLAIN') {
+ self.s.topology.auth('plain', authdb, username, password, function(err, result) {
+ if(err) return handleCallback(callback, err, false);
+ _callback(null, true);
+ });
+ } else if(authMechanism == 'MONGODB-X509') {
+ self.s.topology.auth('x509', authdb, username, password, function(err, result) {
+ if(err) return handleCallback(callback, err, false);
+ _callback(null, true);
+ });
+ } else if(authMechanism == 'SCRAM-SHA-1') {
+ self.s.topology.auth('scram-sha-1', authdb, username, password, function(err, result) {
+ if(err) return handleCallback(callback, err, false);
+ _callback(null, true);
+ });
+ } else if(authMechanism == 'GSSAPI') {
+ if(process.platform == 'win32') {
+ self.s.topology.auth('sspi', authdb, username, password, options, function(err, result) {
+ if(err) return handleCallback(callback, err, false);
+ _callback(null, true);
+ });
+ } else {
+ self.s.topology.auth('gssapi', authdb, username, password, options, function(err, result) {
+ if(err) return handleCallback(callback, err, false);
+ _callback(null, true);
+ });
+ }
+ } else if(authMechanism == 'DEFAULT') {
+ self.s.topology.auth('default', authdb, username, password, function(err, result) {
+ if(err) return handleCallback(callback, err, false);
+ _callback(null, true);
+ });
+ } else {
+ handleCallback(callback, MongoError.create({message: f("authentication mechanism %s not supported", options.authMechanism), driver:true}));
+ }
+}
+
+/**
+ * Authenticate a user against the server.
+ * @method
+ * @param {string} username The username.
+ * @param {string} [password] The password.
+ * @param {object} [options=null] Optional settings.
+ * @param {string} [options.authMechanism=DEFAULT] The authentication mechanism to use; one of DEFAULT, GSSAPI, MONGODB-CR, MONGODB-X509, SCRAM-SHA-1 or PLAIN
+ * @param {Db~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
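+ * @example
+ * // Illustrative sketch only; assumes `db` is an open Db instance
+ * db.authenticate('user1', 'pass1', {authMechanism: 'SCRAM-SHA-1'}, function(err, result) {
+ *   // result is true when authentication succeeded
+ * });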
+ */
+Db.prototype.authenticate = function(username, password, options, callback) {
+ if(typeof options == 'function') callback = options, options = {};
+ var self = this;
+ // Shallow copy the options
+ options = shallowClone(options);
+
+ // Set default mechanism
+ if(!options.authMechanism) {
+ options.authMechanism = 'DEFAULT';
+ } else if(options.authMechanism != 'GSSAPI'
+ && options.authMechanism != 'DEFAULT'
+ && options.authMechanism != 'MONGODB-CR'
+ && options.authMechanism != 'MONGODB-X509'
+ && options.authMechanism != 'SCRAM-SHA-1'
+ && options.authMechanism != 'PLAIN') {
+ return handleCallback(callback, MongoError.create({message: "only DEFAULT, GSSAPI, PLAIN, MONGODB-X509, SCRAM-SHA-1 or MONGODB-CR is supported by authMechanism", driver:true}));
+ }
+
+ // If we have a callback fallback
+ if(typeof callback == 'function') return authenticate(self, username, password, options, function(err, r) {
+ // Support failed auth method
+ if(err && err.message && err.message.indexOf('saslStart') != -1) err.code = 59;
+ // Reject error
+ if(err) return callback(err, r);
+ callback(null, r);
+ });
+
+ // Return a promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ authenticate(self, username, password, options, function(err, r) {
+ // Support failed auth method
+ if(err && err.message && err.message.indexOf('saslStart') != -1) err.code = 59;
+ // Reject error
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+define.classMethod('authenticate', {callback: true, promise:true});
+
+/**
+ * Logout user from the server; fires off on all connections and removes all auth info
+ * @method
+ * @param {object} [options=null] Optional settings.
+ * @param {string} [options.dbName=null] Logout against different database than current.
+ * @param {Db~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
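+ * @example
+ * // Illustrative sketch only; assumes a previously authenticated Db instance
+ * db.logout(function(err, result) {
+ *   // result is true when the logout command succeeded
+ * });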
+ */
+Db.prototype.logout = function(options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+
+ // Establish the correct database name
+ var dbName = this.s.authSource ? this.s.authSource : this.s.databaseName;
+ dbName = options.dbName ? options.dbName : dbName;
+
+ // If we have a callback
+ if(typeof callback == 'function') {
+ return self.s.topology.logout(dbName, function(err, r) {
+ if(err) return callback(err);
+ callback(null, true);
+ });
+ }
+
+ // Return a promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ self.s.topology.logout(dbName, function(err, r) {
+ if(err) return reject(err);
+ resolve(true);
+ });
+ });
+}
+
+define.classMethod('logout', {callback: true, promise:true});
+
+// Figure out the read preference
+var getReadPreference = function(options, db) {
+ if(options.readPreference) return options;
+ if(db.readPreference) options.readPreference = db.readPreference;
+ return options;
+}
+
+/**
+ * Retrieves the index information for the specified collection.
+ * @method
+ * @param {string} name The name of the collection.
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.full=false] Returns the full raw index information.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {Db~resultCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
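+ * @example
+ * // Illustrative sketch only; assumes a collection named 'docs' with indexes
+ * db.indexInformation('docs', function(err, info) {
+ *   // info maps index names to key arrays, e.g. {_id_: [['_id', 1]]}
+ * });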
+ */
+Db.prototype.indexInformation = function(name, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+
+ // If we have a callback fallback
+ if(typeof callback == 'function') return indexInformation(self, name, options, callback);
+
+ // Return a promise
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ indexInformation(self, name, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ });
+ });
+};
+
+var indexInformation = function(self, name, options, callback) {
+ // If we specified full information
+ var full = options['full'] == null ? false : options['full'];
+
+ // Did the user destroy the topology
+ if(self.serverConfig && self.serverConfig.isDestroyed()) return callback(new MongoError('topology was destroyed'));
+
+ // Process all the results from the index command and collection
+ var processResults = function(indexes) {
+ // Contains all the information
+ var info = {};
+ // Process all the indexes
+ for(var i = 0; i < indexes.length; i++) {
+ var index = indexes[i];
+ // Let's unpack the object
+ info[index.name] = [];
+ for(var name in index.key) {
+ info[index.name].push([name, index.key[name]]);
+ }
+ }
+
+ return info;
+ }
+
+ // Get the list of indexes of the specified collection
+ self.collection(name).listIndexes().toArray(function(err, indexes) {
+ if(err) return callback(toError(err));
+ if(!Array.isArray(indexes)) return handleCallback(callback, null, []);
+ if(full) return handleCallback(callback, null, indexes);
+ handleCallback(callback, null, processResults(indexes));
+ });
+}
+
+define.classMethod('indexInformation', {callback: true, promise:true});
+
+var createCreateIndexCommand = function(db, name, fieldOrSpec, options) {
+ var indexParameters = parseIndexOptions(fieldOrSpec);
+ var fieldHash = indexParameters.fieldHash;
+ var keys = indexParameters.keys;
+
+ // Generate the index name
+ var indexName = typeof options.name == 'string' ? options.name : indexParameters.name;
+ var selector = {
+ 'ns': db.databaseName + "." + name, 'key': fieldHash, 'name': indexName
+ }
+
+ // Ensure we have a correct finalUnique
+ var finalUnique = options == null || 'object' === typeof options ? false : options;
+ // Set up options
+ options = options == null || typeof options == 'boolean' ? {} : options;
+
+ // Add all the options
+ var keysToOmit = Object.keys(selector);
+ for(var optionName in options) {
+ if(keysToOmit.indexOf(optionName) == -1) {
+ selector[optionName] = options[optionName];
+ }
+ }
+
+ if(selector['unique'] == null) selector['unique'] = finalUnique;
+
+ // Remove any write concern operations
+ var removeKeys = ['w', 'wtimeout', 'j', 'fsync', 'readPreference'];
+ for(var i = 0; i < removeKeys.length; i++) {
+ delete selector[removeKeys[i]];
+ }
+
+ // Return the command creation selector
+ return selector;
+}
+
+var createIndexUsingCreateIndexes = function(self, name, fieldOrSpec, options, callback) {
+ // Build the index
+ var indexParameters = parseIndexOptions(fieldOrSpec);
+ // Generate the index name
+ var indexName = typeof options.name == 'string' ? options.name : indexParameters.name;
+ // Set up the index
+ var indexes = [{ name: indexName, key: indexParameters.fieldHash }];
+ // merge all the options
+ var keysToOmit = Object.keys(indexes[0]);
+ for(var optionName in options) {
+ if(keysToOmit.indexOf(optionName) == -1) {
+ indexes[0][optionName] = options[optionName];
+ }
+ }
+
+ // Remove any write concern operations
+ var removeKeys = ['w', 'wtimeout', 'j', 'fsync', 'readPreference'];
+ for(var i = 0; i < removeKeys.length; i++) {
+ delete indexes[0][removeKeys[i]];
+ }
+
+ // Get capabilities
+ var capabilities = self.s.topology.capabilities();
+
+ // Did the user pass in a collation, check if our write server supports it
+ if(indexes[0].collation && capabilities && !capabilities.commandsTakeCollation) {
+ // Create a new error
+ var error = new MongoError(f('server/primary/mongos does not support collation'));
+ error.code = 67;
+ // Return the error
+ return callback(error);
+ }
+
+ // Create command, apply write concern to command
+ var cmd = writeConcern({createIndexes: name, indexes: indexes}, self, options);
+
+ // Decorate command with writeConcern if supported
+ decorateWithWriteConcern(cmd, self, options);
+
+ // ReadPreference primary
+ options.readPreference = ReadPreference.PRIMARY;
+
+ // Build the command
+ self.command(cmd, options, function(err, result) {
+ if(err) return handleCallback(callback, err, null);
+ if(result.ok == 0) return handleCallback(callback, toError(result), null);
+ // Return the indexName for backward compatibility
+ handleCallback(callback, null, indexName);
+ });
+}
+
+// Validate the database name
+var validateDatabaseName = function(databaseName) {
+ if(typeof databaseName !== 'string') throw MongoError.create({message: "database name must be a string", driver:true});
+ if(databaseName.length === 0) throw MongoError.create({message: "database name cannot be the empty string", driver:true});
+ if(databaseName == '$external') return;
+
+ var invalidChars = [" ", ".", "$", "/", "\\"];
+ for(var i = 0; i < invalidChars.length; i++) {
+ if(databaseName.indexOf(invalidChars[i]) != -1) throw MongoError.create({message: "database names cannot contain the character '" + invalidChars[i] + "'", driver:true});
+ }
+}
+
+// Get write concern
+var writeConcern = function(target, db, options) {
+ if(options.w != null || options.j != null || options.fsync != null) {
+ var opts = {};
+ if(options.w) opts.w = options.w;
+ if(options.wtimeout) opts.wtimeout = options.wtimeout;
+ if(options.j) opts.j = options.j;
+ if(options.fsync) opts.fsync = options.fsync;
+ target.writeConcern = opts;
+ } else if(db.writeConcern.w != null || db.writeConcern.j != null || db.writeConcern.fsync != null) {
+ target.writeConcern = db.writeConcern;
+ }
+
+ return target
+}
+
+// Add listeners to topology
+var createListener = function(self, e, object) {
+ var listener = function(err) {
+ if(object.listeners(e).length > 0) {
+ object.emit(e, err, self);
+
+ // Emit on all associated db's if available
+ for(var i = 0; i < self.s.children.length; i++) {
+ self.s.children[i].emit(e, err, self.s.children[i]);
+ }
+ }
+ }
+ return listener;
+}
+
+
+/**
+ * Unref all sockets
+ * @method
+ */
+Db.prototype.unref = function(options, callback) {
+ this.s.topology.unref();
+}
+
+/**
+ * Db close event
+ *
+ * Emitted after a socket closed against a single server or mongos proxy.
+ *
+ * @event Db#close
+ * @type {MongoError}
+ */
+
+/**
+ * Db authenticated event
+ *
+ * Emitted after all server members in the topology (single server, replicaset or mongos) have successfully authenticated.
+ *
+ * @event Db#authenticated
+ * @type {object}
+ */
+
+/**
+ * Db reconnect event
+ *
+ * * Server: Emitted when the driver has reconnected and re-authenticated.
+ * * ReplicaSet: N/A
+ * * Mongos: Emitted when the driver reconnects and re-authenticates successfully against a Mongos.
+ *
+ * @event Db#reconnect
+ * @type {object}
+ */
+
+/**
+ * Db error event
+ *
+ * Emitted after an error occurred against a single server or mongos proxy.
+ *
+ * @event Db#error
+ * @type {MongoError}
+ */
+
+/**
+ * Db timeout event
+ *
+ * Emitted after a socket timeout occurred against a single server or mongos proxy.
+ *
+ * @event Db#timeout
+ * @type {MongoError}
+ */
+
+/**
+ * Db parseError event
+ *
+ * The parseError event is emitted if the driver detects illegal or corrupt BSON being received from the server.
+ *
+ * @event Db#parseError
+ * @type {MongoError}
+ */
+
+/**
+ * Db fullsetup event, emitted when all servers in the topology have been connected to at start up time.
+ *
+ * * Server: Emitted when the driver has connected to the single server and has authenticated.
+ * * ReplSet: Emitted after the driver has attempted to connect to all replicaset members.
+ * * Mongos: Emitted after the driver has attempted to connect to all mongos proxies.
+ *
+ * @event Db#fullsetup
+ * @type {Db}
+ */
+
+// Constants
+Db.SYSTEM_NAMESPACE_COLLECTION = "system.namespaces";
+Db.SYSTEM_INDEX_COLLECTION = "system.indexes";
+Db.SYSTEM_PROFILE_COLLECTION = "system.profile";
+Db.SYSTEM_USER_COLLECTION = "system.users";
+Db.SYSTEM_COMMAND_COLLECTION = "$cmd";
+Db.SYSTEM_JS_COLLECTION = "system.js";
+
+module.exports = Db;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs-stream/download.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs-stream/download.js
new file mode 100644
index 0000000..494c34a
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs-stream/download.js
@@ -0,0 +1,399 @@
+var shallowClone = require('../utils').shallowClone;
+var stream = require('stream');
+var util = require('util');
+
+module.exports = GridFSBucketReadStream;
+
+/**
+ * A readable stream that enables you to read buffers from GridFS.
+ *
+ * Do not instantiate this class directly. Use `openDownloadStream()` instead.
+ *
+ * @class
+ * @param {Collection} chunks Handle for chunks collection
+ * @param {Collection} files Handle for files collection
+ * @param {Object} readPreference The read preference to use
+ * @param {Object} filter The query to use to find the file document
+ * @param {Object} [options=null] Optional settings.
+ * @param {Object} [options.sort=null] Optional sort for the file find query
+ * @param {Number} [options.skip=null] Optional skip for the file find query
+ * @param {Number} [options.start=null] Optional 0-based offset in bytes to start streaming from
+ * @param {Number} [options.end=null] Optional 0-based offset in bytes to stop streaming before
+ * @fires GridFSBucketReadStream#error
+ * @fires GridFSBucketReadStream#file
+ * @return {GridFSBucketReadStream} a GridFSBucketReadStream instance.
+ */
+
+function GridFSBucketReadStream(chunks, files, readPreference, filter, options) {
+ var _this = this;
+ this.s = {
+ bytesRead: 0,
+ chunks: chunks,
+ cursor: null,
+ expected: 0,
+ files: files,
+ filter: filter,
+ init: false,
+ expectedEnd: 0,
+ file: null,
+ options: options,
+ readPreference: readPreference
+ };
+
+ stream.Readable.call(this);
+}
+
+util.inherits(GridFSBucketReadStream, stream.Readable);
+
+/**
+ * An error occurred
+ *
+ * @event GridFSBucketReadStream#error
+ * @type {Error}
+ */
+
+/**
+ * Fires when the stream loaded the file document corresponding to the
+ * provided id.
+ *
+ * @event GridFSBucketReadStream#file
+ * @type {object}
+ */
+
+/**
+ * Emitted when a chunk of data is available to be consumed.
+ *
+ * @event GridFSBucketReadStream#data
+ * @type {object}
+ */
+
+/**
+ * Fired when the stream is exhausted (no more data events).
+ *
+ * @event GridFSBucketReadStream#end
+ * @type {object}
+ */
+
+/**
+ * Fired when the stream is exhausted and the underlying cursor is killed
+ *
+ * @event GridFSBucketReadStream#close
+ * @type {object}
+ */
+
+/**
+ * Reads from the cursor and pushes to the stream.
+ * @method
+ */
+
+GridFSBucketReadStream.prototype._read = function() {
+ var _this = this;
+ if (this.destroyed) {
+ return;
+ }
+
+ waitForFile(_this, function() {
+ doRead(_this);
+ });
+};
+
+/**
+ * Sets the 0-based offset in bytes to start streaming from. Throws
+ * an error if this stream has entered flowing mode
+ * (e.g. if you've already called `on('data')`)
+ * @method
+ * @param {Number} start Offset in bytes to start reading at
+ * @return {GridFSBucketReadStream}
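+ * @example
+ * // Illustrative sketch only; assumes `bucket` is a GridFSBucket instance
+ * // and 'file.txt' exists in GridFS; streams bytes 100 through 199
+ * bucket.openDownloadStreamByName('file.txt').start(100).end(200)
+ *   .pipe(process.stdout);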
+ */
+
+GridFSBucketReadStream.prototype.start = function(start) {
+ throwIfInitialized(this);
+ this.s.options.start = start;
+ return this;
+};
+
+/**
+ * Sets the 0-based offset in bytes to stop streaming before. Throws
+ * an error if this stream has entered flowing mode
+ * (e.g. if you've already called `on('data')`)
+ * @method
+ * @param {Number} end Offset in bytes to stop reading at
+ * @return {GridFSBucketReadStream}
+ */
+
+GridFSBucketReadStream.prototype.end = function(end) {
+ throwIfInitialized(this);
+ this.s.options.end = end;
+ return this;
+};
+
+/**
+ * Marks this stream as aborted (will never push another `data` event)
+ * and kills the underlying cursor. Will emit the 'end' event, and then
+ * the 'close' event once the cursor is successfully killed.
+ *
+ * @method
+ * @param {GridFSBucket~errorCallback} [callback] called when the cursor is successfully closed or an error occurred.
+ * @fires GridFSBucketWriteStream#close
+ * @fires GridFSBucketWriteStream#end
+ */
+
+GridFSBucketReadStream.prototype.abort = function(callback) {
+ var _this = this;
+ this.push(null);
+ this.destroyed = true;
+ if (this.s.cursor) {
+ this.s.cursor.close(function(error) {
+ _this.emit('close');
+ callback && callback(error);
+ });
+ } else {
+ if (!this.s.init) {
+ // If not initialized, fire close event because we will never
+ // get a cursor
+ _this.emit('close');
+ }
+ callback && callback();
+ }
+};
+
+/**
+ * @ignore
+ */
+
+function throwIfInitialized(self) {
+ if (self.s.init) {
+ throw new Error('You cannot change options after the stream has entered ' +
+ 'flowing mode!');
+ }
+}
+
+/**
+ * @ignore
+ */
+
+function doRead(_this) {
+ if (_this.destroyed) {
+ return;
+ }
+
+ _this.s.cursor.next(function(error, doc) {
+ if (_this.destroyed) {
+ return;
+ }
+ if (error) {
+ return __handleError(_this, error);
+ }
+ if (!doc) {
+ _this.push(null);
+ return _this.s.cursor.close(function(error) {
+ if (error) {
+ return __handleError(_this, error);
+ }
+ _this.emit('close');
+ });
+ }
+
+ var bytesRemaining = _this.s.file.length - _this.s.bytesRead;
+ var expectedN = _this.s.expected++;
+ var expectedLength = Math.min(_this.s.file.chunkSize,
+ bytesRemaining);
+
+ if (doc.n > expectedN) {
+ var errmsg = 'ChunkIsMissing: Got unexpected n: ' + doc.n +
+ ', expected: ' + expectedN;
+ return __handleError(_this, new Error(errmsg));
+ }
+
+ if (doc.n < expectedN) {
+ var errmsg = 'ExtraChunk: Got unexpected n: ' + doc.n +
+ ', expected: ' + expectedN;
+ return __handleError(_this, new Error(errmsg));
+ }
+
+ if (doc.data.length() !== expectedLength) {
+ if (bytesRemaining <= 0) {
+ var errmsg = 'ExtraChunk: Got unexpected n: ' + doc.n;
+ return __handleError(_this, new Error(errmsg));
+ }
+ var errmsg = 'ChunkIsWrongSize: Got unexpected length: ' +
+ doc.data.length() + ', expected: ' + expectedLength;
+ return __handleError(_this, new Error(errmsg));
+ }
+
+ _this.s.bytesRead += doc.data.length();
+
+ if (doc.data.buffer.length === 0) {
+ return _this.push(null);
+ }
+
+ var sliceStart = null;
+ var sliceEnd = null;
+ var buf = doc.data.buffer;
+
+ if (_this.s.bytesToSkip != null) {
+ sliceStart = _this.s.bytesToSkip;
+ _this.s.bytesToSkip = 0;
+ }
+
+ if (expectedN === _this.s.expectedEnd && _this.s.bytesToTrim != null) {
+ sliceEnd = _this.s.bytesToTrim;
+ }
+
+ // If the remaining amount of data left is < chunkSize read the right amount of data
+ if (_this.s.options.end && (
+ (_this.s.options.end - _this.s.bytesToSkip) < doc.data.length()
+ )) {
+ sliceEnd = (_this.s.options.end - _this.s.bytesToSkip);
+ }
+
+ if (sliceStart != null || sliceEnd != null) {
+ buf = buf.slice(sliceStart || 0, sliceEnd || buf.length);
+ }
+
+ _this.push(buf);
+ });
+};
+
+/**
+ * @ignore
+ */
+
+function init(self) {
+ var findOneOptions = {};
+ if (self.s.readPreference) {
+ findOneOptions.readPreference = self.s.readPreference;
+ }
+ if (self.s.options && self.s.options.sort) {
+ findOneOptions.sort = self.s.options.sort;
+ }
+ if (self.s.options && self.s.options.skip) {
+ findOneOptions.skip = self.s.options.skip;
+ }
+
+ self.s.files.findOne(self.s.filter, findOneOptions, function(error, doc) {
+ if (error) {
+ return __handleError(self, error);
+ }
+ if (!doc) {
+ var identifier = self.s.filter._id ?
+ self.s.filter._id.toString() : self.s.filter.filename;
+ var errmsg = 'FileNotFound: file ' + identifier + ' was not found';
+ var err = new Error(errmsg);
+ err.code = 'ENOENT';
+ return __handleError(self, err);
+ }
+
+ // If document is empty, kill the stream immediately and don't
+ // execute any reads
+ if (doc.length <= 0) {
+ self.push(null);
+ return;
+ }
+
+ if (self.destroyed) {
+ // If user destroys the stream before we have a cursor, wait
+ // until the query is done to say we're 'closed' because we can't
+ // cancel a query.
+ self.emit('close');
+ return;
+ }
+
+ self.s.cursor = self.s.chunks.find({ files_id: doc._id }).sort({ n: 1 });
+ if (self.s.readPreference) {
+ self.s.cursor.setReadPreference(self.s.readPreference);
+ }
+
+ self.s.expectedEnd = Math.ceil(doc.length / doc.chunkSize);
+ self.s.file = doc;
+ self.s.bytesToSkip = handleStartOption(self, doc, self.s.cursor,
+ self.s.options);
+ self.s.bytesToTrim = handleEndOption(self, doc, self.s.cursor,
+ self.s.options);
+ self.emit('file', doc);
+ });
+}
+
+/**
+ * @ignore
+ */
+
+function waitForFile(_this, callback) {
+ if (_this.s.file) {
+ return callback();
+ }
+
+ if (!_this.s.init) {
+ init(_this);
+ _this.s.init = true;
+ }
+
+ _this.once('file', function() {
+ callback();
+ });
+}
+
+/**
+ * @ignore
+ */
+
+function handleStartOption(stream, doc, cursor, options) {
+ if (options && options.start != null) {
+ if (options.start > doc.length) {
+ throw new Error('Stream start (' + options.start + ') must not be ' +
+ 'more than the length of the file (' + doc.length +')')
+ }
+ if (options.start < 0) {
+ throw new Error('Stream start (' + options.start + ') must not be ' +
+ 'negative');
+ }
+ if (options.end != null && options.end < options.start) {
+ throw new Error('Stream start (' + options.start + ') must not be ' +
+ 'greater than stream end (' + options.end + ')');
+ }
+
+ cursor.skip(Math.floor(options.start / doc.chunkSize));
+
+ stream.s.bytesRead = Math.floor(options.start / doc.chunkSize) *
+ doc.chunkSize;
+ stream.s.expected = Math.floor(options.start / doc.chunkSize);
+
+ return options.start - stream.s.bytesRead;
+ }
+}
+
+/**
+ * @ignore
+ */
+
+function handleEndOption(stream, doc, cursor, options) {
+ if (options && options.end != null) {
+ if (options.end > doc.length) {
+ throw new Error('Stream end (' + options.end + ') must not be ' +
+ 'more than the length of the file (' + doc.length +')')
+ }
+ if (options.end < 0) {
+ throw new Error('Stream end (' + options.end + ') must not be ' +
+ 'negative');
+ }
+
+ var start = options.start != null ?
+ Math.floor(options.start / doc.chunkSize) :
+ 0;
+
+ cursor.limit(Math.ceil(options.end / doc.chunkSize) - start);
+
+ stream.s.expectedEnd = Math.ceil(options.end / doc.chunkSize);
+
+ return (Math.ceil(options.end / doc.chunkSize) * doc.chunkSize) -
+ options.end;
+ }
+}
+
+/**
+ * @ignore
+ */
+
+function __handleError(_this, error) {
+ _this.emit('error', error);
+}
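The `handleStartOption` and `handleEndOption` helpers above reduce a byte range to a cursor `skip`/`limit` plus per-chunk trim amounts. A standalone sketch of that arithmetic (the helper name `computeRange` is ours, not part of the driver):

```javascript
// Mirrors the math in handleStartOption/handleEndOption: map a byte
// range [start, end) onto whole chunks plus trim amounts.
function computeRange(fileLength, chunkSize, start, end) {
  if (start < 0 || start > fileLength || end > fileLength || end < start) {
    throw new Error('invalid range');
  }
  var firstChunk = Math.floor(start / chunkSize); // chunks to cursor.skip()
  var lastChunk = Math.ceil(end / chunkSize);     // exclusive upper chunk
  return {
    skip: firstChunk,
    limit: lastChunk - firstChunk,                // value for cursor.limit()
    bytesToSkip: start - firstChunk * chunkSize,  // trimmed from first chunk
    bytesToTrim: lastChunk * chunkSize - end      // trimmed from last chunk
  };
}
```

For example, with a 1000-byte file and 255-byte chunks, the range [300, 800) reads chunks 1 through 3, skipping 45 bytes at the front of the first chunk and trimming 220 bytes from the end of the last.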
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs-stream/index.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs-stream/index.js
new file mode 100644
index 0000000..fc388b9
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs-stream/index.js
@@ -0,0 +1,366 @@
+var Emitter = require('events').EventEmitter;
+var GridFSBucketReadStream = require('./download');
+var GridFSBucketWriteStream = require('./upload');
+var shallowClone = require('../utils').shallowClone;
+var toError = require('../utils').toError;
+var util = require('util');
+
+var DEFAULT_GRIDFS_BUCKET_OPTIONS = {
+ bucketName: 'fs',
+ chunkSizeBytes: 255 * 1024
+};
+
+module.exports = GridFSBucket;
+
+/**
+ * Constructor for a streaming GridFS interface
+ * @class
+ * @param {Db} db A db handle
+ * @param {object} [options=null] Optional settings.
+ * @param {string} [options.bucketName="fs"] The 'files' and 'chunks' collections will be prefixed with the bucket name followed by a dot.
+ * @param {number} [options.chunkSizeBytes=255 * 1024] Number of bytes stored in each chunk. Defaults to 255KB
+ * @param {object} [options.writeConcern=null] Optional write concern to be passed to write operations, for instance `{ w: 1 }`
+ * @param {object} [options.readPreference=null] Optional read preference to be passed to read operations
+ * @fires GridFSBucketWriteStream#index
+ * @return {GridFSBucket}
+ */
+
+function GridFSBucket(db, options) {
+ Emitter.apply(this);
+ this.setMaxListeners(0);
+
+ if (options && typeof options === 'object') {
+ options = shallowClone(options);
+ var keys = Object.keys(DEFAULT_GRIDFS_BUCKET_OPTIONS);
+ for (var i = 0; i < keys.length; ++i) {
+ if (!options[keys[i]]) {
+ options[keys[i]] = DEFAULT_GRIDFS_BUCKET_OPTIONS[keys[i]];
+ }
+ }
+ } else {
+ options = DEFAULT_GRIDFS_BUCKET_OPTIONS;
+ }
+
+ this.s = {
+ db: db,
+ options: options,
+ _chunksCollection: db.collection(options.bucketName + '.chunks'),
+ _filesCollection: db.collection(options.bucketName + '.files'),
+ checkedIndexes: false,
+ calledOpenUploadStream: false,
+ promiseLibrary: db.s.promiseLibrary ||
+ (typeof global.Promise == 'function' ? global.Promise : require('es6-promise').Promise)
+ };
+}
+
+util.inherits(GridFSBucket, Emitter);
+
+/**
+ * When the first call to openUploadStream is made, the upload stream will
+ * check to see if it needs to create the proper indexes on the chunks and
+ * files collections. This event is fired either when 1) it determines that
+ * no index creation is necessary, or 2) when it successfully creates the
+ * necessary indexes.
+ *
+ * @event GridFSBucket#index
+ * @type {Error}
+ */
+
+/**
+ * Returns a writable stream (GridFSBucketWriteStream) for writing
+ * buffers to GridFS. The stream's 'id' property contains the resulting
+ * file's id.
+ * @method
+ * @param {string} filename The value of the 'filename' key in the files doc
+ * @param {object} [options=null] Optional settings.
+ * @param {number} [options.chunkSizeBytes=null] Optionally overwrite this bucket's chunkSizeBytes for this file
+ * @param {object} [options.metadata=null] Optional object to store in the file document's `metadata` field
+ * @param {string} [options.contentType=null] Optional string to store in the file document's `contentType` field
+ * @param {array} [options.aliases=null] Optional array of strings to store in the file document's `aliases` field
+ * @return {GridFSBucketWriteStream}
+ */
+
+GridFSBucket.prototype.openUploadStream = function(filename, options) {
+ if (options) {
+ options = shallowClone(options);
+ } else {
+ options = {};
+ }
+ if (!options.chunkSizeBytes) {
+ options.chunkSizeBytes = this.s.options.chunkSizeBytes;
+ }
+ return new GridFSBucketWriteStream(this, filename, options);
+};
+
+/**
+ * Returns a writable stream (GridFSBucketWriteStream) for writing
+ * buffers to GridFS for a custom file id. The stream's 'id' property contains the resulting
+ * file's id.
+ * @method
+ * @param {string|number|object} id A custom id used to identify the file
+ * @param {string} filename The value of the 'filename' key in the files doc
+ * @param {object} [options=null] Optional settings.
+ * @param {number} [options.chunkSizeBytes=null] Optionally overwrite this bucket's chunkSizeBytes for this file
+ * @param {object} [options.metadata=null] Optional object to store in the file document's `metadata` field
+ * @param {string} [options.contentType=null] Optional string to store in the file document's `contentType` field
+ * @param {array} [options.aliases=null] Optional array of strings to store in the file document's `aliases` field
+ * @return {GridFSBucketWriteStream}
+ */
+
+GridFSBucket.prototype.openUploadStreamWithId = function(id, filename, options) {
+ if (options) {
+ options = shallowClone(options);
+ } else {
+ options = {};
+ }
+
+ if (!options.chunkSizeBytes) {
+ options.chunkSizeBytes = this.s.options.chunkSizeBytes;
+ }
+
+ options.id = id;
+
+ return new GridFSBucketWriteStream(this, filename, options);
+};
+
+/**
+ * Returns a readable stream (GridFSBucketReadStream) for streaming file
+ * data from GridFS.
+ * @method
+ * @param {ObjectId} id The id of the file doc
+ * @param {Object} [options=null] Optional settings.
+ * @param {Number} [options.start=null] Optional 0-based offset in bytes to start streaming from
+ * @param {Number} [options.end=null] Optional 0-based offset in bytes to stop streaming before
+ * @return {GridFSBucketReadStream}
+ */
+
+GridFSBucket.prototype.openDownloadStream = function(id, options) {
+ var filter = { _id: id };
+ var streamOptions = {
+ start: options && options.start,
+ end: options && options.end
+ };
+ return new GridFSBucketReadStream(this.s._chunksCollection,
+ this.s._filesCollection, this.s.options.readPreference, filter, streamOptions);
+};
+
+/**
+ * Deletes a file with the given id
+ * @method
+ * @param {ObjectId} id The id of the file doc
+ * @param {GridFSBucket~errorCallback} [callback]
+ */
+
+GridFSBucket.prototype.delete = function(id, callback) {
+ if (typeof callback === 'function') {
+ return _delete(this, id, callback);
+ }
+
+ var _this = this;
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ _delete(_this, id, function(error, res) {
+ if (error) {
+ reject(error);
+ } else {
+ resolve(res);
+ }
+ });
+ });
+};
+
+/**
+ * @ignore
+ */
+
+function _delete(_this, id, callback) {
+ _this.s._filesCollection.deleteOne({ _id: id }, function(error, res) {
+ if (error) {
+ return callback(error);
+ }
+
+ _this.s._chunksCollection.deleteMany({ files_id: id }, function(error) {
+ if (error) {
+ return callback(error);
+ }
+
+ // Delete orphaned chunks before returning FileNotFound
+ if (!res.result.n) {
+ var errmsg = 'FileNotFound: no file with id ' + id + ' found';
+ return callback(new Error(errmsg));
+ }
+
+ callback();
+ });
+ });
+}
+
+/**
+ * Convenience wrapper around find on the files collection
+ * @method
+ * @param {Object} filter
+ * @param {Object} [options=null] Optional settings for cursor
+ * @param {number} [options.batchSize=null] Optional batch size for cursor
+ * @param {number} [options.limit=null] Optional limit for cursor
+ * @param {number} [options.maxTimeMS=null] Optional maxTimeMS for cursor
+ * @param {boolean} [options.noCursorTimeout=null] Optionally set cursor's `noCursorTimeout` flag
+ * @param {number} [options.skip=null] Optional skip for cursor
+ * @param {object} [options.sort=null] Optional sort for cursor
+ * @return {Cursor}
+ */
+
+GridFSBucket.prototype.find = function(filter, options) {
+ filter = filter || {};
+ options = options || {};
+
+ var cursor = this.s._filesCollection.find(filter);
+
+ if (options.batchSize != null) {
+ cursor.batchSize(options.batchSize);
+ }
+ if (options.limit != null) {
+ cursor.limit(options.limit);
+ }
+ if (options.maxTimeMS != null) {
+ cursor.maxTimeMS(options.maxTimeMS);
+ }
+ if (options.noCursorTimeout != null) {
+ cursor.addCursorFlag('noCursorTimeout', options.noCursorTimeout);
+ }
+ if (options.skip != null) {
+ cursor.skip(options.skip);
+ }
+ if (options.sort != null) {
+ cursor.sort(options.sort);
+ }
+
+ return cursor;
+};
+
+/**
+ * Returns a readable stream (GridFSBucketReadStream) for streaming the
+ * file with the given name from GridFS. If there are multiple files with
+ * the same name, this will stream the most recent file with the given name
+ * (as determined by the `uploadDate` field). You can set the `revision`
+ * option to change this behavior.
+ * @method
+ * @param {String} filename The name of the file to stream
+ * @param {Object} [options=null] Optional settings
+ * @param {number} [options.revision=-1] The revision number relative to the oldest file with the given filename. 0 gets you the oldest file, 1 gets you the 2nd oldest, -1 gets you the newest.
+ * @param {Number} [options.start=null] Optional 0-based offset in bytes to start streaming from
+ * @param {Number} [options.end=null] Optional 0-based offset in bytes to stop streaming before
+ * @return {GridFSBucketReadStream}
+ */
+
+GridFSBucket.prototype.openDownloadStreamByName = function(filename, options) {
+ var sort = { uploadDate: -1 };
+ var skip = null;
+ if (options && options.revision != null) {
+ if (options.revision >= 0) {
+ sort = { uploadDate: 1 };
+ skip = options.revision;
+ } else {
+ skip = -options.revision - 1;
+ }
+ }
+
+ var filter = { filename: filename };
+ var streamOptions = {
+ sort: sort,
+ skip: skip,
+ start: options && options.start,
+ end: options && options.end
+ };
+ return new GridFSBucketReadStream(this.s._chunksCollection,
+ this.s._filesCollection, this.s.options.readPreference, filter, streamOptions);
+};
+
+/**
+ * Renames the file with the given _id to the given string
+ * @method
+ * @param {ObjectId} id the id of the file to rename
+ * @param {String} filename new name for the file
+ * @param {GridFSBucket~errorCallback} [callback]
+ */
+
+GridFSBucket.prototype.rename = function(id, filename, callback) {
+ if (typeof callback === 'function') {
+ return _rename(this, id, filename, callback);
+ }
+
+ var _this = this;
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ _rename(_this, id, filename, function(error, res) {
+ if (error) {
+ reject(error);
+ } else {
+ resolve(res);
+ }
+ });
+ });
+};
+
+/**
+ * @ignore
+ */
+
+function _rename(_this, id, filename, callback) {
+ var filter = { _id: id };
+ var update = { $set: { filename: filename } };
+ _this.s._filesCollection.updateOne(filter, update, function(error, res) {
+ if (error) {
+ return callback(error);
+ }
+ if (!res.result.n) {
+ return callback(toError('File with id ' + id + ' not found'));
+ }
+ callback();
+ });
+}
+
+/**
+ * Removes this bucket's files collection, followed by its chunks collection.
+ * @method
+ * @param {GridFSBucket~errorCallback} [callback]
+ */
+
+GridFSBucket.prototype.drop = function(callback) {
+ if (typeof callback === 'function') {
+ return _drop(this, callback);
+ }
+
+ var _this = this;
+ return new this.s.promiseLibrary(function(resolve, reject) {
+ _drop(_this, function(error, res) {
+ if (error) {
+ reject(error);
+ } else {
+ resolve(res);
+ }
+ });
+ });
+};
+
+/**
+ * @ignore
+ */
+
+function _drop(_this, callback) {
+ _this.s._filesCollection.drop(function(error) {
+ if (error) {
+ return callback(error);
+ }
+ _this.s._chunksCollection.drop(function(error) {
+ if (error) {
+ return callback(error);
+ }
+
+ return callback();
+ });
+ });
+}
+
+/**
+ * Callback format for all GridFSBucket methods that can accept a callback.
+ * @callback GridFSBucket~errorCallback
+ * @param {MongoError} error An error instance representing any errors that occurred
+ */
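`openDownloadStreamByName` above turns the `revision` option into a sort direction and a skip count on the files collection. A minimal sketch of that mapping (the name `revisionToQuery` is illustrative only):

```javascript
// Mirrors the revision handling in openDownloadStreamByName:
// non-negative revisions count up from the oldest upload, negative
// revisions count back from the newest.
function revisionToQuery(revision) {
  if (revision >= 0) {
    return { sort: { uploadDate: 1 }, skip: revision };     // 0 = oldest
  }
  return { sort: { uploadDate: -1 }, skip: -revision - 1 }; // -1 = newest
}
```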
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs-stream/upload.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs-stream/upload.js
new file mode 100644
index 0000000..7625345
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs-stream/upload.js
@@ -0,0 +1,525 @@
+var core = require('mongodb-core');
+var crypto = require('crypto');
+var shallowClone = require('../utils').shallowClone;
+var stream = require('stream');
+var util = require('util');
+
+var ERROR_NAMESPACE_NOT_FOUND = 26;
+
+module.exports = GridFSBucketWriteStream;
+
+/**
+ * A writable stream that enables you to write buffers to GridFS.
+ *
+ * Do not instantiate this class directly. Use `openUploadStream()` instead.
+ *
+ * @class
+ * @param {GridFSBucket} bucket Handle for this stream's corresponding bucket
+ * @param {string} filename The value of the 'filename' key in the files doc
+ * @param {object} [options=null] Optional settings.
+ * @param {string|number|object} [options.id=null] Custom file id for the GridFS file.
+ * @param {number} [options.chunkSizeBytes=null] The chunk size to use, in bytes
+ * @param {number} [options.w=null] The write concern
+ * @param {number} [options.wtimeout=null] The write concern timeout
+ * @param {number} [options.j=null] The journal write concern
+ * @fires GridFSBucketWriteStream#error
+ * @fires GridFSBucketWriteStream#finish
+ * @return {GridFSBucketWriteStream} a GridFSBucketWriteStream instance.
+ */
+
+function GridFSBucketWriteStream(bucket, filename, options) {
+ options = options || {};
+ this.bucket = bucket;
+ this.chunks = bucket.s._chunksCollection;
+ this.filename = filename;
+ this.files = bucket.s._filesCollection;
+ this.options = options;
+
+ this.id = options.id ? options.id : core.BSON.ObjectId();
+ this.chunkSizeBytes = this.options.chunkSizeBytes;
+ this.bufToStore = new Buffer(this.chunkSizeBytes);
+ this.length = 0;
+ this.md5 = crypto.createHash('md5');
+ this.n = 0;
+ this.pos = 0;
+ this.state = {
+ streamEnd: false,
+ outstandingRequests: 0,
+ errored: false,
+ aborted: false,
+ promiseLibrary: this.bucket.s.promiseLibrary
+ };
+
+ if (!this.bucket.s.calledOpenUploadStream) {
+ this.bucket.s.calledOpenUploadStream = true;
+
+ var _this = this;
+ checkIndexes(this, function() {
+ _this.bucket.s.checkedIndexes = true;
+ _this.bucket.emit('index');
+ });
+ }
+}
+
+util.inherits(GridFSBucketWriteStream, stream.Writable);
+
+/**
+ * An error occurred
+ *
+ * @event GridFSBucketWriteStream#error
+ * @type {Error}
+ */
+
+/**
+ * `end()` was called and the write stream successfully wrote the file
+ * metadata and all the chunks to MongoDB.
+ *
+ * @event GridFSBucketWriteStream#finish
+ * @type {object}
+ */
+
+/**
+ * Write a buffer to the stream.
+ *
+ * @method
+ * @param {Buffer} chunk Buffer to write
+ * @param {String} encoding Optional encoding for the buffer
+ * @param {Function} callback Function to call when the chunk was added to the buffer, or, if this chunk caused a flush, when the entire chunk has been persisted to MongoDB.
+ * @return {Boolean} False if this write required flushing a chunk to MongoDB. True otherwise.
+ */
+
+GridFSBucketWriteStream.prototype.write = function(chunk, encoding, callback) {
+ var _this = this;
+ return waitForIndexes(this, function() {
+ return doWrite(_this, chunk, encoding, callback);
+ });
+};
+
+/**
+ * Places this write stream into an aborted state (all future writes fail)
+ * and deletes all chunks that have already been written.
+ *
+ * @method
+ * @param {GridFSBucket~errorCallback} callback called when chunks are successfully removed or error occurred
+ * @return {Promise} if no callback specified
+ */
+
+GridFSBucketWriteStream.prototype.abort = function(callback) {
+ if (this.state.streamEnd) {
+ var error = new Error('Cannot abort a stream that has already completed');
+ if (typeof callback == 'function') {
+ return callback(error);
+ }
+ return this.state.promiseLibrary.reject(error);
+ }
+ if (this.state.aborted) {
+ var error = new Error('Cannot call abort() on a stream twice');
+ if (typeof callback == 'function') {
+ return callback(error);
+ }
+ return this.state.promiseLibrary.reject(error);
+ }
+ this.state.aborted = true;
+ this.chunks.deleteMany({ files_id: this.id }, function(error) {
+ if(typeof callback == 'function') callback(error);
+ });
+};
+
+/**
+ * Tells the stream that no more data will be coming in. The stream will
+ * persist the remaining data to MongoDB, write the files document, and
+ * then emit a 'finish' event.
+ *
+ * @method
+ * @param {Buffer} chunk Buffer to write
+ * @param {String} encoding Optional encoding for the buffer
+ * @param {Function} callback Function to call when the files document and all chunks have been persisted to MongoDB
+ */
+
+GridFSBucketWriteStream.prototype.end = function(chunk, encoding, callback) {
+ if(typeof chunk == 'function') {
+ callback = chunk, chunk = null, encoding = null;
+ } else if(typeof encoding == 'function') {
+ callback = encoding, encoding = null;
+ }
+
+ if (checkAborted(this, callback)) {
+ return;
+ }
+ var _this = this;
+ this.state.streamEnd = true;
+
+ if (callback) {
+ this.once('finish', function(result) {
+ callback(null, result);
+ });
+ }
+
+ if (!chunk) {
+ waitForIndexes(this, function() {
+ writeRemnant(_this);
+ });
+ return;
+ }
+
+ this.write(chunk, encoding, function() {
+ writeRemnant(_this);
+ });
+};
+
+/**
+ * @ignore
+ */
+
+function __handleError(_this, error, callback) {
+ if (_this.state.errored) {
+ return;
+ }
+ _this.state.errored = true;
+ if (callback) {
+ return callback(error);
+ }
+ _this.emit('error', error);
+}
+
+/**
+ * @ignore
+ */
+
+function createChunkDoc(filesId, n, data) {
+ return {
+ _id: core.BSON.ObjectId(),
+ files_id: filesId,
+ n: n,
+ data: data
+ };
+}
+
+/**
+ * @ignore
+ */
+
+function checkChunksIndex(_this, callback) {
+ _this.chunks.listIndexes().toArray(function(error, indexes) {
+ if (error) {
+ // Collection doesn't exist so create index
+ if (error.code === ERROR_NAMESPACE_NOT_FOUND) {
+ var index = { files_id: 1, n: 1 };
+ _this.chunks.createIndex(index, { background: false, unique: true }, function(error) {
+ if (error) {
+ return callback(error);
+ }
+
+ callback();
+ });
+ return;
+ }
+ return callback(error);
+ }
+
+ var hasChunksIndex = false;
+ indexes.forEach(function(index) {
+ if (index.key) {
+ var keys = Object.keys(index.key);
+ if (keys.length === 2 && index.key.files_id === 1 &&
+ index.key.n === 1) {
+ hasChunksIndex = true;
+ }
+ }
+ });
+
+ if (hasChunksIndex) {
+ callback();
+ } else {
+ var index = { files_id: 1, n: 1 };
+ var indexOptions = getWriteOptions(_this);
+
+ indexOptions.background = false;
+ indexOptions.unique = true;
+
+ _this.chunks.createIndex(index, indexOptions, function(error) {
+ if (error) {
+ return callback(error);
+ }
+
+ callback();
+ });
+ }
+ });
+}
+
+/**
+ * @ignore
+ */
+
+function checkDone(_this, callback) {
+ if (_this.state.streamEnd &&
+ _this.state.outstandingRequests === 0 &&
+ !_this.state.errored) {
+ var filesDoc = createFilesDoc(_this.id, _this.length, _this.chunkSizeBytes,
+ _this.md5.digest('hex'), _this.filename, _this.options.contentType,
+ _this.options.aliases, _this.options.metadata);
+
+ if (checkAborted(_this, callback)) {
+ return false;
+ }
+
+ _this.files.insert(filesDoc, getWriteOptions(_this), function(error) {
+ if (error) {
+ return __handleError(_this, error, callback);
+ }
+ _this.emit('finish', filesDoc);
+ });
+
+ return true;
+ }
+
+ return false;
+}
+
+/**
+ * @ignore
+ */
+
+function checkIndexes(_this, callback) {
+ _this.files.findOne({}, { _id: 1 }, function(error, doc) {
+ if (error) {
+ return callback(error);
+ }
+ if (doc) {
+ return callback();
+ }
+
+ _this.files.listIndexes().toArray(function(error, indexes) {
+ if (error) {
+ // Collection doesn't exist so create index
+ if (error.code === ERROR_NAMESPACE_NOT_FOUND) {
+ var index = { filename: 1, uploadDate: 1 };
+ _this.files.createIndex(index, { background: false }, function(error) {
+ if (error) {
+ return callback(error);
+ }
+
+ checkChunksIndex(_this, callback);
+ });
+ return;
+ }
+ return callback(error);
+ }
+
+ var hasFileIndex = false;
+ indexes.forEach(function(index) {
+ var keys = Object.keys(index.key);
+ if (keys.length === 2 && index.key.filename === 1 &&
+ index.key.uploadDate === 1) {
+ hasFileIndex = true;
+ }
+ });
+
+ if (hasFileIndex) {
+ checkChunksIndex(_this, callback);
+ } else {
+ var index = { filename: 1, uploadDate: 1 };
+
+ var indexOptions = getWriteOptions(_this);
+
+ indexOptions.background = false;
+
+ _this.files.createIndex(index, indexOptions, function(error) {
+ if (error) {
+ return callback(error);
+ }
+
+ checkChunksIndex(_this, callback);
+ });
+ }
+ });
+ });
+}
+
+/**
+ * @ignore
+ */
+
+function createFilesDoc(_id, length, chunkSize, md5, filename, contentType,
+ aliases, metadata) {
+ var ret = {
+ _id: _id,
+ length: length,
+ chunkSize: chunkSize,
+ uploadDate: new Date(),
+ md5: md5,
+ filename: filename
+ };
+
+ if (contentType) {
+ ret.contentType = contentType;
+ }
+
+ if (aliases) {
+ ret.aliases = aliases;
+ }
+
+ if (metadata) {
+ ret.metadata = metadata;
+ }
+
+ return ret;
+}
+
+/**
+ * @ignore
+ */
+
+function doWrite(_this, chunk, encoding, callback) {
+ if (checkAborted(_this, callback)) {
+ return false;
+ }
+
+ var inputBuf = (Buffer.isBuffer(chunk)) ?
+ chunk : new Buffer(chunk, encoding);
+
+ _this.length += inputBuf.length;
+
+ // Input is small enough to fit in our buffer
+ if (_this.pos + inputBuf.length < _this.chunkSizeBytes) {
+ inputBuf.copy(_this.bufToStore, _this.pos);
+ _this.pos += inputBuf.length;
+
+ callback && callback();
+
+ // Note that we reverse the typical semantics of write's return value
+ // to be compatible with node's `.pipe()` function.
+ // True means client can keep writing.
+ return true;
+ }
+
+ // Otherwise, buffer is too big for current chunk, so we need to flush
+ // to MongoDB.
+ var inputBufRemaining = inputBuf.length;
+ var spaceRemaining = _this.chunkSizeBytes - _this.pos;
+ var numToCopy = Math.min(spaceRemaining, inputBuf.length);
+ var outstandingRequests = 0;
+ while (inputBufRemaining > 0) {
+ var inputBufPos = inputBuf.length - inputBufRemaining;
+ inputBuf.copy(_this.bufToStore, _this.pos,
+ inputBufPos, inputBufPos + numToCopy);
+ _this.pos += numToCopy;
+ spaceRemaining -= numToCopy;
+ if (spaceRemaining === 0) {
+ _this.md5.update(_this.bufToStore);
+ var doc = createChunkDoc(_this.id, _this.n, _this.bufToStore);
+ ++_this.state.outstandingRequests;
+ ++outstandingRequests;
+
+ if (checkAborted(_this, callback)) {
+ return false;
+ }
+
+ _this.chunks.insert(doc, getWriteOptions(_this), function(error) {
+ if (error) {
+ return __handleError(_this, error);
+ }
+ --_this.state.outstandingRequests;
+ --outstandingRequests;
+ if (!outstandingRequests) {
+ _this.emit('drain', doc);
+ callback && callback();
+ checkDone(_this);
+ }
+ });
+
+ spaceRemaining = _this.chunkSizeBytes;
+ _this.pos = 0;
+ ++_this.n;
+ }
+ inputBufRemaining -= numToCopy;
+ numToCopy = Math.min(spaceRemaining, inputBufRemaining);
+ }
+
+ // Note that we reverse the typical semantics of write's return value
+ // to be compatible with node's `.pipe()` function.
+ // False means the client should wait for the 'drain' event.
+ return false;
+}
+
+/**
+ * @ignore
+ */
+
+function getWriteOptions(_this) {
+ var obj = {};
+ if (_this.options.writeConcern) {
+ var concern = _this.options.writeConcern;
+ obj.w = concern.w;
+ obj.wtimeout = concern.wtimeout;
+ obj.j = concern.j;
+ }
+ return obj;
+}
+
+/**
+ * @ignore
+ */
+
+function waitForIndexes(_this, callback) {
+ if (_this.bucket.s.checkedIndexes) {
+ return callback(false);
+ }
+
+ _this.bucket.once('index', function() {
+ callback(true);
+ });
+
+ return true;
+}
+
+/**
+ * @ignore
+ */
+
+function writeRemnant(_this, callback) {
+ // Buffer is empty, so don't bother to insert
+ if (_this.pos === 0) {
+ return checkDone(_this, callback);
+ }
+
+ ++_this.state.outstandingRequests;
+
+ // Create a new buffer to make sure the buffer isn't bigger than it needs
+ // to be.
+ var remnant = new Buffer(_this.pos);
+ _this.bufToStore.copy(remnant, 0, 0, _this.pos);
+ _this.md5.update(remnant);
+ var doc = createChunkDoc(_this.id, _this.n, remnant);
+
+ // If the stream was aborted, do not write remnant
+ if (checkAborted(_this, callback)) {
+ return false;
+ }
+
+ _this.chunks.insert(doc, getWriteOptions(_this), function(error) {
+ if (error) {
+ return __handleError(_this, error);
+ }
+ --_this.state.outstandingRequests;
+ checkDone(_this);
+ });
+}
+
+/**
+ * @ignore
+ */
+
+function checkAborted(_this, callback) {
+ if (_this.state.aborted) {
+ if(typeof callback == 'function') {
+ callback(new Error('this stream has been aborted'));
+ }
+ return true;
+ }
+ return false;
+}
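`doWrite` above stages incoming buffers in a fixed-size buffer and flushes a chunk document to MongoDB whenever it fills; `writeRemnant` persists whatever partial chunk is left when the stream ends. A simplified, in-memory sketch of that splitting, ignoring the async inserts and the md5 bookkeeping (`splitIntoChunks` is our name, not a driver API):

```javascript
// Simplified version of the chunking in doWrite/writeRemnant: slice an
// input buffer into chunkSizeBytes pieces; the final piece may be short.
function splitIntoChunks(input, chunkSizeBytes) {
  var chunks = [];
  for (var pos = 0; pos < input.length; pos += chunkSizeBytes) {
    chunks.push(input.slice(pos, Math.min(pos + chunkSizeBytes, input.length)));
  }
  return chunks;
}
```

With the bucket default of 255 * 1024-byte chunks, a 600 KB file would yield two full chunks and one 90 KB remnant, matching the files/chunks layout the write stream produces.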
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs/chunk.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs/chunk.js
new file mode 100644
index 0000000..cbd3ee8
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs/chunk.js
@@ -0,0 +1,233 @@
+"use strict";
+
+var Binary = require('mongodb-core').BSON.Binary,
+ ObjectID = require('mongodb-core').BSON.ObjectID;
+
+/**
+ * Class for representing a single chunk in GridFS.
+ *
+ * @class
+ *
+ * @param file {GridStore} The {@link GridStore} object holding this chunk.
+ * @param mongoObject {object} The mongo object representation of this chunk.
+ *
+ * @throws Error when the type of data field for {@link mongoObject} is not
+ * supported. Currently supported types for data field are instances of
+ * {@link String}, {@link Array}, {@link Buffer} and {@link Binary}
+ * from the bson module
+ *
+ * @see Chunk#buildMongoObject
+ */
+var Chunk = function(file, mongoObject, writeConcern) {
+ if(!(this instanceof Chunk)) return new Chunk(file, mongoObject, writeConcern);
+
+ this.file = file;
+ var self = this;
+ var mongoObjectFinal = mongoObject == null ? {} : mongoObject;
+ this.writeConcern = writeConcern || {w:1};
+ this.objectId = mongoObjectFinal._id == null ? new ObjectID() : mongoObjectFinal._id;
+ this.chunkNumber = mongoObjectFinal.n == null ? 0 : mongoObjectFinal.n;
+ this.data = new Binary();
+
+ if(mongoObjectFinal.data == null) {
+ } else if(typeof mongoObjectFinal.data == "string") {
+ var buffer = new Buffer(mongoObjectFinal.data.length);
+ buffer.write(mongoObjectFinal.data, 0, mongoObjectFinal.data.length, 'binary');
+ this.data = new Binary(buffer);
+ } else if(Array.isArray(mongoObjectFinal.data)) {
+ var buffer = new Buffer(mongoObjectFinal.data.length);
+ var data = mongoObjectFinal.data.join('');
+ buffer.write(data, 0, data.length, 'binary');
+ this.data = new Binary(buffer);
+ } else if(mongoObjectFinal.data._bsontype === 'Binary') {
+ this.data = mongoObjectFinal.data;
+ } else if(Buffer.isBuffer(mongoObjectFinal.data)) {
+ } else {
+ throw Error("Illegal chunk format");
+ }
+
+ // Update position
+ this.internalPosition = 0;
+};
+
+/**
+ * Writes data to this object and advances the read/write head.
+ *
+ * @param data {string} the data to write
+ * @param callback {function(*, GridStore)} This will be called after executing
+ * this method. The first parameter will contain null and the second one
+ * will contain a reference to this object.
+ */
+Chunk.prototype.write = function(data, callback) {
+ this.data.write(data, this.internalPosition, data.length, 'binary');
+ this.internalPosition = this.data.length();
+ if(callback != null) return callback(null, this);
+ return this;
+};
+
+/**
+ * Reads data and advances the read/write head.
+ *
+ * @param length {number} The length of data to read.
+ *
+ * @return {string} The data read if the given length will not exceed the end of
+ * the chunk. Returns an empty String otherwise.
+ */
+Chunk.prototype.read = function(length) {
+ // Default to full read if no index defined
+ length = length == null || length == 0 ? this.length() : length;
+
+ if(this.length() - this.internalPosition + 1 >= length) {
+ var data = this.data.read(this.internalPosition, length);
+ this.internalPosition = this.internalPosition + length;
+ return data;
+ } else {
+ return '';
+ }
+};
+
+Chunk.prototype.readSlice = function(length) {
+ if ((this.length() - this.internalPosition) >= length) {
+ var data = null;
+ if (this.data.buffer != null) { //Pure BSON
+ data = this.data.buffer.slice(this.internalPosition, this.internalPosition + length);
+ } else { //Native BSON
+ data = new Buffer(length);
+ length = this.data.readInto(data, this.internalPosition);
+ }
+ this.internalPosition = this.internalPosition + length;
+ return data;
+ } else {
+ return null;
+ }
+};
+
+/**
+ * Checks if the read/write head is at the end.
+ *
+ * @return {boolean} Whether the read/write head has reached the end of this
+ * chunk.
+ */
+Chunk.prototype.eof = function() {
+ return this.internalPosition == this.length() ? true : false;
+};
+
+/**
+ * Reads one character from the data of this chunk and advances the read/write
+ * head.
+ *
+ * @return {string} a single character read if the read/write head is
+ * not at the end of the chunk. Returns an empty String otherwise.
+ */
+Chunk.prototype.getc = function() {
+ return this.read(1);
+};
+
+/**
+ * Clears the contents of the data in this chunk and resets the read/write head
+ * to the initial position.
+ */
+Chunk.prototype.rewind = function() {
+ this.internalPosition = 0;
+ this.data = new Binary();
+};
+
+/**
+ * Saves this chunk to the database. Also overwrites existing entries having the
+ * same id as this chunk.
+ *
+ * @param callback {function(*, GridStore)} This will be called after executing
+ * this method. The first parameter will contain null and the second one
+ * will contain a reference to this object.
+ */
+Chunk.prototype.save = function(options, callback) {
+ var self = this;
+ if(typeof options == 'function') {
+ callback = options;
+ options = {};
+ }
+
+ self.file.chunkCollection(function(err, collection) {
+ if(err) return callback(err);
+
+ // Merge the options
+ var writeOptions = { upsert: true };
+ for(var name in options) writeOptions[name] = options[name];
+ for(var name in self.writeConcern) writeOptions[name] = self.writeConcern[name];
+
+ if(self.data.length() > 0) {
+ self.buildMongoObject(function(mongoObject) {
+ collection.replaceOne({'_id':self.objectId}, mongoObject, writeOptions, function(err, collection) {
+ callback(err, self);
+ });
+ });
+ } else {
+ callback(null, self);
+ }
+ });
+};
+
+/**
+ * Creates a mongoDB object representation of this chunk.
+ *
+ * @param callback {function(Object)} This will be called after executing this
+ * method. The object will be passed to the first parameter and will have
+ * the structure:
+ *
+ *
+ * {
+ * '_id' : , // {ObjectID} id for this chunk
+ * 'files_id' : , // {ObjectID} foreign key to the file collection
+ * 'n' : , // {number} chunk number
+ * 'data' : , // {bson#Binary} the chunk data itself
+ * }
+ *
+ *
+ * @see MongoDB GridFS Chunk Object Structure
+ */
+Chunk.prototype.buildMongoObject = function(callback) {
+ var mongoObject = {
+ 'files_id': this.file.fileId,
+ 'n': this.chunkNumber,
+ 'data': this.data};
+ // If we are saving using a specific ObjectId
+ if(this.objectId != null) mongoObject._id = this.objectId;
+
+ callback(mongoObject);
+};
+
+/**
+ * @return {number} the length of the data
+ */
+Chunk.prototype.length = function() {
+ return this.data.length();
+};
+
+/**
+ * The position of the read/write head
+ * @name position
+ * @lends Chunk#
+ * @field
+ */
+Object.defineProperty(Chunk.prototype, "position", { enumerable: true
+ , get: function () {
+ return this.internalPosition;
+ }
+ , set: function(value) {
+ this.internalPosition = value;
+ }
+});
+
+/**
+ * The default chunk size
+ * @constant
+ */
+Chunk.DEFAULT_CHUNK_SIZE = 1024 * 255;
+
+module.exports = Chunk;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs/grid_store.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs/grid_store.js
new file mode 100644
index 0000000..93afa50
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/gridfs/grid_store.js
@@ -0,0 +1,1956 @@
+"use strict";
+
+/**
+ * @fileOverview GridFS is a tool for MongoDB to store files to the database.
+ * Because of the restrictions of the object size the database can hold, a
+ * facility to split a file into several chunks is needed. The {@link GridStore}
+ * class offers a simplified API to interact with files while managing the
+ * chunks of split files behind the scenes. More information about GridFS can be
+ * found here.
+ *
+ * @example
+ * var MongoClient = require('mongodb').MongoClient,
+ * GridStore = require('mongodb').GridStore,
+ * ObjectID = require('mongodb').ObjectID,
+ * test = require('assert');
+ *
+ * // Connection url
+ * var url = 'mongodb://localhost:27017/test';
+ * // Connect using MongoClient
+ * MongoClient.connect(url, function(err, db) {
+ * var gridStore = new GridStore(db, null, "w");
+ * gridStore.open(function(err, gridStore) {
+ * gridStore.write("hello world!", function(err, gridStore) {
+ * gridStore.close(function(err, result) {
+ *
+ * // Let's read the file using object Id
+ * GridStore.read(db, result._id, function(err, data) {
+ * test.equal('hello world!', data);
+ * db.close();
+ * test.done();
+ * });
+ * });
+ * });
+ * });
+ * });
+ */
+var Chunk = require('./chunk'),
+ ObjectID = require('mongodb-core').BSON.ObjectID,
+ ReadPreference = require('../read_preference'),
+ Buffer = require('buffer').Buffer,
+ Collection = require('../collection'),
+ fs = require('fs'),
+ timers = require('timers'),
+ f = require('util').format,
+ util = require('util'),
+ Define = require('../metadata'),
+ MongoError = require('mongodb-core').MongoError,
+ inherits = util.inherits,
+ Duplex = require('stream').Duplex || require('readable-stream').Duplex,
+ shallowClone = require('../utils').shallowClone;
+
+var REFERENCE_BY_FILENAME = 0,
+ REFERENCE_BY_ID = 1;
+
+/**
+ * Namespace provided by mongodb-core and node.js
+ * @external Duplex
+ */
+
+/**
+ * Create a new GridStore instance
+ *
+ * Modes
+ * - **"r"** - read only. This is the default mode.
+ * - **"w"** - write in truncate mode. Existing data will be overwritten.
+ *
+ * @class
+ * @param {Db} db A database instance to interact with.
+ * @param {object} [id] optional unique id for this file
+ * @param {string} [filename] optional filename for this file, no unique constraint on the field
+ * @param {string} mode set the mode for this file.
+ * @param {object} [options=null] Optional settings.
+ * @param {(number|string)} [options.w=null] The write concern.
+ * @param {number} [options.wtimeout=null] The write concern timeout.
+ * @param {boolean} [options.j=false] Specify a journal write concern.
+ * @param {boolean} [options.fsync=false] Specify a file sync write concern.
+ * @param {string} [options.root=null] Root collection to use. Defaults to **{GridStore.DEFAULT_ROOT_COLLECTION}**.
+ * @param {string} [options.content_type=null] MIME type of the file. Defaults to **{GridStore.DEFAULT_CONTENT_TYPE}**.
+ * @param {number} [options.chunk_size=261120] Size for the chunk. Defaults to **{Chunk.DEFAULT_CHUNK_SIZE}**.
+ * @param {object} [options.metadata=null] Arbitrary data the user wants to store.
+ * @param {object} [options.promiseLibrary=null] A Promise library class the application wishes to use such as Bluebird, must be ES6 compatible
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @property {number} chunkSize Get the gridstore chunk size.
+ * @property {number} md5 The md5 checksum for this file.
+ * @property {number} chunkNumber The current chunk number the gridstore has materialized into memory
+ * @return {GridStore} a GridStore instance.
+ * @deprecated Use GridFSBucket API instead
+ */
+var GridStore = function GridStore(db, id, filename, mode, options) {
+ if(!(this instanceof GridStore)) return new GridStore(db, id, filename, mode, options);
+ var self = this;
+ this.db = db;
+
+ // Handle options
+ if(typeof options === 'undefined') options = {};
+ // Handle mode
+ if(typeof mode === 'undefined') {
+ mode = filename;
+ filename = undefined;
+ } else if(typeof mode == 'object') {
+ options = mode;
+ mode = filename;
+ filename = undefined;
+ }
+
+ if(id instanceof ObjectID) {
+ this.referenceBy = REFERENCE_BY_ID;
+ this.fileId = id;
+ this.filename = filename;
+ } else if(typeof filename == 'undefined') {
+ this.referenceBy = REFERENCE_BY_FILENAME;
+ this.filename = id;
+ if (mode.indexOf('w') != -1) {
+ this.fileId = new ObjectID();
+ }
+ } else {
+ this.referenceBy = REFERENCE_BY_ID;
+ this.fileId = id;
+ this.filename = filename;
+ }
+
+ // Set up the rest
+ this.mode = mode == null ? "r" : mode;
+ this.options = options || {};
+
+ // Opened
+ this.isOpen = false;
+
+ // Set the root if overridden
+ this.root = this.options['root'] == null ? GridStore.DEFAULT_ROOT_COLLECTION : this.options['root'];
+ this.position = 0;
+ this.readPreference = this.options.readPreference || db.options.readPreference || ReadPreference.PRIMARY;
+ this.writeConcern = _getWriteConcern(db, this.options);
+ // Set default chunk size
+ this.internalChunkSize = this.options['chunkSize'] == null ? Chunk.DEFAULT_CHUNK_SIZE : this.options['chunkSize'];
+
+ // Get the promiseLibrary
+ var promiseLibrary = this.options.promiseLibrary;
+
+ // No promise library selected fall back
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // Set the promiseLibrary
+ this.promiseLibrary = promiseLibrary;
+
+ Object.defineProperty(this, "chunkSize", { enumerable: true
+ , get: function () {
+ return this.internalChunkSize;
+ }
+ , set: function(value) {
+ // The chunk size may only be changed before any data has been written
+ if(this.mode[0] == "w" && this.position == 0 && this.uploadDate == null) {
+ this.internalChunkSize = value;
+ }
+ }
+ });
+
+ Object.defineProperty(this, "md5", { enumerable: true
+ , get: function () {
+ return this.internalMd5;
+ }
+ });
+
+ Object.defineProperty(this, "chunkNumber", { enumerable: true
+ , get: function () {
+ return this.currentChunk && this.currentChunk.chunkNumber ? this.currentChunk.chunkNumber : null;
+ }
+ });
+}
+
+var define = GridStore.define = new Define('Gridstore', GridStore, true);
+
+/**
+ * The callback format for the Gridstore.open method
+ * @callback GridStore~openCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {GridStore} gridStore The GridStore instance if the open method was successful.
+ */
+
+/**
+ * Opens the file from the database and initializes this object. Also creates a
+ * new one if the file does not exist.
+ *
+ * @method
+ * @param {GridStore~openCallback} [callback] this will be called after executing this method
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.open = function(callback) {
+ var self = this;
+ if( this.mode != "w" && this.mode != "w+" && this.mode != "r"){
+ throw MongoError.create({message: "Illegal mode " + this.mode, driver:true});
+ }
+
+ // We provided a callback leg
+ if(typeof callback == 'function') return open(self, callback);
+ // Return promise
+ return new self.promiseLibrary(function(resolve, reject) {
+ open(self, function(err, store) {
+ if(err) return reject(err);
+ resolve(store);
+ })
+ });
+};
+
+var open = function(self, callback) {
+ // Get the write concern
+ var writeConcern = _getWriteConcern(self.db, self.options);
+
+ // If we are writing we need to ensure we have the right indexes for md5's
+ if((self.mode == "w" || self.mode == "w+")) {
+ // Get files collection
+ var collection = self.collection();
+ // Put index on filename
+ collection.ensureIndex([['filename', 1]], writeConcern, function(err, index) {
+ // Get chunk collection
+ var chunkCollection = self.chunkCollection();
+ // Make a unique index for compatibility with mongo-cxx-driver:legacy
+ var chunkIndexOptions = shallowClone(writeConcern);
+ chunkIndexOptions.unique = true;
+ // Ensure index on chunk collection
+ chunkCollection.ensureIndex([['files_id', 1], ['n', 1]], chunkIndexOptions, function(err, index) {
+ // Open the connection
+ _open(self, writeConcern, function(err, r) {
+ if(err) return callback(err);
+ self.isOpen = true;
+ callback(err, r);
+ });
+ });
+ });
+ } else {
+ // Open the gridstore
+ _open(self, writeConcern, function(err, r) {
+ if(err) return callback(err);
+ self.isOpen = true;
+ callback(err, r);
+ });
+ }
+}
+
+// Push the definition for open
+define.classMethod('open', {callback: true, promise:true});
+
+/**
+ * Verify if the file is at EOF.
+ *
+ * @method
+ * @return {boolean} true if the read/write head is at the end of this file.
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.eof = function() {
+ return this.position == this.length;
+}
+
+define.classMethod('eof', {callback: false, promise:false, returns: [Boolean]});
+
+/**
+ * The callback result format.
+ * @callback GridStore~resultCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {object} result The result from the callback.
+ */
+
+/**
+ * Retrieves a single character from this file.
+ *
+ * @method
+ * @param {GridStore~resultCallback} [callback] this gets called after this method is executed. Passes null to the first parameter and the character read to the second or null to the second if the read/write head is at the end of the file.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.getc = function(callback) {
+ var self = this;
+ // We provided a callback leg
+ if(typeof callback == 'function') return eof(self, callback);
+ // Return promise
+ return new self.promiseLibrary(function(resolve, reject) {
+ eof(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+}
+
+var eof = function(self, callback) {
+ if(self.eof()) {
+ callback(null, null);
+ } else if(self.currentChunk.eof()) {
+ nthChunk(self, self.currentChunk.chunkNumber + 1, function(err, chunk) {
+ self.currentChunk = chunk;
+ self.position = self.position + 1;
+ callback(err, self.currentChunk.getc());
+ });
+ } else {
+ self.position = self.position + 1;
+ callback(null, self.currentChunk.getc());
+ }
+}
+
+define.classMethod('getc', {callback: true, promise:true});
+
+/**
+ * Writes a string to the file with a newline character appended at the end if
+ * the given string does not have one.
+ *
+ * @method
+ * @param {string} string the string to write.
+ * @param {GridStore~resultCallback} [callback] this will be called after executing this method. The first parameter will contain null and the second one will contain a reference to this object.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.puts = function(string, callback) {
+ var self = this;
+ var finalString = string.match(/\n$/) == null ? string + "\n" : string;
+ // We provided a callback leg
+ if(typeof callback == 'function') return this.write(finalString, callback);
+ // Return promise
+ return new self.promiseLibrary(function(resolve, reject) {
+ self.write(finalString, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+}
+
+define.classMethod('puts', {callback: true, promise:true});
+
+/**
+ * Return a modified Readable stream including a possible transform method.
+ *
+ * @method
+ * @return {GridStoreStream}
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.stream = function() {
+ return new GridStoreStream(this);
+}
+
+define.classMethod('stream', {callback: false, promise:false, returns: [GridStoreStream]});
+
+/**
+ * Writes some data. This method will work properly only if initialized with mode "w" or "w+".
+ *
+ * @method
+ * @param {(string|Buffer)} data the data to write.
+ * @param {boolean} [close] closes this file after writing if set to true.
+ * @param {GridStore~resultCallback} [callback] this will be called after executing this method. The first parameter will contain null and the second one will contain a reference to this object.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
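+ *
+ * @example
+ * // Minimal usage sketch (not from this file); assumes `db` is an open Db
+ * // instance and the callback-based API is used rather than promises.
+ * var gridStore = new GridStore(db, 'file.txt', 'w');
+ * gridStore.open(function(err, gridStore) {
+ *   gridStore.write('some data', function(err, gridStore) {
+ *     gridStore.close(function(err, result) {});
+ *   });
+ * });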
+ */
+GridStore.prototype.write = function write(data, close, callback) {
+ var self = this;
+ // We provided a callback leg
+ if(typeof callback == 'function') return _writeNormal(this, data, close, callback);
+ // Return promise
+ return new self.promiseLibrary(function(resolve, reject) {
+ _writeNormal(self, data, close, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+}
+
+define.classMethod('write', {callback: true, promise:true});
+
+/**
+ * Handles the destroy part of a stream
+ *
+ * @method
+ * @result {null}
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.destroy = function destroy() {
+ // close and do not emit any more events. queued data is not sent.
+ if(!this.writable) return;
+ this.readable = false;
+ if(this.writable) {
+ this.writable = false;
+ this._q.length = 0;
+ this.emit('close');
+ }
+}
+
+define.classMethod('destroy', {callback: false, promise:false});
+
+/**
+ * Stores a file from the file system to the GridFS database.
+ *
+ * @method
+ * @param {(string|Buffer|FileHandle)} file the file to store.
+ * @param {GridStore~resultCallback} [callback] this will be called after executing this method. The first parameter will contain null and the second one will contain a reference to this object.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
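+ *
+ * @example
+ * // Illustrative sketch only; assumes `db` is an open Db instance and the
+ * // path '/tmp/example.txt' exists. writeFile opens and closes the store itself.
+ * var gridStore = new GridStore(db, 'example.txt', 'w');
+ * gridStore.writeFile('/tmp/example.txt', function(err, gridStore) {
+ *   // The file contents have been chunked and stored in GridFS
+ * });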
+ */
+GridStore.prototype.writeFile = function (file, callback) {
+ var self = this;
+ // We provided a callback leg
+ if(typeof callback == 'function') return writeFile(self, file, callback);
+ // Return promise
+ return new self.promiseLibrary(function(resolve, reject) {
+ writeFile(self, file, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+};
+
+var writeFile = function(self, file, callback) {
+ if (typeof file === 'string') {
+ fs.open(file, 'r', function (err, fd) {
+ if(err) return callback(err);
+ self.writeFile(fd, callback);
+ });
+ return;
+ }
+
+ self.open(function (err, self) {
+ if(err) return callback(err, self);
+
+ fs.fstat(file, function (err, stats) {
+ if(err) return callback(err, self);
+
+ var offset = 0;
+ var index = 0;
+ var numberOfChunksLeft = Math.ceil(stats.size / self.chunkSize);
+
+ // Write a chunk
+ var writeChunk = function() {
+ fs.read(file, self.chunkSize, offset, 'binary', function(err, data, bytesRead) {
+ if(err) return callback(err, self);
+
+ offset = offset + bytesRead;
+
+ // Create a new chunk for the data
+ var chunk = new Chunk(self, {n:index++}, self.writeConcern);
+ chunk.write(data, function(err, chunk) {
+ if(err) return callback(err, self);
+
+ chunk.save({}, function(err, result) {
+ if(err) return callback(err, self);
+
+ self.position = self.position + data.length;
+
+ // Point to current chunk
+ self.currentChunk = chunk;
+
+ if(offset >= stats.size) {
+ fs.close(file);
+ self.close(function(err, result) {
+ if(err) return callback(err, self);
+ return callback(null, self);
+ });
+ } else {
+ return process.nextTick(writeChunk);
+ }
+ });
+ });
+ });
+ }
+
+ // Process the first write
+ process.nextTick(writeChunk);
+ });
+ });
+}
+
+define.classMethod('writeFile', {callback: true, promise:true});
+
+/**
+ * Saves this file to the database. This will overwrite the old entry if it
+ * already exists. This will work properly only if mode was initialized to
+ * "w" or "w+".
+ *
+ * @method
+ * @param {GridStore~resultCallback} [callback] this will be called after executing this method. The first parameter will contain null and the second one will contain a reference to this object.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.close = function(callback) {
+ var self = this;
+ // We provided a callback leg
+ if(typeof callback == 'function') return close(self, callback);
+ // Return promise
+ return new self.promiseLibrary(function(resolve, reject) {
+ close(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+};
+
+var close = function(self, callback) {
+ if(self.mode[0] == "w") {
+ // Set up options
+ var options = self.writeConcern;
+
+ if(self.currentChunk != null && self.currentChunk.position > 0) {
+ self.currentChunk.save({}, function(err, chunk) {
+ if(err && typeof callback == 'function') return callback(err);
+
+ self.collection(function(err, files) {
+ if(err && typeof callback == 'function') return callback(err);
+
+ // Build the mongo object, setting the upload date if not already set
+ if(self.uploadDate == null) self.uploadDate = new Date();
+ buildMongoObject(self, function(err, mongoObject) {
+ if(err) {
+ if(typeof callback == 'function') return callback(err); else throw err;
+ }
+
+ files.save(mongoObject, options, function(err) {
+ if(typeof callback == 'function')
+ callback(err, mongoObject);
+ });
+ });
+ });
+ });
+ } else {
+ self.collection(function(err, files) {
+ if(err && typeof callback == 'function') return callback(err);
+
+ self.uploadDate = new Date();
+ buildMongoObject(self, function(err, mongoObject) {
+ if(err) {
+ if(typeof callback == 'function') return callback(err); else throw err;
+ }
+
+ files.save(mongoObject, options, function(err) {
+ if(typeof callback == 'function')
+ callback(err, mongoObject);
+ });
+ });
+ });
+ }
+ } else if(self.mode[0] == "r") {
+ if(typeof callback == 'function')
+ callback(null, null);
+ } else {
+ if(typeof callback == 'function')
+ callback(MongoError.create({message: f("Illegal mode %s", self.mode), driver:true}));
+ }
+}
+
+define.classMethod('close', {callback: true, promise:true});
+
+/**
+ * The collection callback format.
+ * @callback GridStore~collectionCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Collection} collection The collection from the command execution.
+ */
+
+/**
+ * Retrieve this file's chunks collection.
+ *
+ * @method
+ * @param {GridStore~collectionCallback} callback the command callback.
+ * @return {Collection}
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.chunkCollection = function(callback) {
+ if(typeof callback == 'function')
+ return this.db.collection((this.root + ".chunks"), callback);
+ return this.db.collection((this.root + ".chunks"));
+};
+
+define.classMethod('chunkCollection', {callback: true, promise:false, returns: [Collection]});
+
+/**
+ * Deletes all the chunks of this file in the database.
+ *
+ * @method
+ * @param {GridStore~resultCallback} [callback] the command callback.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.unlink = function(callback) {
+ var self = this;
+ // We provided a callback leg
+ if(typeof callback == 'function') return unlink(self, callback);
+ // Return promise
+ return new self.promiseLibrary(function(resolve, reject) {
+ unlink(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+};
+
+var unlink = function(self, callback) {
+ deleteChunks(self, function(err) {
+ if(err!==null) {
+ err.message = "at deleteChunks: " + err.message;
+ return callback(err);
+ }
+
+ self.collection(function(err, collection) {
+ if(err!==null) {
+ err.message = "at collection: " + err.message;
+ return callback(err);
+ }
+
+ collection.remove({'_id':self.fileId}, self.writeConcern, function(err) {
+ callback(err, self);
+ });
+ });
+ });
+}
+
+define.classMethod('unlink', {callback: true, promise:true});
+
+/**
+ * Retrieves the file collection associated with this object.
+ *
+ * @method
+ * @param {GridStore~collectionCallback} callback the command callback.
+ * @return {Collection}
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.collection = function(callback) {
+ if(typeof callback == 'function')
+ return this.db.collection(this.root + ".files", callback);
+ return this.db.collection(this.root + ".files");
+};
+
+define.classMethod('collection', {callback: true, promise:false, returns: [Collection]});
+
+/**
+ * The readlines callback format.
+ * @callback GridStore~readlinesCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {string[]} strings The array of strings returned.
+ */
+
+/**
+ * Read the entire file as a list of strings splitting by the provided separator.
+ *
+ * @method
+ * @param {string} [separator] The character to be recognized as the newline separator.
+ * @param {GridStore~readlinesCallback} [callback] the command callback.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.readlines = function(separator, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 0);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ separator = args.length ? args.shift() : "\n";
+ separator = separator || "\n";
+
+ // We provided a callback leg
+ if(typeof callback == 'function') return readlines(self, separator, callback);
+
+ // Return promise
+ return new self.promiseLibrary(function(resolve, reject) {
+ readlines(self, separator, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+};
+
+var readlines = function(self, separator, callback) {
+ self.read(function(err, data) {
+ if(err) return callback(err);
+
+ var items = data.toString().split(separator);
+ items = items.length > 0 ? items.splice(0, items.length - 1) : [];
+ for(var i = 0; i < items.length; i++) {
+ items[i] = items[i] + separator;
+ }
+
+ callback(null, items);
+ });
+}
+
+define.classMethod('readlines', {callback: true, promise:true});
+
+/**
+ * Deletes all the chunks of this file in the database if mode was set to "w" or
+ * "w+" and resets the read/write head to the initial position.
+ *
+ * @method
+ * @param {GridStore~resultCallback} [callback] this will be called after executing this method. The first parameter will contain null and the second one will contain a reference to this object.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.rewind = function(callback) {
+ var self = this;
+ // We provided a callback leg
+ if(typeof callback == 'function') return rewind(self, callback);
+ // Return promise
+ return new self.promiseLibrary(function(resolve, reject) {
+ rewind(self, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+};
+
+var rewind = function(self, callback) {
+ if(self.currentChunk.chunkNumber != 0) {
+ if(self.mode[0] == "w") {
+ deleteChunks(self, function(err, gridStore) {
+ if(err) return callback(err);
+ self.currentChunk = new Chunk(self, {'n': 0}, self.writeConcern);
+ self.position = 0;
+ callback(null, self);
+ });
+ } else {
+ nthChunk(self, 0, function(err, chunk) {
+ if(err) return callback(err);
+ self.currentChunk = chunk;
+ self.currentChunk.rewind();
+ self.position = 0;
+ callback(null, self);
+ });
+ }
+ } else {
+ self.currentChunk.rewind();
+ self.position = 0;
+ callback(null, self);
+ }
+}
+
+define.classMethod('rewind', {callback: true, promise:true});
+
+/**
+ * The read callback format.
+ * @callback GridStore~readCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Buffer} data The data read from the GridStore object
+ */
+
+/**
+ * Retrieves the contents of this file and advances the read/write head. Works with Buffers only.
+ *
+ * There are 3 signatures for this method:
+ *
+ * (callback)
+ * (length, callback)
+ * (length, buffer, callback)
+ *
+ * @method
+ * @param {number} [length] the number of characters to read. Reads all the characters from the read/write head to the EOF if not specified.
+ * @param {(string|Buffer)} [buffer] a string to hold temporary data. This is used for storing the string data read so far when recursively calling this method.
+ * @param {GridStore~readCallback} [callback] the command callback.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
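+ *
+ * @example
+ * // Illustrative sketch only; assumes `gridStore` has been opened in mode "r".
+ * gridStore.read(5, function(err, data) {
+ *   // data is a Buffer holding the next 5 bytes from the read head
+ * });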
+ */
+GridStore.prototype.read = function(length, buffer, callback) {
+ var self = this;
+
+ var args = Array.prototype.slice.call(arguments, 0);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ length = args.length ? args.shift() : null;
+ buffer = args.length ? args.shift() : null;
+ // We provided a callback leg
+ if(typeof callback == 'function') return read(self, length, buffer, callback);
+ // Return promise
+ return new self.promiseLibrary(function(resolve, reject) {
+ read(self, length, buffer, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+}
+
+var read = function(self, length, buffer, callback) {
+ // Default to reading from the current position to the end of the file
+ var finalLength = length == null ? self.length - self.position : length;
+ var finalBuffer = buffer == null ? new Buffer(finalLength) : buffer;
+ // Add a index to buffer to keep track of writing position or apply current index
+ finalBuffer._index = buffer != null && buffer._index != null ? buffer._index : 0;
+
+ if((self.currentChunk.length() - self.currentChunk.position + finalBuffer._index) >= finalLength) {
+ var slice = self.currentChunk.readSlice(finalLength - finalBuffer._index);
+ // Copy content to final buffer
+ slice.copy(finalBuffer, finalBuffer._index);
+ // Update internal position
+ self.position = self.position + finalBuffer.length;
+ // Check if we don't have a file at all
+ if(finalLength == 0 && finalBuffer.length == 0) return callback(MongoError.create({message: "File does not exist", driver:true}), null);
+ // Else return data
+ return callback(null, finalBuffer);
+ }
+
+ // Read the next chunk
+ var slice = self.currentChunk.readSlice(self.currentChunk.length() - self.currentChunk.position);
+ // Copy content to final buffer
+ slice.copy(finalBuffer, finalBuffer._index);
+ // Update index position
+ finalBuffer._index += slice.length;
+
+ // Load next chunk and read more
+ nthChunk(self, self.currentChunk.chunkNumber + 1, function(err, chunk) {
+ if(err) return callback(err);
+
+ if(chunk.length() > 0) {
+ self.currentChunk = chunk;
+ self.read(length, finalBuffer, callback);
+ } else {
+ if(finalBuffer._index > 0) {
+ callback(null, finalBuffer)
+ } else {
+ callback(MongoError.create({message: "no chunks found for file, possibly corrupt", driver:true}), null);
+ }
+ }
+ });
+}
+
+define.classMethod('read', {callback: true, promise:true});
+
+/**
+ * The tell callback format.
+ * @callback GridStore~tellCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {number} position The current read position in the GridStore.
+ */
+
+/**
+ * Retrieves the position of the read/write head of this file.
+ *
+ * @method
+ * @param {GridStore~tellCallback} [callback] the command callback.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.tell = function(callback) {
+ var self = this;
+ // We provided a callback leg
+ if(typeof callback == 'function') return callback(null, this.position);
+ // Return promise
+ return new self.promiseLibrary(function(resolve, reject) {
+ resolve(self.position);
+ });
+};
+
+define.classMethod('tell', {callback: true, promise:true});
+
+/**
+ * The gridStore callback format.
+ * @callback GridStore~gridStoreCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {GridStore} gridStore The gridStore.
+ */
+
+/**
+ * Moves the read/write head to a new location.
+ *
+ * There are 3 signatures for this method
+ *
+ * Seek Location Modes
+ * - **GridStore.IO_SEEK_SET**, **(default)** set the position from the start of the file.
+ * - **GridStore.IO_SEEK_CUR**, set the position from the current position in the file.
+ * - **GridStore.IO_SEEK_END**, set the position from the end of the file.
+ *
+ * @method
+ * @param {number} [position] the position to seek to
+ * @param {number} [seekLocation] seek mode. Use one of the Seek Location modes.
+ * @param {GridStore~gridStoreCallback} [callback] the command callback.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.prototype.seek = function(position, seekLocation, callback) {
+ var self = this;
+
+ var args = Array.prototype.slice.call(arguments, 1);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ seekLocation = args.length ? args.shift() : null;
+
+ // We provided a callback leg
+ if(typeof callback == 'function') return seek(self, position, seekLocation, callback);
+ // Return promise
+ return new self.promiseLibrary(function(resolve, reject) {
+ seek(self, position, seekLocation, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+}
+
+var seek = function(self, position, seekLocation, callback) {
+ // Seek only supports read mode
+ if(self.mode != 'r') {
+ return callback(MongoError.create({message: "seek is only supported for mode r", driver:true}))
+ }
+
+ var seekLocationFinal = seekLocation == null ? GridStore.IO_SEEK_SET : seekLocation;
+ var finalPosition = position;
+ var targetPosition = 0;
+
+ // Calculate the position
+ if(seekLocationFinal == GridStore.IO_SEEK_CUR) {
+ targetPosition = self.position + finalPosition;
+ } else if(seekLocationFinal == GridStore.IO_SEEK_END) {
+ targetPosition = self.length + finalPosition;
+ } else {
+ targetPosition = finalPosition;
+ }
+
+ // Get the chunk
+ var newChunkNumber = Math.floor(targetPosition/self.chunkSize);
+ var seekChunk = function() {
+ nthChunk(self, newChunkNumber, function(err, chunk) {
+ if(err) return callback(err, null);
+ if(chunk == null) return callback(MongoError.create({message: 'no chunk found', driver:true}));
+
+ // Set the current chunk
+ self.currentChunk = chunk;
+ self.position = targetPosition;
+ self.currentChunk.position = (self.position % self.chunkSize);
+ callback(err, self);
+ });
+ };
+
+ seekChunk();
+}
+
+define.classMethod('seek', {callback: true, promise:true});
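+// Usage sketch (illustrative only; assumes an open `db` handle and an existing
+// file, and note that GridStore is deprecated in favor of GridFSBucket):
+//
+//   new GridStore(db, 'file.txt', 'r').open(function(err, gs) {
+//     // Move the head 10 bytes back from the end of the file
+//     gs.seek(-10, GridStore.IO_SEEK_END, function(err, gs) {
+//       gs.read(10, function(err, data) { /* data: last 10 bytes */ });
+//     });
+//   });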
+
+/**
+ * @ignore
+ */
+var _open = function(self, options, callback) {
+ var collection = self.collection();
+ // Create the query
+ var query = self.referenceBy == REFERENCE_BY_ID ? {_id:self.fileId} : {filename:self.filename};
+ query = null == self.fileId && self.filename == null ? null : query;
+ options.readPreference = self.readPreference;
+
+ // Fetch the chunks
+ if(query != null) {
+ collection.findOne(query, options, function(err, doc) {
+ if(err) return error(err);
+
+ // Check if the file document exists, otherwise prepare a new one
+ if(doc != null) {
+ self.fileId = doc._id;
+ // Prefer a new filename over the existing one if this is a write
+ self.filename = ((self.mode == 'r') || (self.filename == undefined)) ? doc.filename : self.filename;
+ self.contentType = doc.contentType;
+ self.internalChunkSize = doc.chunkSize;
+ self.uploadDate = doc.uploadDate;
+ self.aliases = doc.aliases;
+ self.length = doc.length;
+ self.metadata = doc.metadata;
+ self.internalMd5 = doc.md5;
+ } else if (self.mode != 'r') {
+ self.fileId = self.fileId == null ? new ObjectID() : self.fileId;
+ self.contentType = GridStore.DEFAULT_CONTENT_TYPE;
+ self.internalChunkSize = self.internalChunkSize == null ? Chunk.DEFAULT_CHUNK_SIZE : self.internalChunkSize;
+ self.length = 0;
+ } else {
+ self.length = 0;
+ var txtId = self.fileId instanceof ObjectID ? self.fileId.toHexString() : self.fileId;
+ return error(MongoError.create({message: f("file with id %s not opened for writing", (self.referenceBy == REFERENCE_BY_ID ? txtId : self.filename)), driver:true}), self);
+ }
+
+ // Process the mode of the object
+ if(self.mode == "r") {
+ nthChunk(self, 0, options, function(err, chunk) {
+ if(err) return error(err);
+ self.currentChunk = chunk;
+ self.position = 0;
+ callback(null, self);
+ });
+ } else if(self.mode == "w" && doc) {
+ // Delete any existing chunks
+ deleteChunks(self, options, function(err, result) {
+ if(err) return error(err);
+ self.currentChunk = new Chunk(self, {'n':0}, self.writeConcern);
+ self.contentType = self.options['content_type'] == null ? self.contentType : self.options['content_type'];
+ self.internalChunkSize = self.options['chunk_size'] == null ? self.internalChunkSize : self.options['chunk_size'];
+ self.metadata = self.options['metadata'] == null ? self.metadata : self.options['metadata'];
+ self.aliases = self.options['aliases'] == null ? self.aliases : self.options['aliases'];
+ self.position = 0;
+ callback(null, self);
+ });
+ } else if(self.mode == "w") {
+ self.currentChunk = new Chunk(self, {'n':0}, self.writeConcern);
+ self.contentType = self.options['content_type'] == null ? self.contentType : self.options['content_type'];
+ self.internalChunkSize = self.options['chunk_size'] == null ? self.internalChunkSize : self.options['chunk_size'];
+ self.metadata = self.options['metadata'] == null ? self.metadata : self.options['metadata'];
+ self.aliases = self.options['aliases'] == null ? self.aliases : self.options['aliases'];
+ self.position = 0;
+ callback(null, self);
+ } else if(self.mode == "w+") {
+ nthChunk(self, lastChunkNumber(self), options, function(err, chunk) {
+ if(err) return error(err);
+ // Set the current chunk
+ self.currentChunk = chunk == null ? new Chunk(self, {'n':0}, self.writeConcern) : chunk;
+ self.currentChunk.position = self.currentChunk.data.length();
+ self.metadata = self.options['metadata'] == null ? self.metadata : self.options['metadata'];
+ self.aliases = self.options['aliases'] == null ? self.aliases : self.options['aliases'];
+ self.position = self.length;
+ callback(null, self);
+ });
+ }
+ });
+ } else {
+ // Write only mode
+ self.fileId = null == self.fileId ? new ObjectID() : self.fileId;
+ self.contentType = GridStore.DEFAULT_CONTENT_TYPE;
+ self.internalChunkSize = self.internalChunkSize == null ? Chunk.DEFAULT_CHUNK_SIZE : self.internalChunkSize;
+ self.length = 0;
+
+ // No file exists; set up write mode
+ if(self.mode == "w") {
+ // Delete any existing chunks
+ deleteChunks(self, options, function(err, result) {
+ if(err) return error(err);
+ self.currentChunk = new Chunk(self, {'n':0}, self.writeConcern);
+ self.contentType = self.options['content_type'] == null ? self.contentType : self.options['content_type'];
+ self.internalChunkSize = self.options['chunk_size'] == null ? self.internalChunkSize : self.options['chunk_size'];
+ self.metadata = self.options['metadata'] == null ? self.metadata : self.options['metadata'];
+ self.aliases = self.options['aliases'] == null ? self.aliases : self.options['aliases'];
+ self.position = 0;
+ callback(null, self);
+ });
+ } else if(self.mode == "w+") {
+ nthChunk(self, lastChunkNumber(self), options, function(err, chunk) {
+ if(err) return error(err);
+ // Set the current chunk
+ self.currentChunk = chunk == null ? new Chunk(self, {'n':0}, self.writeConcern) : chunk;
+ self.currentChunk.position = self.currentChunk.data.length();
+ self.metadata = self.options['metadata'] == null ? self.metadata : self.options['metadata'];
+ self.aliases = self.options['aliases'] == null ? self.aliases : self.options['aliases'];
+ self.position = self.length;
+ callback(null, self);
+ });
+ }
+ }
+
+ // only pass error to callback once
+ function error (err) {
+ if(error.err) return;
+ callback(error.err = err);
+ }
+};
+
+/**
+ * @ignore
+ */
+var writeBuffer = function(self, buffer, close, callback) {
+ if(typeof close === "function") { callback = close; close = null; }
+ var finalClose = typeof close == 'boolean' ? close : false;
+
+ if(self.mode != "w") {
+ callback(MongoError.create({message: f("file with id %s not opened for writing", (self.referenceBy == REFERENCE_BY_ID ? self.fileId : self.filename)), driver:true}), null);
+ } else {
+ if(self.currentChunk.position + buffer.length >= self.chunkSize) {
+ // Write out the current chunk, then keep writing new chunks
+ // until less than a chunkSize of data is left over
+ var previousChunkNumber = self.currentChunk.chunkNumber;
+ var leftOverDataSize = self.chunkSize - self.currentChunk.position;
+ var firstChunkData = buffer.slice(0, leftOverDataSize);
+ var leftOverData = buffer.slice(leftOverDataSize);
+ // A list of chunks to write out
+ var chunksToWrite = [self.currentChunk.write(firstChunkData)];
+ // If we have more data left than the chunk size let's keep writing new chunks
+ while(leftOverData.length >= self.chunkSize) {
+ // Create a new chunk and write to it
+ var newChunk = new Chunk(self, {'n': (previousChunkNumber + 1)}, self.writeConcern);
+ firstChunkData = leftOverData.slice(0, self.chunkSize);
+ leftOverData = leftOverData.slice(self.chunkSize);
+ // Update chunk number
+ previousChunkNumber = previousChunkNumber + 1;
+ // Write data
+ newChunk.write(firstChunkData);
+ // Push chunk to save list
+ chunksToWrite.push(newChunk);
+ }
+
+ // Set current chunk with remaining data
+ self.currentChunk = new Chunk(self, {'n': (previousChunkNumber + 1)}, self.writeConcern);
+ // If we have left over data write it
+ if(leftOverData.length > 0) self.currentChunk.write(leftOverData);
+
+ // Update the position for the gridstore
+ self.position = self.position + buffer.length;
+ // Total number of chunks to write
+ var numberOfChunksToWrite = chunksToWrite.length;
+
+ for(var i = 0; i < chunksToWrite.length; i++) {
+ chunksToWrite[i].save({}, function(err, result) {
+ if(err) return callback(err);
+
+ numberOfChunksToWrite = numberOfChunksToWrite - 1;
+
+ if(numberOfChunksToWrite <= 0) {
+ // We are closing the file before returning
+ if(finalClose) {
+ return self.close(function(err, result) {
+ callback(err, self);
+ });
+ }
+
+ // Return normally
+ return callback(null, self);
+ }
+ });
+ }
+ } else {
+ // Update the position for the gridstore
+ self.position = self.position + buffer.length;
+ // We have less data than the chunk size just write it and callback
+ self.currentChunk.write(buffer);
+ // We are closing the file before returning
+ if(finalClose) {
+ return self.close(function(err, result) {
+ callback(err, self);
+ });
+ }
+ // Return normally
+ return callback(null, self);
+ }
+ }
+};
+
+/**
+ * Creates a mongoDB object representation of this object.
+ *
+ *
+ * {
+ * '_id' : , // {ObjectID} id for this file
+ * 'filename' : , // {string} name for this file
+ * 'contentType' : , // {string} mime type for this file
+ * 'length' : , // {number} size of this file
+ * 'chunkSize' : , // {number} chunk size used by this file
+ * 'uploadDate' : , // {Date}
+ * 'aliases' : , // {array of string}
+ * 'metadata' : , // {object}
+ * }
+ *
+ *
+ * @ignore
+ */
+var buildMongoObject = function(self, callback) {
+ // Build the mongoDB object representation
+ var mongoObject = {
+ '_id': self.fileId,
+ 'filename': self.filename,
+ 'contentType': self.contentType,
+ 'length': self.position ? self.position : 0,
+ 'chunkSize': self.chunkSize,
+ 'uploadDate': self.uploadDate,
+ 'aliases': self.aliases,
+ 'metadata': self.metadata
+ };
+
+ var md5Command = {filemd5:self.fileId, root:self.root};
+ self.db.command(md5Command, function(err, results) {
+ if(err) return callback(err);
+
+ mongoObject.md5 = results.md5;
+ callback(null, mongoObject);
+ });
+};
+
+/**
+ * Gets the nth chunk of this file.
+ * @ignore
+ */
+var nthChunk = function(self, chunkNumber, options, callback) {
+ if(typeof options == 'function') {
+ callback = options;
+ options = {};
+ }
+
+ options = options || self.writeConcern;
+ options.readPreference = self.readPreference;
+ // Get the nth chunk
+ self.chunkCollection().findOne({'files_id':self.fileId, 'n':chunkNumber}, options, function(err, chunk) {
+ if(err) return callback(err);
+
+ var finalChunk = chunk == null ? {} : chunk;
+ callback(null, new Chunk(self, finalChunk, self.writeConcern));
+ });
+};
+
+/**
+ * @ignore
+ */
+var lastChunkNumber = function(self) {
+ return Math.floor((self.length ? self.length - 1 : 0)/self.chunkSize);
+};
+
+/**
+ * Deletes all the chunks of this file in the database.
+ *
+ * @ignore
+ */
+var deleteChunks = function(self, options, callback) {
+ if(typeof options == 'function') {
+ callback = options;
+ options = {};
+ }
+
+ options = options || self.writeConcern;
+
+ if(self.fileId != null) {
+ self.chunkCollection().remove({'files_id':self.fileId}, options, function(err, result) {
+ if(err) return callback(err, false);
+ callback(null, true);
+ });
+ } else {
+ callback(null, true);
+ }
+};
+
+/**
+* The default root collection used for holding the files and chunks collections.
+*
+* @classconstant DEFAULT_ROOT_COLLECTION
+**/
+GridStore.DEFAULT_ROOT_COLLECTION = 'fs';
+
+/**
+* Default file mime type
+*
+* @classconstant DEFAULT_CONTENT_TYPE
+**/
+GridStore.DEFAULT_CONTENT_TYPE = 'binary/octet-stream';
+
+/**
+* Seek mode where the given length is absolute.
+*
+* @classconstant IO_SEEK_SET
+**/
+GridStore.IO_SEEK_SET = 0;
+
+/**
+* Seek mode where the given length is an offset to the current read/write head.
+*
+* @classconstant IO_SEEK_CUR
+**/
+GridStore.IO_SEEK_CUR = 1;
+
+/**
+* Seek mode where the given length is an offset to the end of the file.
+*
+* @classconstant IO_SEEK_END
+**/
+GridStore.IO_SEEK_END = 2;
+
+/**
+ * Checks if a file exists in the database.
+ *
+ * @method
+ * @static
+ * @param {Db} db the database to query.
+ * @param {(string|object)} fileIdObject The filename, file id, or query object to look for.
+ * @param {string} [rootCollection] The root collection that holds the files and chunks collection. Defaults to **{GridStore.DEFAULT_ROOT_COLLECTION}**.
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {object} [options.promiseLibrary=null] A Promise library class the application wishes to use such as Bluebird, must be ES6 compatible
+ * @param {GridStore~resultCallback} [callback] the command callback.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.exist = function(db, fileIdObject, rootCollection, options, callback) {
+ var args = Array.prototype.slice.call(arguments, 2);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ rootCollection = args.length ? args.shift() : null;
+ options = args.length ? args.shift() : {};
+ options = options || {};
+
+ // Get the promiseLibrary
+ var promiseLibrary = options.promiseLibrary;
+
+ // No promise library selected fall back
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // We provided a callback leg
+ if(typeof callback == 'function') return exists(db, fileIdObject, rootCollection, options, callback);
+ // Return promise
+ return new promiseLibrary(function(resolve, reject) {
+ exists(db, fileIdObject, rootCollection, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+};
+
+var exists = function(db, fileIdObject, rootCollection, options, callback) {
+ // Establish read preference
+ var readPreference = options.readPreference || ReadPreference.PRIMARY;
+ // Fetch collection
+ var rootCollectionFinal = rootCollection != null ? rootCollection : GridStore.DEFAULT_ROOT_COLLECTION;
+ db.collection(rootCollectionFinal + ".files", function(err, collection) {
+ if(err) return callback(err);
+
+ // Build query
+ var query = (typeof fileIdObject == 'string' || Object.prototype.toString.call(fileIdObject) == '[object RegExp]' )
+ ? {'filename':fileIdObject}
+ : {'_id':fileIdObject}; // Attempt to locate file
+
+ // We have a specific query
+ if(fileIdObject != null
+ && typeof fileIdObject == 'object'
+ && Object.prototype.toString.call(fileIdObject) != '[object RegExp]') {
+ query = fileIdObject;
+ }
+
+ // Check if the entry exists
+ collection.findOne(query, {readPreference:readPreference}, function(err, item) {
+ if(err) return callback(err);
+ callback(null, item == null ? false : true);
+ });
+ });
+}
+
+define.staticMethod('exist', {callback: true, promise:true});
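+// Usage sketch (illustrative only; `db` is assumed to be a connected Db instance):
+//
+//   // By filename, using the callback leg
+//   GridStore.exist(db, 'file.txt', function(err, exists) { /* exists: boolean */ });
+//
+//   // By ObjectID, using the promise leg
+//   GridStore.exist(db, fileId).then(function(exists) { /* ... */ });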
+
+/**
+ * Gets the list of files stored in the GridFS.
+ *
+ * @method
+ * @static
+ * @param {Db} db the database to query.
+ * @param {string} [rootCollection] The root collection that holds the files and chunks collection. Defaults to **{GridStore.DEFAULT_ROOT_COLLECTION}**.
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {object} [options.promiseLibrary=null] A Promise library class the application wishes to use such as Bluebird, must be ES6 compatible
+ * @param {GridStore~resultCallback} [callback] the command callback.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.list = function(db, rootCollection, options, callback) {
+ var args = Array.prototype.slice.call(arguments, 1);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ rootCollection = args.length ? args.shift() : null;
+ options = args.length ? args.shift() : {};
+ options = options || {};
+
+ // Get the promiseLibrary
+ var promiseLibrary = options.promiseLibrary;
+
+ // No promise library selected fall back
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // We provided a callback leg
+ if(typeof callback == 'function') return list(db, rootCollection, options, callback);
+ // Return promise
+ return new promiseLibrary(function(resolve, reject) {
+ list(db, rootCollection, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+};
+
+var list = function(db, rootCollection, options, callback) {
+ // Ensure we have correct values
+ if(rootCollection != null && typeof rootCollection == 'object') {
+ options = rootCollection;
+ rootCollection = null;
+ }
+
+ // Establish read preference
+ var readPreference = options.readPreference || ReadPreference.PRIMARY;
+ // Check if we are returning by id not filename
+ var byId = options['id'] != null ? options['id'] : false;
+ // Fetch item
+ var rootCollectionFinal = rootCollection != null ? rootCollection : GridStore.DEFAULT_ROOT_COLLECTION;
+ var items = [];
+ db.collection((rootCollectionFinal + ".files"), function(err, collection) {
+ if(err) return callback(err);
+
+ collection.find({}, {readPreference:readPreference}, function(err, cursor) {
+ if(err) return callback(err);
+
+ cursor.each(function(err, item) {
+ if(item != null) {
+ items.push(byId ? item._id : item.filename);
+ } else {
+ callback(err, items);
+ }
+ });
+ });
+ });
+}
+
+define.staticMethod('list', {callback: true, promise:true});
+
+/**
+ * Reads the contents of a file.
+ *
+ * This method has the following signatures
+ *
+ * (db, name, callback)
+ * (db, name, length, callback)
+ * (db, name, length, offset, callback)
+ * (db, name, length, offset, options, callback)
+ *
+ * @method
+ * @static
+ * @param {Db} db the database to query.
+ * @param {string} name The name of the file.
+ * @param {number} [length] The size of data to read.
+ * @param {number} [offset] The offset from the start of the file at which to begin reading.
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {object} [options.promiseLibrary=null] A Promise library class the application wishes to use such as Bluebird, must be ES6 compatible
+ * @param {GridStore~readCallback} [callback] the command callback.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.read = function(db, name, length, offset, options, callback) {
+ var args = Array.prototype.slice.call(arguments, 2);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ length = args.length ? args.shift() : null;
+ offset = args.length ? args.shift() : null;
+ options = args.length ? args.shift() : null;
+ options = options || {};
+
+ // Get the promiseLibrary
+ var promiseLibrary = options ? options.promiseLibrary : null;
+
+ // No promise library selected fall back
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // We provided a callback leg
+ if(typeof callback == 'function') return readStatic(db, name, length, offset, options, callback);
+ // Return promise
+ return new promiseLibrary(function(resolve, reject) {
+ readStatic(db, name, length, offset, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+};
+
+var readStatic = function(db, name, length, offset, options, callback) {
+ new GridStore(db, name, "r", options).open(function(err, gridStore) {
+ if(err) return callback(err);
+ // Make sure we are not reading out of bounds
+ if(offset && offset >= gridStore.length) return callback(MongoError.create({message: "offset is larger than the size of the file", driver:true}), null);
+ if(length && length > gridStore.length) return callback(MongoError.create({message: "length is larger than the size of the file", driver:true}), null);
+ if(offset && length && (offset + length) > gridStore.length) return callback(MongoError.create({message: "offset and length are larger than the size of the file", driver:true}), null);
+
+ if(offset != null) {
+ gridStore.seek(offset, function(err, gridStore) {
+ if(err) return callback(err);
+ gridStore.read(length, callback);
+ });
+ } else {
+ gridStore.read(length, callback);
+ }
+ });
+}
+
+define.staticMethod('read', {callback: true, promise:true});
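+// Usage sketch (illustrative only; assumes `db` holds a GridFS file named 'file.txt'):
+//
+//   // Read 100 bytes starting at offset 50
+//   GridStore.read(db, 'file.txt', 100, 50, function(err, data) {
+//     // `data` is a Buffer of length 100
+//   });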
+
+/**
+ * Read the entire file as a list of strings splitting by the provided separator.
+ *
+ * @method
+ * @static
+ * @param {Db} db the database to query.
+ * @param {(String|object)} name the name of the file.
+ * @param {string} [separator] The character to be recognized as the newline separator.
+ * @param {object} [options=null] Optional settings.
+ * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
+ * @param {object} [options.promiseLibrary=null] A Promise library class the application wishes to use such as Bluebird, must be ES6 compatible
+ * @param {GridStore~readlinesCallback} [callback] the command callback.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.readlines = function(db, name, separator, options, callback) {
+ var args = Array.prototype.slice.call(arguments, 2);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ separator = args.length ? args.shift() : null;
+ options = args.length ? args.shift() : null;
+ options = options || {};
+
+ // Get the promiseLibrary
+ var promiseLibrary = options ? options.promiseLibrary : null;
+
+ // No promise library selected fall back
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // We provided a callback leg
+ if(typeof callback == 'function') return readlinesStatic(db, name, separator, options, callback);
+ // Return promise
+ return new promiseLibrary(function(resolve, reject) {
+ readlinesStatic(db, name, separator, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+};
+
+var readlinesStatic = function(db, name, separator, options, callback) {
+ var finalSeparator = separator == null ? "\n" : separator;
+ new GridStore(db, name, "r", options).open(function(err, gridStore) {
+ if(err) return callback(err);
+ gridStore.readlines(finalSeparator, callback);
+ });
+}
+
+define.staticMethod('readlines', {callback: true, promise:true});
+
+/**
+ * Deletes the chunks and metadata information of a file from GridFS.
+ *
+ * @method
+ * @static
+ * @param {Db} db The database to query.
+ * @param {(string|array)} names The name/names of the files to delete.
+ * @param {object} [options=null] Optional settings.
+ * @param {object} [options.promiseLibrary=null] A Promise library class the application wishes to use such as Bluebird, must be ES6 compatible
+ * @param {GridStore~resultCallback} [callback] the command callback.
+ * @return {Promise} returns Promise if no callback passed
+ * @deprecated Use GridFSBucket API instead
+ */
+GridStore.unlink = function(db, names, options, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 2);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ options = args.length ? args.shift() : {};
+ options = options || {};
+
+ // Get the promiseLibrary
+ var promiseLibrary = options.promiseLibrary;
+
+ // No promise library selected fall back
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // We provided a callback leg
+ if(typeof callback == 'function') return unlinkStatic(self, db, names, options, callback);
+
+ // Return promise
+ return new promiseLibrary(function(resolve, reject) {
+ unlinkStatic(self, db, names, options, function(err, r) {
+ if(err) return reject(err);
+ resolve(r);
+ })
+ });
+};
+
+var unlinkStatic = function(self, db, names, options, callback) {
+ // Get the write concern
+ var writeConcern = _getWriteConcern(db, options);
+
+ // List of names
+ if(names.constructor == Array) {
+ var tc = 0;
+ for(var i = 0; i < names.length; i++) {
+ ++tc;
+ GridStore.unlink(db, names[i], options, function(err, result) {
+ if(--tc == 0) {
+ callback(null, self);
+ }
+ });
+ }
+ } else {
+ new GridStore(db, names, "w", options).open(function(err, gridStore) {
+ if(err) return callback(err);
+ deleteChunks(gridStore, function(err, result) {
+ if(err) return callback(err);
+ gridStore.collection(function(err, collection) {
+ if(err) return callback(err);
+ collection.remove({'_id':gridStore.fileId}, writeConcern, function(err, result) {
+ callback(err, self);
+ });
+ });
+ });
+ });
+ }
+}
+
+define.staticMethod('unlink', {callback: true, promise:true});
+
+/**
+ * @ignore
+ */
+var _writeNormal = function(self, data, close, callback) {
+ // If we have a buffer write it using the writeBuffer method
+ if(Buffer.isBuffer(data)) {
+ return writeBuffer(self, data, close, callback);
+ } else {
+ return writeBuffer(self, new Buffer(data, 'binary'), close, callback);
+ }
+}
+
+/**
+ * @ignore
+ */
+var _setWriteConcernHash = function(options) {
+ var finalOptions = {};
+ if(options.w != null) finalOptions.w = options.w;
+ if(options.journal == true) finalOptions.j = options.journal;
+ if(options.j == true) finalOptions.j = options.j;
+ if(options.fsync == true) finalOptions.fsync = options.fsync;
+ if(options.wtimeout != null) finalOptions.wtimeout = options.wtimeout;
+ return finalOptions;
+}
+
+/**
+ * @ignore
+ */
+var _getWriteConcern = function(self, options) {
+ // Final options
+ var finalOptions = {w:1};
+ options = options || {};
+
+ // Local options verification
+ if(options.w != null || typeof options.j == 'boolean' || typeof options.journal == 'boolean' || typeof options.fsync == 'boolean') {
+ finalOptions = _setWriteConcernHash(options);
+ } else if(options.safe != null && typeof options.safe == 'object') {
+ finalOptions = _setWriteConcernHash(options.safe);
+ } else if(typeof options.safe == "boolean") {
+ finalOptions = {w: (options.safe ? 1 : 0)};
+ } else if(self.options.w != null || typeof self.options.j == 'boolean' || typeof self.options.journal == 'boolean' || typeof self.options.fsync == 'boolean') {
+ finalOptions = _setWriteConcernHash(self.options);
+ } else if(self.safe && (self.safe.w != null || typeof self.safe.j == 'boolean' || typeof self.safe.journal == 'boolean' || typeof self.safe.fsync == 'boolean')) {
+ finalOptions = _setWriteConcernHash(self.safe);
+ } else if(typeof self.safe == "boolean") {
+ finalOptions = {w: (self.safe ? 1 : 0)};
+ }
+
+ // Ensure we don't have an invalid combination of write concerns
+ if(finalOptions.w < 1
+ && (finalOptions.journal == true || finalOptions.j == true || finalOptions.fsync == true)) throw MongoError.create({message: "No acknowledgement using w < 1 cannot be combined with journal:true or fsync:true", driver:true});
+
+ // Return the options
+ return finalOptions;
+}
+
+/**
+ * Create a new GridStoreStream instance (INTERNAL TYPE, do not instantiate directly)
+ *
+ * @class
+ * @extends external:Duplex
+ * @return {GridStoreStream} a GridStoreStream instance.
+ * @deprecated Use GridFSBucket API instead
+ */
+var GridStoreStream = function(gs) {
+ var self = this;
+ // Initialize the duplex stream
+ Duplex.call(this);
+
+ // Get the gridstore
+ this.gs = gs;
+
+ // End called
+ this.endCalled = false;
+
+ // Remaining bytes to read and the current seek position
+ this.totalBytesToRead = this.gs.length - this.gs.position;
+ this.seekPosition = this.gs.position;
+}
+
+//
+// Inherit duplex
+inherits(GridStoreStream, Duplex);
+
+GridStoreStream.prototype._pipe = GridStoreStream.prototype.pipe;
+
+// Set up override
+GridStoreStream.prototype.pipe = function(destination) {
+ var self = this;
+
+ // Only open gridstore if not already open
+ if(!self.gs.isOpen) {
+ self.gs.open(function(err) {
+ if(err) return self.emit('error', err);
+ self.totalBytesToRead = self.gs.length - self.gs.position;
+ self._pipe.apply(self, [destination]);
+ });
+ } else {
+ self.totalBytesToRead = self.gs.length - self.gs.position;
+ self._pipe.apply(self, [destination]);
+ }
+
+ return destination;
+}
+
+// Called by stream
+GridStoreStream.prototype._read = function(n) {
+ var self = this;
+
+ var read = function() {
+ // Read data
+ self.gs.read(length, function(err, buffer) {
+ if(err && !self.endCalled) return self.emit('error', err);
+
+ // Stream is closed
+ if(self.endCalled || buffer == null) return self.push(null);
+ // Remove bytes read
+ if(buffer.length <= self.totalBytesToRead) {
+ self.totalBytesToRead = self.totalBytesToRead - buffer.length;
+ self.push(buffer);
+ } else if(buffer.length > self.totalBytesToRead) {
+ self.totalBytesToRead = self.totalBytesToRead - buffer._index;
+ self.push(buffer.slice(0, buffer._index));
+ }
+
+ // Finished reading
+ if(self.totalBytesToRead <= 0) {
+ self.endCalled = true;
+ }
+ });
+ }
+
+ // Set read length
+ var length = self.gs.length < self.gs.chunkSize ? self.gs.length - self.seekPosition : self.gs.chunkSize;
+ if(!self.gs.isOpen) {
+ self.gs.open(function(err, gs) {
+ self.totalBytesToRead = self.gs.length - self.gs.position;
+ if(err) return self.emit('error', err);
+ read();
+ });
+ } else {
+ read();
+ }
+}
+
+GridStoreStream.prototype.destroy = function() {
+ this.pause();
+ this.endCalled = true;
+ this.gs.close();
+ this.emit('end');
+}
+
+GridStoreStream.prototype.write = function(chunk, encoding, callback) {
+ var self = this;
+ if(self.endCalled) return self.emit('error', MongoError.create({message: 'attempting to write to stream after end called', driver:true}))
+ // Do we have to open the gridstore
+ if(!self.gs.isOpen) {
+ self.gs.open(function() {
+ self.gs.isOpen = true;
+ self.gs.write(chunk, function() {
+ process.nextTick(function() {
+ self.emit('drain');
+ });
+ });
+ });
+ return false;
+ } else {
+ self.gs.write(chunk, function() {
+ self.emit('drain');
+ });
+ return true;
+ }
+}
+
+GridStoreStream.prototype.end = function(chunk, encoding, callback) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 0);
+ callback = args.pop();
+ if(typeof callback != 'function') args.push(callback);
+ chunk = args.length ? args.shift() : null;
+ encoding = args.length ? args.shift() : null;
+ self.endCalled = true;
+
+ // If a final chunk was supplied, write it then close; return so close is not called twice
+ if(chunk) {
+ return self.gs.write(chunk, function() {
+ self.gs.close(function() {
+ if(typeof callback == 'function') callback();
+ self.emit('end')
+ });
+ });
+ }
+
+ self.gs.close(function() {
+ if(typeof callback == 'function') callback();
+ self.emit('end')
+ });
+}
+
+/**
+ * The read() method pulls some data out of the internal buffer and returns it. If there is no data available, then it will return null.
+ * @function external:Duplex#read
+ * @param {number} size Optional argument to specify how much data to read.
+ * @return {(String | Buffer | null)}
+ */
+
+/**
+ * Call this function to cause the stream to return strings of the specified encoding instead of Buffer objects.
+ * @function external:Duplex#setEncoding
+ * @param {string} encoding The encoding to use.
+ * @return {null}
+ */
+
+/**
+ * This method will cause the readable stream to resume emitting data events.
+ * @function external:Duplex#resume
+ * @return {null}
+ */
+
+/**
+ * This method will cause a stream in flowing-mode to stop emitting data events. Any data that becomes available will remain in the internal buffer.
+ * @function external:Duplex#pause
+ * @return {null}
+ */
+
+/**
+ * This method pulls all the data out of a readable stream, and writes it to the supplied destination, automatically managing the flow so that the destination is not overwhelmed by a fast readable stream.
+ * @function external:Duplex#pipe
+ * @param {Writable} destination The destination for writing data
+ * @param {object} [options] Pipe options
+ * @return {null}
+ */
+
+/**
+ * This method will remove the hooks set up for a previous pipe() call.
+ * @function external:Duplex#unpipe
+ * @param {Writable} [destination] The destination for writing data
+ * @return {null}
+ */
+
+/**
+ * This is useful in certain cases where a stream is being consumed by a parser, which needs to "un-consume" some data that it has optimistically pulled out of the source, so that the stream can be passed on to some other party.
+ * @function external:Duplex#unshift
+ * @param {(Buffer|string)} chunk Chunk of data to unshift onto the read queue.
+ * @return {null}
+ */
+
+/**
+ * Versions of Node prior to v0.10 had streams that did not implement the entire Streams API as it is today. (See "Compatibility" below for more information.)
+ * @function external:Duplex#wrap
+ * @param {Stream} stream An "old style" readable stream.
+ * @return {null}
+ */
+
+/**
+ * This method writes some data to the underlying system, and calls the supplied callback once the data has been fully handled.
+ * @function external:Duplex#write
+ * @param {(string|Buffer)} chunk The data to write
+ * @param {string} encoding The encoding, if chunk is a String
+ * @param {function} callback Callback for when this chunk of data is flushed
+ * @return {boolean}
+ */
+
+/**
+ * Call this method when no more data will be written to the stream. If supplied, the callback is attached as a listener on the finish event.
+ * @function external:Duplex#end
+ * @param {(string|Buffer)} chunk The data to write
+ * @param {string} encoding The encoding, if chunk is a String
+ * @param {function} callback Callback for when this chunk of data is flushed
+ * @return {null}
+ */
+
+/**
+ * GridStoreStream stream data event, fired for each chunk of data read from the file.
+ *
+ * @event GridStoreStream#data
+ * @type {object}
+ */
+
+/**
+ * GridStoreStream stream end event
+ *
+ * @event GridStoreStream#end
+ * @type {null}
+ */
+
+/**
+ * GridStoreStream stream close event
+ *
+ * @event GridStoreStream#close
+ * @type {null}
+ */
+
+/**
+ * GridStoreStream stream readable event
+ *
+ * @event GridStoreStream#readable
+ * @type {null}
+ */
+
+/**
+ * GridStoreStream stream drain event
+ *
+ * @event GridStoreStream#drain
+ * @type {null}
+ */
+
+/**
+ * GridStoreStream stream finish event
+ *
+ * @event GridStoreStream#finish
+ * @type {null}
+ */
+
+/**
+ * GridStoreStream stream pipe event
+ *
+ * @event GridStoreStream#pipe
+ * @type {null}
+ */
+
+/**
+ * GridStoreStream stream unpipe event
+ *
+ * @event GridStoreStream#unpipe
+ * @type {null}
+ */
+
+/**
+ * GridStoreStream stream error event
+ *
+ * @event GridStoreStream#error
+ * @type {null}
+ */
+
+/**
+ * @ignore
+ */
+module.exports = GridStore;
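Aside (not part of the patch): `GridStoreStream.prototype.end` above shuffles its arguments so that the trailing callback is optional. A self-contained sketch of that pattern follows; the helper name `parseEndArgs` is illustrative only and does not exist in the driver.

```javascript
// Illustrative sketch of the optional-trailing-callback pattern used by
// GridStoreStream.prototype.end: pop the last argument, and only treat it
// as the callback if it is actually a function.
function parseEndArgs() {
  var args = Array.prototype.slice.call(arguments, 0);
  var callback = args.pop();
  if (typeof callback != 'function') {
    args.push(callback); // not a callback; put it back
    callback = null;
  }
  var chunk = args.length ? args.shift() : null;
  var encoding = args.length ? args.shift() : null;
  return { chunk: chunk, encoding: encoding, callback: callback };
}

var r1 = parseEndArgs('data');
console.log(r1.chunk, r1.callback); // 'data' null
var r2 = parseEndArgs('data', 'utf8', function() {});
console.log(r2.encoding, typeof r2.callback); // 'utf8' 'function'
```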
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/metadata.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/metadata.js
new file mode 100644
index 0000000..7dae562
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/metadata.js
@@ -0,0 +1,64 @@
+var f = require('util').format;
+
+var Define = function(name, object, stream) {
+ this.name = name;
+ this.object = object;
+ this.stream = typeof stream == 'boolean' ? stream : false;
+ this.instrumentations = {};
+}
+
+Define.prototype.classMethod = function(name, options) {
+ var keys = Object.keys(options).sort();
+ var key = generateKey(keys, options);
+
+ // Add a list of instrumentations
+ if(this.instrumentations[key] == null) {
+ this.instrumentations[key] = {
+ methods: [], options: options
+ }
+ }
+
+ // Push to list of method for this instrumentation
+ this.instrumentations[key].methods.push(name);
+}
+
+var generateKey = function(keys, options) {
+ var parts = [];
+ for(var i = 0; i < keys.length; i++) {
+ parts.push(f('%s=%s', keys[i], options[keys[i]]));
+ }
+
+ return parts.join();
+}
+
+Define.prototype.staticMethod = function(name, options) {
+ options.static = true;
+ var keys = Object.keys(options).sort();
+ var key = generateKey(keys, options);
+
+ // Add a list of instrumentations
+ if(this.instrumentations[key] == null) {
+ this.instrumentations[key] = {
+ methods: [], options: options
+ }
+ }
+
+ // Push to list of method for this instrumentation
+ this.instrumentations[key].methods.push(name);
+}
+
+Define.prototype.generate = function(keys, options) {
+ // Generate the return object
+ var object = {
+ name: this.name, obj: this.object, stream: this.stream,
+ instrumentations: []
+ }
+
+ for(var name in this.instrumentations) {
+ object.instrumentations.push(this.instrumentations[name]);
+ }
+
+ return object;
+}
+
+module.exports = Define;
\ No newline at end of file
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/mongo_client.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/mongo_client.js
new file mode 100644
index 0000000..deb1940
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/mongo_client.js
@@ -0,0 +1,328 @@
+"use strict";
+
+var parse = require('./url_parser')
+ , Server = require('./server')
+ , Mongos = require('./mongos')
+ , ReplSet = require('./replset')
+ , Define = require('./metadata')
+ , ReadPreference = require('./read_preference')
+ , Logger = require('mongodb-core').Logger
+ , MongoError = require('mongodb-core').MongoError
+ , Db = require('./db')
+ , dns = require('dns')
+ , f = require('util').format
+ , shallowClone = require('./utils').shallowClone;
+
+/**
+ * @fileOverview The **MongoClient** class allows for making connections to MongoDB.
+ *
+ * @example
+ * var MongoClient = require('mongodb').MongoClient,
+ * test = require('assert');
+ * // Connection url
+ * var url = 'mongodb://localhost:27017/test';
+ * // Connect using MongoClient
+ * MongoClient.connect(url, function(err, db) {
+ * // Get an additional db
+ * db.close();
+ * });
+ */
+
+/**
+ * Creates a new MongoClient instance
+ * @class
+ * @return {MongoClient} a MongoClient instance.
+ */
+function MongoClient() {
+ /**
+ * The callback format for results
+ * @callback MongoClient~connectCallback
+ * @param {MongoError} error An error instance representing the error during the execution.
+ * @param {Db} db The connected database.
+ */
+
+ /**
+ * Connect to MongoDB using a url as documented at
+ *
+ * docs.mongodb.org/manual/reference/connection-string/
+ *
+ * Note that for replica sets the replicaSet query parameter is required in the 2.0 driver
+ *
+ * @method
+ * @param {string} url The connection URI string
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.uri_decode_auth=false] Uri decode the user name and password for authentication
+ * @param {object} [options.db=null] A hash of options to set on the db object, see **Db constructor**
+ * @param {object} [options.server=null] A hash of options to set on the server objects, see **Server** constructor
+ * @param {object} [options.replSet=null] A hash of options to set on the replSet object, see **ReplSet** constructor
+ * @param {object} [options.mongos=null] A hash of options to set on the mongos object, see **Mongos** constructor
+ * @param {object} [options.promiseLibrary=null] A Promise library class the application wishes to use such as Bluebird, must be ES6 compatible
+ * @param {MongoClient~connectCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+ this.connect = MongoClient.connect;
+}
+
+var define = MongoClient.define = new Define('MongoClient', MongoClient, false);
+
+/**
+ * Connect to MongoDB using a url as documented at
+ *
+ * docs.mongodb.org/manual/reference/connection-string/
+ *
+ * Note that for replica sets the replicaSet query parameter is required in the 2.0 driver
+ *
+ * @method
+ * @static
+ * @param {string} url The connection URI string
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.uri_decode_auth=false] Uri decode the user name and password for authentication
+ * @param {object} [options.db=null] A hash of options to set on the db object, see **Db constructor**
+ * @param {object} [options.server=null] A hash of options to set on the server objects, see **Server** constructor
+ * @param {object} [options.replSet=null] A hash of options to set on the replSet object, see **ReplSet** constructor
+ * @param {object} [options.mongos=null] A hash of options to set on the mongos object, see **Mongos** constructor
+ * @param {object} [options.promiseLibrary=null] A Promise library class the application wishes to use such as Bluebird, must be ES6 compatible
+ * @param {MongoClient~connectCallback} [callback] The command result callback
+ * @return {Promise} returns Promise if no callback passed
+ */
+MongoClient.connect = function(url, options, callback) {
+ var args = Array.prototype.slice.call(arguments, 1);
+ callback = typeof args[args.length - 1] == 'function' ? args.pop() : null;
+ options = args.length ? args.shift() : null;
+ options = options || {};
+
+ // Get the promiseLibrary
+ var promiseLibrary = options.promiseLibrary;
+
+ // No promise library selected fall back
+ if(!promiseLibrary) {
+ promiseLibrary = typeof global.Promise == 'function' ?
+ global.Promise : require('es6-promise').Promise;
+ }
+
+ // Return a promise
+ if(typeof callback != 'function') {
+ return new promiseLibrary(function(resolve, reject) {
+ connect(url, options, function(err, db) {
+ if(err) return reject(err);
+ resolve(db);
+ });
+ });
+ }
+
+ // Fallback to callback based connect
+ connect(url, options, callback);
+}
+
+define.staticMethod('connect', {callback: true, promise:true});
+
+var mergeOptions = function(target, source, flatten) {
+ for(var name in source) {
+ if(source[name] && typeof source[name] == 'object' && flatten) {
+ target = mergeOptions(target, source[name], flatten);
+ } else {
+ target[name] = source[name];
+ }
+ }
+
+ return target;
+}
+
+var createUnifiedOptions = function(finalOptions, options) {
+ var childOptions = ['mongos', 'server', 'db'
+ , 'replset', 'db_options', 'server_options', 'rs_options', 'mongos_options'];
+ var noMerge = [];
+
+ for(var name in options) {
+ if(noMerge.indexOf(name.toLowerCase()) != -1) {
+ finalOptions[name] = options[name];
+ } else if(childOptions.indexOf(name.toLowerCase()) != -1) {
+ finalOptions = mergeOptions(finalOptions, options[name], false);
+ } else {
+ if(options[name] && typeof options[name] == 'object' && !Buffer.isBuffer(options[name]) && !Array.isArray(options[name])) {
+ finalOptions = mergeOptions(finalOptions, options[name], true);
+ } else {
+ finalOptions[name] = options[name];
+ }
+ }
+ }
+
+ return finalOptions;
+}
+
+function translateOptions(options) {
+ // If we have a readPreference passed in by the db options
+ if(typeof options.readPreference == 'string' || typeof options.read_preference == 'string') {
+ options.readPreference = new ReadPreference(options.readPreference || options.read_preference);
+ }
+
+ // Do we have readPreference tags, add them
+ if(options.readPreference && (options.readPreferenceTags || options.read_preference_tags)) {
+ options.readPreference.tags = options.readPreferenceTags || options.read_preference_tags;
+ }
+
+ // Do we have maxStalenessMS
+ if(options.maxStalenessMS) {
+ options.readPreference.maxStalenessMS = options.maxStalenessMS;
+ }
+
+ // Set the socket and connection timeouts
+ if(options.socketTimeoutMS == null) options.socketTimeoutMS = 30000;
+ if(options.connectTimeoutMS == null) options.connectTimeoutMS = 30000;
+
+ // Create server instances
+ return options.servers.map(function(serverObj) {
+ return serverObj.domain_socket ?
+ new Server(serverObj.domain_socket, 27017, options)
+ : new Server(serverObj.host, serverObj.port, options);
+ });
+}
+
+function createReplicaset(options, callback) {
+ // Set default options
+ var servers = translateOptions(options);
+ // Create Db instance
+ new Db(options.dbName, new ReplSet(servers, options), options).open(callback);
+}
+
+function createMongos(options, callback) {
+ // Set default options
+ var servers = translateOptions(options);
+ // Create Db instance
+ new Db(options.dbName, new Mongos(servers, options), options).open(callback);
+}
+
+function createServer(options, callback) {
+ // Set default options
+ var servers = translateOptions(options);
+ // Create Db instance
+ new Db(options.dbName, servers[0], options).open(function(err, db) {
+ if(err) return callback(err);
+ // Check if we are really speaking to a mongos
+ var ismaster = db.serverConfig.lastIsMaster();
+
+ // Do we actually have a mongos
+ if(ismaster && ismaster.msg == 'isdbgrid') {
+ // Destroy the current connection
+ db.close();
+ // Create mongos connection instead
+ return createMongos(options, callback);
+ }
+
+ // Otherwise callback
+ callback(err, db);
+ });
+}
+
+function connectHandler(options, callback) {
+ return function (err, db) {
+ if(err) {
+ return process.nextTick(function() {
+ try {
+ callback(err, null);
+ } catch (err) {
+ if(db) db.close();
+ throw err
+ }
+ });
+ }
+
+ // No authentication just reconnect
+ if(!options.auth) {
+ return process.nextTick(function() {
+ try {
+ callback(err, db);
+ } catch (err) {
+ if(db) db.close();
+ throw err
+ }
+ })
+ }
+
+ // What db to authenticate against
+ var authentication_db = db;
+ if(options.authSource) {
+ authentication_db = db.db(options.authSource);
+ }
+
+ // Authenticate
+ authentication_db.authenticate(options.user, options.password, options, function(err, success){
+ if(success){
+ process.nextTick(function() {
+ try {
+ callback(null, db);
+ } catch (err) {
+ if(db) db.close();
+ throw err
+ }
+ });
+ } else {
+ if(db) db.close();
+ process.nextTick(function() {
+ try {
+ callback(err ? err : new Error('Could not authenticate user ' + options.auth[0]), null);
+ } catch (err) {
+ if(db) db.close();
+ throw err
+ }
+ });
+ }
+ });
+ }
+}
+
+/*
+ * Connect using MongoClient
+ */
+var connect = function(url, options, callback) {
+ options = options || {};
+ options = shallowClone(options);
+
+ // If callback is null throw an exception
+ if(callback == null) {
+ throw new Error("no callback function provided");
+ }
+
+ // Get a logger for MongoClient
+ var logger = Logger('MongoClient', options);
+
+ // Parse the string
+ var object = parse(url, options);
+ var _finalOptions = createUnifiedOptions({}, object);
+ _finalOptions = mergeOptions(_finalOptions, object, false);
+ _finalOptions = createUnifiedOptions(_finalOptions, options);
+
+ // Check if we have connection and socket timeout set
+ if(_finalOptions.socketTimeoutMS == null) _finalOptions.socketTimeoutMS = 120000;
+ if(_finalOptions.connectTimeoutMS == null) _finalOptions.connectTimeoutMS = 120000;
+
+ // Failure modes
+ if(object.servers.length == 0) {
+ throw new Error("connection string must contain at least one seed host");
+ }
+
+ function connectCallback(err, db) {
+ if(err && err.message == 'no mongos proxies found in seed list') {
+ if(logger.isWarn()) {
+ logger.warn(f('seed list contains no mongos proxies, replicaset connections requires the parameter replicaSet to be supplied in the URI or options object, mongodb://server:port/db?replicaSet=name'));
+ }
+
+ // Return a more specific error message for MongoClient.connect
+ return callback(new MongoError('seed list contains no mongos proxies, replicaset connections requires the parameter replicaSet to be supplied in the URI or options object, mongodb://server:port/db?replicaSet=name'));
+ }
+
+ // Return the error and db instance
+ callback(err, db);
+ }
+
+ // Do we have a replicaset then skip discovery and go straight to connectivity
+ if(_finalOptions.replicaSet || _finalOptions.rs_name) {
+ return createReplicaset(_finalOptions, connectHandler(_finalOptions, connectCallback));
+ } else if(object.servers.length > 1) {
+ return createMongos(_finalOptions, connectHandler(_finalOptions, connectCallback));
+ } else {
+ return createServer(_finalOptions, connectHandler(_finalOptions, connectCallback));
+ }
+}
+
+module.exports = MongoClient
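Aside (not part of the patch): `createUnifiedOptions` above relies on the `flatten` flag of `mergeOptions` to hoist nested option groups such as `server` or `replSet` onto the top-level options object. A self-contained sketch of that flattening behaviour, with the same logic copied out and hypothetical option names:

```javascript
// Sketch of the mergeOptions flattening used by MongoClient.connect above:
// with flatten=true, nested objects are recursed into and their leaf values
// are copied onto the top-level target.
function mergeOptions(target, source, flatten) {
  for (var name in source) {
    if (source[name] && typeof source[name] == 'object' && flatten) {
      target = mergeOptions(target, source[name], flatten);
    } else {
      target[name] = source[name];
    }
  }
  return target;
}

// { server: { poolSize: 10 } } is flattened so poolSize lands at the top level.
var flat = mergeOptions({}, { server: { poolSize: 10 }, w: 1 }, true);
console.log(flat); // { poolSize: 10, w: 1 }
```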
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/mongos.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/mongos.js
new file mode 100644
index 0000000..a630b6a
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/mongos.js
@@ -0,0 +1,509 @@
+"use strict";
+
+var EventEmitter = require('events').EventEmitter
+ , inherits = require('util').inherits
+ , f = require('util').format
+ , ServerCapabilities = require('./topology_base').ServerCapabilities
+ , MongoCR = require('mongodb-core').MongoCR
+ , MongoError = require('mongodb-core').MongoError
+ , CMongos = require('mongodb-core').Mongos
+ , Cursor = require('./cursor')
+ , AggregationCursor = require('./aggregation_cursor')
+ , CommandCursor = require('./command_cursor')
+ , Define = require('./metadata')
+ , Server = require('./server')
+ , Store = require('./topology_base').Store
+ , shallowClone = require('./utils').shallowClone
+ , MAX_JS_INT = require('./utils').MAX_JS_INT
+ , translateOptions = require('./utils').translateOptions
+ , filterOptions = require('./utils').filterOptions
+ , mergeOptions = require('./utils').mergeOptions
+ , os = require('os');
+
+// Get package.json variable
+var driverVersion = require(__dirname + '/../package.json').version;
+var nodejsversion = f('Node.js %s, %s', process.version, os.endianness());
+var type = os.type();
+var name = process.platform;
+var architecture = process.arch;
+var release = os.release();
+
+/**
+ * @fileOverview The **Mongos** class represents a Mongos proxy topology and is
+ * used to construct connections.
+ *
+ * **Mongos Should not be used, use MongoClient.connect**
+ * @example
+ * var Db = require('mongodb').Db,
+ * Mongos = require('mongodb').Mongos,
+ * Server = require('mongodb').Server,
+ * test = require('assert');
+ * // Connect using Mongos
+ * var server = new Server('localhost', 27017);
+ * var db = new Db('test', new Mongos([server]));
+ * db.open(function(err, db) {
+ * // Get an additional db
+ * db.close();
+ * });
+ */
+
+ // Allowed parameters
+ var legalOptionNames = ['ha', 'haInterval', 'acceptableLatencyMS'
+ , 'poolSize', 'ssl', 'checkServerIdentity', 'sslValidate'
+ , 'sslCA', 'sslCert', 'sslKey', 'sslPass', 'socketOptions', 'bufferMaxEntries'
+ , 'store', 'auto_reconnect', 'autoReconnect', 'emitError'
+ , 'keepAlive', 'noDelay', 'connectTimeoutMS', 'socketTimeoutMS'
+ , 'loggerLevel', 'logger', 'reconnectTries', 'appname', 'domainsEnabled'
+ , 'servername', 'promoteLongs', 'promoteValues', 'promoteBuffers'];
+
+/**
+ * Creates a new Mongos instance
+ * @class
+ * @deprecated
+ * @param {Server[]} servers A seedlist of servers participating in the replicaset.
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.ha=true] Turn on high availability monitoring.
+ * @param {number} [options.haInterval=5000] Time between each replicaset status check.
+ * @param {number} [options.poolSize=5] Number of connections in the connection pool for each server instance, set to 5 as default for legacy reasons.
+ * @param {number} [options.acceptableLatencyMS=15] Cutoff latency point in MS for MongoS proxy selection
+ * @param {boolean} [options.ssl=false] Use ssl connection (needs to have a mongod server with ssl support)
+ * @param {boolean|function} [options.checkServerIdentity=true] Ensure we check server identify during SSL, set to false to disable checking. Only works for Node 0.12.x or higher. You can pass in a boolean or your own checkServerIdentity override function.
+ * @param {object} [options.sslValidate=true] Validate mongod server certificate against ca (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {array} [options.sslCA=null] Array of valid certificates either as Buffers or Strings (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {(Buffer|string)} [options.sslCert=null] String or buffer containing the certificate we wish to present (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {(Buffer|string)} [options.sslKey=null] String or buffer containing the certificate private key we wish to present (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {(Buffer|string)} [options.sslPass=null] String or buffer containing the certificate password (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {string} [options.servername=null] String containing the server name requested via TLS SNI.
+ * @param {object} [options.socketOptions=null] Socket options
+ * @param {boolean} [options.socketOptions.noDelay=true] TCP Socket NoDelay option.
+ * @param {number} [options.socketOptions.keepAlive=0] TCP KeepAlive on the socket with a X ms delay before start.
+ * @param {number} [options.socketOptions.connectTimeoutMS=0] TCP Connection timeout setting
+ * @param {number} [options.socketOptions.socketTimeoutMS=0] TCP Socket timeout setting
+ * @param {boolean} [options.domainsEnabled=false] Enable the wrapping of the callback in the current domain, disabled by default to avoid perf hit.
+ * @fires Mongos#connect
+ * @fires Mongos#ha
+ * @fires Mongos#joined
+ * @fires Mongos#left
+ * @fires Mongos#fullsetup
+ * @fires Mongos#open
+ * @fires Mongos#close
+ * @fires Mongos#error
+ * @fires Mongos#timeout
+ * @fires Mongos#parseError
+ * @return {Mongos} a Mongos instance.
+ */
+var Mongos = function(servers, options) {
+ if(!(this instanceof Mongos)) return new Mongos(servers, options);
+ options = options || {};
+ var self = this;
+
+ // Filter the options
+ options = filterOptions(options, legalOptionNames);
+
+ // Ensure all the instances are Server
+ for(var i = 0; i < servers.length; i++) {
+ if(!(servers[i] instanceof Server)) {
+ throw MongoError.create({message: "all seed list instances must be of the Server type", driver:true});
+ }
+ }
+
+ // Stored options
+ var storeOptions = {
+ force: false
+ , bufferMaxEntries: typeof options.bufferMaxEntries == 'number' ? options.bufferMaxEntries : MAX_JS_INT
+ }
+
+ // Shared global store
+ var store = options.store || new Store(self, storeOptions);
+
+ // Set up event emitter
+ EventEmitter.call(this);
+
+ // Build seed list
+ var seedlist = servers.map(function(x) {
+ return {host: x.host, port: x.port}
+ });
+
+ // Get the reconnect option
+ var reconnect = typeof options.auto_reconnect == 'boolean' ? options.auto_reconnect : true;
+ reconnect = typeof options.autoReconnect == 'boolean' ? options.autoReconnect : reconnect;
+
+ // Clone options
+ var clonedOptions = mergeOptions({}, {
+ disconnectHandler: store,
+ cursorFactory: Cursor,
+ reconnect: reconnect,
+ emitError: typeof options.emitError == 'boolean' ? options.emitError : true,
+ size: typeof options.poolSize == 'number' ? options.poolSize : 5
+ });
+
+ // Translate any SSL options and other connectivity options
+ clonedOptions = translateOptions(clonedOptions, options);
+
+ // Socket options
+ var socketOptions = options.socketOptions && Object.keys(options.socketOptions).length > 0
+ ? options.socketOptions : options;
+
+ // Translate all the options to the mongodb-core ones
+ clonedOptions = translateOptions(clonedOptions, socketOptions);
+ if(typeof clonedOptions.keepAlive == 'number') {
+ clonedOptions.keepAliveInitialDelay = clonedOptions.keepAlive;
+ clonedOptions.keepAlive = clonedOptions.keepAlive > 0;
+ }
+
+ // Build default client information
+ this.clientInfo = {
+ driver: {
+ name: "nodejs",
+ version: driverVersion
+ },
+ os: {
+ type: type,
+ name: name,
+ architecture: architecture,
+ version: release
+ },
+ platform: nodejsversion
+ }
+
+ // Build default client information
+ clonedOptions.clientInfo = this.clientInfo;
+ // Do we have an application specific string
+ if(options.appname) {
+ clonedOptions.clientInfo.application = { name: options.appname };
+ }
+
+ // Create the Mongos
+ var mongos = new CMongos(seedlist, clonedOptions)
+ // Server capabilities
+ var sCapabilities = null;
+
+ // Internal state
+ this.s = {
+ // Create the Mongos
+ mongos: mongos
+ // Server capabilities
+ , sCapabilities: sCapabilities
+ // Debug turned on
+ , debug: clonedOptions.debug
+ // Store option defaults
+ , storeOptions: storeOptions
+ // Cloned options
+ , clonedOptions: clonedOptions
+ // Actual store of callbacks
+ , store: store
+ // Options
+ , options: options
+ }
+}
+
+var define = Mongos.define = new Define('Mongos', Mongos, false);
+
+/**
+ * @ignore
+ */
+inherits(Mongos, EventEmitter);
+
+// Last ismaster
+Object.defineProperty(Mongos.prototype, 'isMasterDoc', {
+ enumerable:true, get: function() { return this.s.mongos.lastIsMaster(); }
+});
+
+// BSON property
+Object.defineProperty(Mongos.prototype, 'bson', {
+ enumerable: true, get: function() {
+ return this.s.mongos.s.bson;
+ }
+});
+
+Object.defineProperty(Mongos.prototype, 'haInterval', {
+ enumerable:true, get: function() { return this.s.mongos.s.haInterval; }
+});
+
+// Connect
+Mongos.prototype.connect = function(db, _options, callback) {
+ var self = this;
+ if('function' === typeof _options) callback = _options, _options = {};
+ if(_options == null) _options = {};
+ if(!('function' === typeof callback)) callback = null;
+ self.s.options = _options;
+
+ // Update bufferMaxEntries
+ self.s.storeOptions.bufferMaxEntries = db.bufferMaxEntries;
+
+ // Error handler
+ var connectErrorHandler = function(event) {
+ return function(err) {
+ // Remove all event handlers
+ var events = ['timeout', 'error', 'close'];
+ events.forEach(function(e) {
+ self.removeListener(e, connectErrorHandler);
+ });
+
+ self.s.mongos.removeListener('connect', connectErrorHandler);
+
+ // Try to callback
+ try {
+ callback(err);
+ } catch(err) {
+ process.nextTick(function() { throw err; })
+ }
+ }
+ }
+
+ // Actual handler
+ var errorHandler = function(event) {
+ return function(err) {
+ if(event != 'error') {
+ self.emit(event, err);
+ }
+ }
+ }
+
+ // Error handler
+ var reconnectHandler = function(err) {
+ self.emit('reconnect');
+ self.s.store.execute();
+ }
+
+ // relay the event
+ var relay = function(event) {
+ return function(t, server) {
+ self.emit(event, t, server);
+ }
+ }
+
+ // Connect handler
+ var connectHandler = function() {
+ // Clear out all the current handlers left over
+ ["timeout", "error", "close", 'serverOpening', 'serverDescriptionChanged', 'serverHeartbeatStarted',
+ 'serverHeartbeatSucceeded', 'serverHeartbeatFailed', 'serverClosed', 'topologyOpening',
+ 'topologyClosed', 'topologyDescriptionChanged'].forEach(function(e) {
+ self.s.mongos.removeAllListeners(e);
+ });
+
+ // Set up listeners
+ self.s.mongos.once('timeout', errorHandler('timeout'));
+ self.s.mongos.once('error', errorHandler('error'));
+ self.s.mongos.once('close', errorHandler('close'));
+
+ // Set up SDAM listeners
+ self.s.mongos.on('serverDescriptionChanged', relay('serverDescriptionChanged'));
+ self.s.mongos.on('serverHeartbeatStarted', relay('serverHeartbeatStarted'));
+ self.s.mongos.on('serverHeartbeatSucceeded', relay('serverHeartbeatSucceeded'));
+ self.s.mongos.on('serverHeartbeatFailed', relay('serverHeartbeatFailed'));
+ self.s.mongos.on('serverOpening', relay('serverOpening'));
+ self.s.mongos.on('serverClosed', relay('serverClosed'));
+ self.s.mongos.on('topologyOpening', relay('topologyOpening'));
+ self.s.mongos.on('topologyClosed', relay('topologyClosed'));
+ self.s.mongos.on('topologyDescriptionChanged', relay('topologyDescriptionChanged'));
+
+ // Set up serverConfig listeners
+ self.s.mongos.on('fullsetup', relay('fullsetup'));
+
+ // Emit open event
+ self.emit('open', null, self);
+
+ // Return correctly
+ try {
+ callback(null, self);
+ } catch(err) {
+ process.nextTick(function() { throw err; })
+ }
+ }
+
+ // Set up listeners
+ self.s.mongos.once('timeout', connectErrorHandler('timeout'));
+ self.s.mongos.once('error', connectErrorHandler('error'));
+ self.s.mongos.once('close', connectErrorHandler('close'));
+ self.s.mongos.once('connect', connectHandler);
+ // Join and leave events
+ self.s.mongos.on('joined', relay('joined'));
+ self.s.mongos.on('left', relay('left'));
+
+ // Reconnect server
+ self.s.mongos.on('reconnect', reconnectHandler);
+
+ // Start connection
+ self.s.mongos.connect(_options);
+}
+
+// Server capabilities
+Mongos.prototype.capabilities = function() {
+ if(this.s.sCapabilities) return this.s.sCapabilities;
+ if(this.s.mongos.lastIsMaster() == null) return null;
+ this.s.sCapabilities = new ServerCapabilities(this.s.mongos.lastIsMaster());
+ return this.s.sCapabilities;
+}
+
+define.classMethod('capabilities', {callback: false, promise:false, returns: [ServerCapabilities]});
+
+// Command
+Mongos.prototype.command = function(ns, cmd, options, callback) {
+ this.s.mongos.command(ns, cmd, options, callback);
+}
+
+define.classMethod('command', {callback: true, promise:false});
+
+// Insert
+Mongos.prototype.insert = function(ns, ops, options, callback) {
+ this.s.mongos.insert(ns, ops, options, function(e, m) {
+ callback(e, m)
+ });
+}
+
+define.classMethod('insert', {callback: true, promise:false});
+
+// Update
+Mongos.prototype.update = function(ns, ops, options, callback) {
+ this.s.mongos.update(ns, ops, options, callback);
+}
+
+define.classMethod('update', {callback: true, promise:false});
+
+// Remove
+Mongos.prototype.remove = function(ns, ops, options, callback) {
+ this.s.mongos.remove(ns, ops, options, callback);
+}
+
+define.classMethod('remove', {callback: true, promise:false});
+
+// Destroyed
+Mongos.prototype.isDestroyed = function() {
+ return this.s.mongos.isDestroyed();
+}
+
+// IsConnected
+Mongos.prototype.isConnected = function() {
+ return this.s.mongos.isConnected();
+}
+
+define.classMethod('isConnected', {callback: false, promise:false, returns: [Boolean]});
+
+// Cursor
+Mongos.prototype.cursor = function(ns, cmd, options) {
+ options.disconnectHandler = this.s.store;
+ return this.s.mongos.cursor(ns, cmd, options);
+}
+
+define.classMethod('cursor', {callback: false, promise:false, returns: [Cursor, AggregationCursor, CommandCursor]});
+
+Mongos.prototype.lastIsMaster = function() {
+ return this.s.mongos.lastIsMaster();
+}
+
+Mongos.prototype.close = function(forceClosed) {
+ this.s.mongos.destroy();
+ // We need to wash out all stored processes
+ if(forceClosed == true) {
+ this.s.storeOptions.force = forceClosed;
+ this.s.store.flush();
+ }
+}
+
+define.classMethod('close', {callback: false, promise:false});
+
+Mongos.prototype.auth = function() {
+ var args = Array.prototype.slice.call(arguments, 0);
+ this.s.mongos.auth.apply(this.s.mongos, args);
+}
+
+define.classMethod('auth', {callback: true, promise:false});
+
+Mongos.prototype.logout = function() {
+ var args = Array.prototype.slice.call(arguments, 0);
+ this.s.mongos.logout.apply(this.s.mongos, args);
+}
+
+define.classMethod('logout', {callback: true, promise:false});
+
+/**
+ * All raw connections
+ * @method
+ * @return {array}
+ */
+Mongos.prototype.connections = function() {
+ return this.s.mongos.connections();
+}
+
+define.classMethod('connections', {callback: false, promise:false, returns:[Array]});
+
+/**
+ * A mongos connect event, used to verify that the connection is up and running
+ *
+ * @event Mongos#connect
+ * @type {Mongos}
+ */
+
+/**
+ * The mongos high availability event
+ *
+ * @event Mongos#ha
+ * @type {function}
+ * @param {string} type The stage in the high availability event (start|end)
+ * @param {boolean} data.norepeat Whether this is a repeating high availability process or a single execution only
+ * @param {number} data.id The id for this high availability request
+ * @param {object} data.state An object containing the information about the current replicaset
+ */
+
+/**
+ * A server member left the mongos set
+ *
+ * @event Mongos#left
+ * @type {function}
+ * @param {string} type The type of member that left (primary|secondary|arbiter)
+ * @param {Server} server The server object that left
+ */
+
+/**
+ * A server member joined the mongos set
+ *
+ * @event Mongos#joined
+ * @type {function}
+ * @param {string} type The type of member that joined (primary|secondary|arbiter)
+ * @param {Server} server The server object that joined
+ */
+
+/**
+ * Mongos fullsetup event, emitted when all proxies in the topology have been connected to.
+ *
+ * @event Mongos#fullsetup
+ * @type {Mongos}
+ */
+
+/**
+ * Mongos open event, emitted when mongos can start processing commands.
+ *
+ * @event Mongos#open
+ * @type {Mongos}
+ */
+
+/**
+ * Mongos close event
+ *
+ * @event Mongos#close
+ * @type {object}
+ */
+
+/**
+ * Mongos error event, emitted if there is an error listener.
+ *
+ * @event Mongos#error
+ * @type {MongoError}
+ */
+
+/**
+ * Mongos timeout event
+ *
+ * @event Mongos#timeout
+ * @type {object}
+ */
+
+/**
+ * Mongos parseError event
+ *
+ * @event Mongos#parseError
+ * @type {object}
+ */
+
+module.exports = Mongos;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/read_preference.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/read_preference.js
new file mode 100644
index 0000000..1763b2c
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/read_preference.js
@@ -0,0 +1,131 @@
+"use strict";
+
+/**
+ * @fileOverview The **ReadPreference** class represents a MongoDB read preference and is
+ * used to construct connections.
+ *
+ * @example
+ * var Db = require('mongodb').Db,
+ * ReplSet = require('mongodb').ReplSet,
+ * Server = require('mongodb').Server,
+ * ReadPreference = require('mongodb').ReadPreference,
+ * test = require('assert');
+ * // Connect using ReplSet
+ * var server = new Server('localhost', 27017);
+ * var db = new Db('test', new ReplSet([server]));
+ * db.open(function(err, db) {
+ * test.equal(null, err);
+ * // Perform a read
+ * var cursor = db.collection('t').find({});
+ * cursor.setReadPreference(ReadPreference.PRIMARY);
+ * cursor.toArray(function(err, docs) {
+ * test.equal(null, err);
+ * db.close();
+ * });
+ * });
+ */
+
+/**
+ * Creates a new ReadPreference instance
+ *
+ * Read Preferences
+ * - **ReadPreference.PRIMARY**, Read from primary only. All operations produce an error (throw an exception where applicable) if primary is unavailable. Cannot be combined with tags. This is the default.
+ * - **ReadPreference.PRIMARY_PREFERRED**, Read from primary if available, otherwise a secondary.
+ * - **ReadPreference.SECONDARY**, Read from secondary if available, otherwise error.
+ * - **ReadPreference.SECONDARY_PREFERRED**, Read from a secondary if available, otherwise read from the primary.
+ * - **ReadPreference.NEAREST**, All modes read from among the nearest candidates, but unlike other modes, NEAREST will include both the primary and all secondaries in the random selection.
+ *
+ * @class
+ * @param {string} mode The ReadPreference mode as listed above.
+ * @param {array|object} tags An object representing read preference tags.
+ * @param {object} [options] Additional read preference options
+ * @param {number} [options.maxStalenessMS] Max secondary read staleness in milliseconds
+ * @return {ReadPreference} a ReadPreference instance.
+ */
+var ReadPreference = function(mode, tags, options) {
+ if(!(this instanceof ReadPreference)) {
+ return new ReadPreference(mode, tags, options);
+ }
+
+ this._type = 'ReadPreference';
+ this.mode = mode;
+ this.tags = tags;
+ this.options = options;
+
+ // If no tags were passed in
+ if(tags && typeof tags == 'object' && !Array.isArray(tags)) {
+ if(tags.maxStalenessMS) {
+ this.options = tags;
+ this.tags = null;
+ }
+ }
+
+ // Add the maxStalenessMS value to the read Preference
+ if(this.options && this.options.maxStalenessMS) {
+ this.maxStalenessMS = this.options.maxStalenessMS;
+ }
+}
+
+/**
+ * Validate if a mode is legal
+ *
+ * @method
+ * @param {string} mode The string representing the read preference mode.
+ * @return {boolean}
+ */
+ReadPreference.isValid = function(_mode) {
+ return (_mode == ReadPreference.PRIMARY || _mode == ReadPreference.PRIMARY_PREFERRED
+ || _mode == ReadPreference.SECONDARY || _mode == ReadPreference.SECONDARY_PREFERRED
+ || _mode == ReadPreference.NEAREST
+ || _mode == true || _mode == false || _mode == null);
+}
+
+/**
+ * Validate if a mode is legal
+ *
+ * @method
+ * @param {string} mode The string representing the read preference mode.
+ * @return {boolean}
+ */
+ReadPreference.prototype.isValid = function(mode) {
+ var _mode = typeof mode == 'string' ? mode : this.mode;
+ return ReadPreference.isValid(_mode);
+}
+
+/**
+ * @ignore
+ */
+ReadPreference.prototype.toObject = function() {
+ var object = {mode:this.mode};
+
+ if(this.tags != null) {
+ object['tags'] = this.tags;
+ }
+
+ if(this.maxStalenessMS) {
+ object['maxStalenessMS'] = this.maxStalenessMS;
+ }
+
+ return object;
+}
+
+/**
+ * @ignore
+ */
+ReadPreference.prototype.toJSON = function() {
+ return this.toObject();
+}
+
+/**
+ * @ignore
+ */
+ReadPreference.PRIMARY = 'primary';
+ReadPreference.PRIMARY_PREFERRED = 'primaryPreferred';
+ReadPreference.SECONDARY = 'secondary';
+ReadPreference.SECONDARY_PREFERRED = 'secondaryPreferred';
+ReadPreference.NEAREST = 'nearest';
+
+/**
+ * @ignore
+ */
+module.exports = ReadPreference;
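The constructor above quietly re-routes an options-shaped object passed in the `tags` slot (one carrying `maxStalenessMS`) into `this.options`. A trimmed standalone re-implementation of just that disambiguation (hypothetical `SketchReadPreference`, not the driver's export) makes the two call shapes explicit:

```javascript
"use strict";

// Trimmed sketch of the ReadPreference tags/options disambiguation
// shown above; only the constructor logic and toObject are kept.
function SketchReadPreference(mode, tags, options) {
  if (!(this instanceof SketchReadPreference)) {
    return new SketchReadPreference(mode, tags, options);
  }
  this.mode = mode;
  this.tags = tags;
  this.options = options;

  // A plain (non-array) object in the tags slot that carries
  // maxStalenessMS is treated as the options argument instead.
  if (tags && typeof tags === 'object' && !Array.isArray(tags)) {
    if (tags.maxStalenessMS) {
      this.options = tags;
      this.tags = null;
    }
  }

  // Lift maxStalenessMS onto the instance, mirroring the driver code
  if (this.options && this.options.maxStalenessMS) {
    this.maxStalenessMS = this.options.maxStalenessMS;
  }
}

SketchReadPreference.prototype.toObject = function() {
  var object = { mode: this.mode };
  if (this.tags != null) object.tags = this.tags;
  if (this.maxStalenessMS) object.maxStalenessMS = this.maxStalenessMS;
  return object;
};

// An array of tag sets in the tags slot stays a tag set
var withTags = new SketchReadPreference('secondary', [{ dc: 'ny' }]);
console.log(JSON.stringify(withTags.toObject()));

// An options-shaped object in the tags slot becomes the options
var withStaleness = new SketchReadPreference('secondaryPreferred', { maxStalenessMS: 500 });
console.log(JSON.stringify(withStaleness.toObject()));
```

Note the trade-off this implies: a plain tag document that happened to contain a `maxStalenessMS` key could not be passed as tags in this position.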
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/replset.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/replset.js
new file mode 100644
index 0000000..97791bf
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/replset.js
@@ -0,0 +1,559 @@
+"use strict";
+
+var EventEmitter = require('events').EventEmitter
+ , inherits = require('util').inherits
+ , f = require('util').format
+ , Server = require('./server')
+ , Mongos = require('./mongos')
+ , Cursor = require('./cursor')
+ , AggregationCursor = require('./aggregation_cursor')
+ , CommandCursor = require('./command_cursor')
+ , ReadPreference = require('./read_preference')
+ , MongoCR = require('mongodb-core').MongoCR
+ , MongoError = require('mongodb-core').MongoError
+ , ServerCapabilities = require('./topology_base').ServerCapabilities
+ , Store = require('./topology_base').Store
+ , Define = require('./metadata')
+ , CServer = require('mongodb-core').Server
+ , CReplSet = require('mongodb-core').ReplSet
+ , CoreReadPreference = require('mongodb-core').ReadPreference
+ , shallowClone = require('./utils').shallowClone
+ , MAX_JS_INT = require('./utils').MAX_JS_INT
+ , translateOptions = require('./utils').translateOptions
+ , filterOptions = require('./utils').filterOptions
+ , mergeOptions = require('./utils').mergeOptions
+ , os = require('os');
+/**
+ * @fileOverview The **ReplSet** class represents a replica set topology and is
+ * used to construct connections.
+ *
+ * **ReplSet Should not be used, use MongoClient.connect**
+ * @example
+ * var Db = require('mongodb').Db,
+ * ReplSet = require('mongodb').ReplSet,
+ * Server = require('mongodb').Server,
+ * test = require('assert');
+ * // Connect using ReplSet
+ * var server = new Server('localhost', 27017);
+ * var db = new Db('test', new ReplSet([server]));
+ * db.open(function(err, db) {
+ * // Get an additional db
+ * db.close();
+ * });
+ */
+
+// Allowed parameters
+var legalOptionNames = ['ha', 'haInterval', 'replicaSet', 'rs_name', 'secondaryAcceptableLatencyMS'
+ , 'connectWithNoPrimary', 'poolSize', 'ssl', 'checkServerIdentity', 'sslValidate'
+ , 'sslCA', 'sslCert', 'sslKey', 'sslPass', 'socketOptions', 'bufferMaxEntries'
+ , 'store', 'auto_reconnect', 'autoReconnect', 'emitError'
+ , 'keepAlive', 'noDelay', 'connectTimeoutMS', 'socketTimeoutMS', 'strategy', 'debug'
+ , 'loggerLevel', 'logger', 'reconnectTries', 'appname', 'domainsEnabled'
+ , 'servername', 'promoteLongs', 'promoteValues', 'promoteBuffers'];
+
+// Get package.json variable
+var driverVersion = require(__dirname + '/../package.json').version;
+var nodejsversion = f('Node.js %s, %s', process.version, os.endianness());
+var type = os.type();
+var name = process.platform;
+var architecture = process.arch;
+var release = os.release();
+
+/**
+ * Creates a new ReplSet instance
+ * @class
+ * @deprecated
+ * @param {Server[]} servers A seedlist of servers participating in the replicaset.
+ * @param {object} [options=null] Optional settings.
+ * @param {boolean} [options.ha=true] Turn on high availability monitoring.
+ * @param {number} [options.haInterval=10000] Time between each replicaset status check.
+ * @param {string} [options.replicaSet] The name of the replicaset to connect to.
+ * @param {number} [options.secondaryAcceptableLatencyMS=15] Sets the range of servers to pick when using NEAREST (lowest ping ms + the latency fence, ex: range of 1 to (1 + 15) ms)
+ * @param {boolean} [options.connectWithNoPrimary=false] Sets if the driver should connect even if no primary is available
+ * @param {number} [options.poolSize=5] Number of connections in the connection pool for each server instance, set to 5 as default for legacy reasons.
+ * @param {boolean} [options.ssl=false] Use ssl connection (needs to have a mongod server with ssl support)
+ * @param {boolean|function} [options.checkServerIdentity=true] Ensure we check server identity during SSL, set to false to disable checking. Only works for Node 0.12.x or higher. You can pass in a boolean or your own checkServerIdentity override function.
+ * @param {boolean} [options.sslValidate=true] Validate mongod server certificate against ca (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {array} [options.sslCA=null] Array of valid certificates either as Buffers or Strings (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {(Buffer|string)} [options.sslCert=null] String or buffer containing the certificate we wish to present (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {(Buffer|string)} [options.sslKey=null] String or buffer containing the certificate private key we wish to present (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {(Buffer|string)} [options.sslPass=null] String or buffer containing the certificate password (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {string} [options.servername=null] String containing the server name requested via TLS SNI.
+ * @param {object} [options.socketOptions=null] Socket options
+ * @param {boolean} [options.socketOptions.noDelay=true] TCP Socket NoDelay option.
+ * @param {number} [options.socketOptions.keepAlive=0] TCP KeepAlive on the socket with a X ms delay before start.
+ * @param {number} [options.socketOptions.connectTimeoutMS=10000] TCP Connection timeout setting
+ * @param {number} [options.socketOptions.socketTimeoutMS=0] TCP Socket timeout setting
+ * @param {boolean} [options.domainsEnabled=false] Enable the wrapping of the callback in the current domain, disabled by default to avoid perf hit.
+ * @fires ReplSet#connect
+ * @fires ReplSet#ha
+ * @fires ReplSet#joined
+ * @fires ReplSet#left
+ * @fires ReplSet#fullsetup
+ * @fires ReplSet#open
+ * @fires ReplSet#close
+ * @fires ReplSet#error
+ * @fires ReplSet#timeout
+ * @fires ReplSet#parseError
+ * @return {ReplSet} a ReplSet instance.
+ */
+var ReplSet = function(servers, options) {
+ if(!(this instanceof ReplSet)) return new ReplSet(servers, options);
+ options = options || {};
+ var self = this;
+ // Set up event emitter
+ EventEmitter.call(this);
+
+ // Filter the options
+ options = filterOptions(options, legalOptionNames);
+
+ // Ensure all the instances are Server
+ for(var i = 0; i < servers.length; i++) {
+ if(!(servers[i] instanceof Server)) {
+ throw MongoError.create({message: "all seed list instances must be of the Server type", driver:true});
+ }
+ }
+
+ // Stored options
+ var storeOptions = {
+ force: false
+ , bufferMaxEntries: typeof options.bufferMaxEntries == 'number' ? options.bufferMaxEntries : MAX_JS_INT
+ }
+
+ // Shared global store
+ var store = options.store || new Store(self, storeOptions);
+
+ // Build seed list
+ var seedlist = servers.map(function(x) {
+ return {host: x.host, port: x.port}
+ });
+
+ // Clone options
+ var clonedOptions = mergeOptions({}, {
+ disconnectHandler: store,
+ cursorFactory: Cursor,
+ reconnect: false,
+ emitError: typeof options.emitError == 'boolean' ? options.emitError : true,
+ size: typeof options.poolSize == 'number' ? options.poolSize : 5
+ });
+
+ // Translate any SSL options and other connectivity options
+ clonedOptions = translateOptions(clonedOptions, options);
+
+ // Socket options
+ var socketOptions = options.socketOptions && Object.keys(options.socketOptions).length > 0
+ ? options.socketOptions : options;
+
+ // Translate all the options to the mongodb-core ones
+ clonedOptions = translateOptions(clonedOptions, socketOptions);
+ if(typeof clonedOptions.keepAlive == 'number') {
+ clonedOptions.keepAliveInitialDelay = clonedOptions.keepAlive;
+ clonedOptions.keepAlive = clonedOptions.keepAlive > 0;
+ }
+
+ // Client info
+ this.clientInfo = {
+ driver: {
+ name: "nodejs",
+ version: driverVersion
+ },
+ os: {
+ type: type,
+ name: name,
+ architecture: architecture,
+ version: release
+ },
+ platform: nodejsversion
+ }
+
+ // Build default client information
+ clonedOptions.clientInfo = this.clientInfo;
+ // Do we have an application specific string
+ if(options.appname) {
+ clonedOptions.clientInfo.application = { name: options.appname };
+ }
+
+ // Create the ReplSet
+ var replset = new CReplSet(seedlist, clonedOptions);
+ // Server capabilities
+ var sCapabilities = null;
+
+ // Listen to reconnect event
+ replset.on('reconnect', function() {
+ self.emit('reconnect');
+ store.execute();
+ });
+
+ // Internal state
+ this.s = {
+ // Replicaset
+ replset: replset
+ // Server capabilities
+ , sCapabilities: null
+ // Debug tag
+ , tag: options.tag
+ // Store options
+ , storeOptions: storeOptions
+ // Cloned options
+ , clonedOptions: clonedOptions
+ // Store
+ , store: store
+ // Options
+ , options: options
+ }
+
+ // Debug
+ if(clonedOptions.debug) {
+ // Last ismaster
+ Object.defineProperty(this, 'replset', {
+ enumerable:true, get: function() { return replset; }
+ });
+ }
+}
+
+/**
+ * @ignore
+ */
+inherits(ReplSet, EventEmitter);
+
+// Last ismaster
+Object.defineProperty(ReplSet.prototype, 'isMasterDoc', {
+ enumerable:true, get: function() { return this.s.replset.lastIsMaster(); }
+});
+
+// BSON property
+Object.defineProperty(ReplSet.prototype, 'bson', {
+ enumerable: true, get: function() {
+ return this.s.replset.s.bson;
+ }
+});
+
+Object.defineProperty(ReplSet.prototype, 'haInterval', {
+ enumerable:true, get: function() { return this.s.replset.s.haInterval; }
+});
+
+var define = ReplSet.define = new Define('ReplSet', ReplSet, false);
+
+// Ensure the right read Preference object
+var translateReadPreference = function(options) {
+ if(typeof options.readPreference == 'string') {
+ options.readPreference = new CoreReadPreference(options.readPreference);
+ } else if(options.readPreference instanceof ReadPreference) {
+ options.readPreference = new CoreReadPreference(options.readPreference.mode
+ , options.readPreference.tags, {maxStalenessMS: options.readPreference.maxStalenessMS});
+ }
+
+ return options;
+}
+
+// Connect method
+ReplSet.prototype.connect = function(db, _options, callback) {
+ var self = this;
+ if('function' === typeof _options) callback = _options, _options = {};
+ if(_options == null) _options = {};
+ if(!('function' === typeof callback)) callback = null;
+ self.s.options = _options;
+
+ // Update bufferMaxEntries
+ self.s.storeOptions.bufferMaxEntries = db.bufferMaxEntries;
+
+ // Actual handler
+ var errorHandler = function(event) {
+ return function(err) {
+ if(event != 'error') {
+ self.emit(event, err);
+ }
+ }
+ }
+
+ // Connect handler
+ var connectHandler = function() {
+ // Clear out all the current handlers left over
+ ["timeout", "error", "close", 'serverOpening', 'serverDescriptionChanged', 'serverHeartbeatStarted',
+ 'serverHeartbeatSucceeded', 'serverHeartbeatFailed', 'serverClosed', 'topologyOpening',
+ 'topologyClosed', 'topologyDescriptionChanged'].forEach(function(e) {
+ self.s.replset.removeAllListeners(e);
+ });
+
+ // Set up listeners
+ self.s.replset.once('timeout', errorHandler('timeout'));
+ self.s.replset.once('error', errorHandler('error'));
+ self.s.replset.once('close', errorHandler('close'));
+
+ // relay the event
+ var relay = function(event) {
+ return function(t, server) {
+ self.emit(event, t, server);
+ }
+ }
+
+ // Replset events relay
+ var replsetRelay = function(event) {
+ return function(t, server) {
+ self.emit(event, t, server.lastIsMaster(), server);
+ }
+ }
+
+ // Relay ha
+ var relayHa = function(t, state) {
+ self.emit('ha', t, state);
+
+ if(t == 'start') {
+ self.emit('ha_connect', t, state);
+ } else if(t == 'end') {
+ self.emit('ha_ismaster', t, state);
+ }
+ }
+
+ // Set up serverConfig listeners
+ self.s.replset.on('joined', replsetRelay('joined'));
+ self.s.replset.on('left', relay('left'));
+ self.s.replset.on('ping', relay('ping'));
+ self.s.replset.on('ha', relayHa);
+
+ // Set up SDAM listeners
+ self.s.replset.on('serverDescriptionChanged', relay('serverDescriptionChanged'));
+ self.s.replset.on('serverHeartbeatStarted', relay('serverHeartbeatStarted'));
+ self.s.replset.on('serverHeartbeatSucceeded', relay('serverHeartbeatSucceeded'));
+ self.s.replset.on('serverHeartbeatFailed', relay('serverHeartbeatFailed'));
+ self.s.replset.on('serverOpening', relay('serverOpening'));
+ self.s.replset.on('serverClosed', relay('serverClosed'));
+ self.s.replset.on('topologyOpening', relay('topologyOpening'));
+ self.s.replset.on('topologyClosed', relay('topologyClosed'));
+ self.s.replset.on('topologyDescriptionChanged', relay('topologyDescriptionChanged'));
+
+ self.s.replset.on('fullsetup', function(topology) {
+ self.emit('fullsetup', null, self);
+ });
+
+ self.s.replset.on('all', function(topology) {
+ self.emit('all', null, self);
+ });
+
+ // Emit open event
+ self.emit('open', null, self);
+
+ // Return correctly
+ try {
+ callback(null, self);
+ } catch(err) {
+ process.nextTick(function() { throw err; })
+ }
+ }
+
+ // Error handler
+ var connectErrorHandler = function(event) {
+ return function(err) {
+ ['timeout', 'error', 'close'].forEach(function(e) {
+ self.s.replset.removeListener(e, connectErrorHandler);
+ });
+
+ self.s.replset.removeListener('connect', connectErrorHandler);
+ // Destroy the replset
+ self.s.replset.destroy();
+
+ // Try to callback
+ try {
+ callback(err);
+ } catch(err) {
+ if(!self.s.replset.isConnected())
+ process.nextTick(function() { throw err; })
+ }
+ }
+ }
+
+ // Set up listeners
+ self.s.replset.once('timeout', connectErrorHandler('timeout'));
+ self.s.replset.once('error', connectErrorHandler('error'));
+ self.s.replset.once('close', connectErrorHandler('close'));
+ self.s.replset.once('connect', connectHandler);
+
+ // Start connection
+ self.s.replset.connect(_options);
+}
+
+// Server capabilities
+ReplSet.prototype.capabilities = function() {
+ if(this.s.sCapabilities) return this.s.sCapabilities;
+ if(this.s.replset.lastIsMaster() == null) return null;
+ this.s.sCapabilities = new ServerCapabilities(this.s.replset.lastIsMaster());
+ return this.s.sCapabilities;
+}
+
+define.classMethod('capabilities', {callback: false, promise:false, returns: [ServerCapabilities]});
+
+// Command
+ReplSet.prototype.command = function(ns, cmd, options, callback) {
+ options = translateReadPreference(options);
+ this.s.replset.command(ns, cmd, options, callback);
+}
+
+define.classMethod('command', {callback: true, promise:false});
+
+// Insert
+ReplSet.prototype.insert = function(ns, ops, options, callback) {
+ this.s.replset.insert(ns, ops, options, callback);
+}
+
+define.classMethod('insert', {callback: true, promise:false});
+
+// Update
+ReplSet.prototype.update = function(ns, ops, options, callback) {
+ this.s.replset.update(ns, ops, options, callback);
+}
+
+define.classMethod('update', {callback: true, promise:false});
+
+// Remove
+ReplSet.prototype.remove = function(ns, ops, options, callback) {
+ this.s.replset.remove(ns, ops, options, callback);
+}
+
+define.classMethod('remove', {callback: true, promise:false});
+
+// Destroyed
+ReplSet.prototype.isDestroyed = function() {
+ return this.s.replset.isDestroyed();
+}
+
+// IsConnected
+ReplSet.prototype.isConnected = function() {
+ return this.s.replset.isConnected();
+}
+
+define.classMethod('isConnected', {callback: false, promise:false, returns: [Boolean]});
+
+// Cursor
+ReplSet.prototype.cursor = function(ns, cmd, options) {
+ options = translateReadPreference(options);
+ options.disconnectHandler = this.s.store;
+ return this.s.replset.cursor(ns, cmd, options);
+}
+
+define.classMethod('cursor', {callback: false, promise:false, returns: [Cursor, AggregationCursor, CommandCursor]});
+
+ReplSet.prototype.lastIsMaster = function() {
+ return this.s.replset.lastIsMaster();
+}
+
+ReplSet.prototype.close = function(forceClosed) {
+ var self = this;
+ this.s.replset.destroy();
+ // We need to wash out all stored processes
+ if(forceClosed == true) {
+ this.s.storeOptions.force = forceClosed;
+ this.s.store.flush();
+ }
+
+ var events = ['timeout', 'error', 'close', 'joined', 'left'];
+ events.forEach(function(e) {
+ self.removeAllListeners(e);
+ });
+}
+
+define.classMethod('close', {callback: false, promise:false});
+
+ReplSet.prototype.auth = function() {
+ var args = Array.prototype.slice.call(arguments, 0);
+ this.s.replset.auth.apply(this.s.replset, args);
+}
+
+define.classMethod('auth', {callback: true, promise:false});
+
+ReplSet.prototype.logout = function() {
+ var args = Array.prototype.slice.call(arguments, 0);
+ this.s.replset.logout.apply(this.s.replset, args);
+}
+
+define.classMethod('logout', {callback: true, promise:false});
+
+/**
+ * All raw connections
+ * @method
+ * @return {array}
+ */
+ReplSet.prototype.connections = function() {
+ return this.s.replset.connections();
+}
+
+define.classMethod('connections', {callback: false, promise:false, returns:[Array]});
+
+/**
+ * A replset connect event, used to verify that the connection is up and running
+ *
+ * @event ReplSet#connect
+ * @type {ReplSet}
+ */
+
+/**
+ * The replset high availability event
+ *
+ * @event ReplSet#ha
+ * @type {function}
+ * @param {string} type The stage in the high availability event (start|end)
+ * @param {boolean} data.norepeat Whether this is a repeating high availability process or a single execution only
+ * @param {number} data.id The id for this high availability request
+ * @param {object} data.state An object containing the information about the current replicaset
+ */
+
+/**
+ * A server member left the replicaset
+ *
+ * @event ReplSet#left
+ * @type {function}
+ * @param {string} type The type of member that left (primary|secondary|arbiter)
+ * @param {Server} server The server object that left
+ */
+
+/**
+ * A server member joined the replicaset
+ *
+ * @event ReplSet#joined
+ * @type {function}
+ * @param {string} type The type of member that joined (primary|secondary|arbiter)
+ * @param {Server} server The server object that joined
+ */
+
+/**
+ * ReplSet open event, emitted when replicaset can start processing commands.
+ *
+ * @event ReplSet#open
+ * @type {ReplSet}
+ */
+
+/**
+ * ReplSet fullsetup event, emitted when all servers in the topology have been connected to.
+ *
+ * @event ReplSet#fullsetup
+ * @type {ReplSet}
+ */
+
+/**
+ * ReplSet close event
+ *
+ * @event ReplSet#close
+ * @type {object}
+ */
+
+/**
+ * ReplSet error event, emitted if there is an error listener.
+ *
+ * @event ReplSet#error
+ * @type {MongoError}
+ */
+
+/**
+ * ReplSet timeout event
+ *
+ * @event ReplSet#timeout
+ * @type {object}
+ */
+
+/**
+ * ReplSet parseError event
+ *
+ * @event ReplSet#parseError
+ * @type {object}
+ */
+
+module.exports = ReplSet;
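Both the ReplSet constructor above and the Server constructor below perform the same translation of a numeric `keepAlive` option into the pair of values mongodb-core expects. A standalone sketch of that translation (hypothetical `translateKeepAlive` helper, extracted for illustration only):

```javascript
"use strict";

// Sketch of the keepAlive translation performed in the ReplSet and
// Server constructors: a numeric keepAlive is split into a boolean
// enable flag plus a keepAliveInitialDelay value.
function translateKeepAlive(options) {
  var cloned = Object.assign({}, options);
  if (typeof cloned.keepAlive === 'number') {
    cloned.keepAliveInitialDelay = cloned.keepAlive;
    cloned.keepAlive = cloned.keepAlive > 0;
  }
  return cloned;
}

// A positive number enables keep-alive with that initial delay
console.log(JSON.stringify(translateKeepAlive({ keepAlive: 300 })));

// Zero disables the probe while still recording the delay value
console.log(JSON.stringify(translateKeepAlive({ keepAlive: 0 })));

// A boolean (or absent) keepAlive passes through untouched
console.log(JSON.stringify(translateKeepAlive({ keepAlive: true })));
```

This preserves the legacy API, where `keepAlive` was a millisecond delay, on top of a core layer that wants an on/off flag and a separate delay.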
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/server.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/server.js
new file mode 100644
index 0000000..127675d
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/server.js
@@ -0,0 +1,507 @@
+"use strict";
+
+var EventEmitter = require('events').EventEmitter
+ , inherits = require('util').inherits
+ , CServer = require('mongodb-core').Server
+ , Cursor = require('./cursor')
+ , AggregationCursor = require('./aggregation_cursor')
+ , CommandCursor = require('./command_cursor')
+ , f = require('util').format
+ , ServerCapabilities = require('./topology_base').ServerCapabilities
+ , Store = require('./topology_base').Store
+ , Define = require('./metadata')
+ , MongoError = require('mongodb-core').MongoError
+ , shallowClone = require('./utils').shallowClone
+ , MAX_JS_INT = require('./utils').MAX_JS_INT
+ , translateOptions = require('./utils').translateOptions
+ , filterOptions = require('./utils').filterOptions
+ , mergeOptions = require('./utils').mergeOptions
+ , os = require('os');
+
+// Get package.json variable
+var driverVersion = require(__dirname + '/../package.json').version;
+var nodejsversion = f('Node.js %s, %s', process.version, os.endianness());
+var type = os.type();
+var name = process.platform;
+var architecture = process.arch;
+var release = os.release();
+
+/**
+ * @fileOverview The **Server** class represents a single server topology and is
+ * used to construct connections.
+ *
+ * **Server Should not be used, use MongoClient.connect**
+ * @example
+ * var Db = require('mongodb').Db,
+ * Server = require('mongodb').Server,
+ * test = require('assert');
+ * // Connect using single Server
+ * var db = new Db('test', new Server('localhost', 27017));
+ * db.open(function(err, db) {
+ * // Get an additional db
+ * db.close();
+ * });
+ */
+
+ // Allowed parameters
+ var legalOptionNames = ['ha', 'haInterval', 'acceptableLatencyMS'
+ , 'poolSize', 'ssl', 'checkServerIdentity', 'sslValidate'
+ , 'sslCA', 'sslCert', 'sslKey', 'sslPass', 'socketOptions', 'bufferMaxEntries'
+ , 'store', 'auto_reconnect', 'autoReconnect', 'emitError'
+ , 'keepAlive', 'noDelay', 'connectTimeoutMS', 'socketTimeoutMS'
+ , 'loggerLevel', 'logger', 'reconnectTries', 'reconnectInterval', 'monitoring'
+ , 'appname', 'domainsEnabled'
+ , 'servername', 'promoteLongs', 'promoteValues', 'promoteBuffers'];
+
+/**
+ * Creates a new Server instance
+ * @class
+ * @deprecated
+ * @param {string} host The host for the server, can be either an IP4, IP6 or domain socket style host.
+ * @param {number} [port] The server port if IP4.
+ * @param {object} [options=null] Optional settings.
+ * @param {number} [options.poolSize=5] Number of connections in the connection pool for each server instance, set to 5 as default for legacy reasons.
+ * @param {boolean} [options.ssl=false] Use ssl connection (needs to have a mongod server with ssl support)
+ * @param {boolean} [options.sslValidate=true] Validate mongod server certificate against ca (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {boolean|function} [options.checkServerIdentity=true] Ensure we check server identity during SSL, set to false to disable checking. Only works for Node 0.12.x or higher. You can pass in a boolean or your own checkServerIdentity override function.
+ * @param {array} [options.sslCA=null] Array of valid certificates either as Buffers or Strings (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {(Buffer|string)} [options.sslCert=null] String or buffer containing the certificate we wish to present (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {(Buffer|string)} [options.sslKey=null] String or buffer containing the certificate private key we wish to present (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {(Buffer|string)} [options.sslPass=null] String or buffer containing the certificate password (needs to have a mongod server with ssl support, 2.4 or higher)
+ * @param {string} [options.servername=null] String containing the server name requested via TLS SNI.
+ * @param {object} [options.socketOptions=null] Socket options
+ * @param {boolean} [options.socketOptions.autoReconnect=true] Reconnect on error.
+ * @param {boolean} [options.socketOptions.noDelay=true] TCP Socket NoDelay option.
+ * @param {number} [options.socketOptions.keepAlive=0] TCP KeepAlive on the socket with a X ms delay before start.
+ * @param {number} [options.socketOptions.connectTimeoutMS=0] TCP Connection timeout setting
+ * @param {number} [options.socketOptions.socketTimeoutMS=0] TCP Socket timeout setting
+ * @param {number} [options.reconnectTries=30] Server attempt to reconnect #times
+ * @param {number} [options.reconnectInterval=1000] Server will wait # milliseconds between retries
+ * @param {boolean} [options.monitoring=true] Triggers the server instance to call ismaster
+ * @param {number} [options.haInterval=10000] The interval of calling ismaster when monitoring is enabled.
+ * @param {boolean} [options.domainsEnabled=false] Enable the wrapping of the callback in the current domain, disabled by default to avoid perf hit.
+ * @fires Server#connect
+ * @fires Server#close
+ * @fires Server#error
+ * @fires Server#timeout
+ * @fires Server#parseError
+ * @fires Server#reconnect
+ * @return {Server} a Server instance.
+ */
+var Server = function(host, port, options) {
+ options = options || {};
+ if(!(this instanceof Server)) return new Server(host, port, options);
+ EventEmitter.call(this);
+ var self = this;
+
+ // Filter the options
+ options = filterOptions(options, legalOptionNames);
+
+ // Stored options
+ var storeOptions = {
+ force: false
+ , bufferMaxEntries: typeof options.bufferMaxEntries == 'number' ? options.bufferMaxEntries : MAX_JS_INT
+ }
+
+ // Shared global store
+ var store = options.store || new Store(self, storeOptions);
+
+ // Detect if we have a socket connection
+ if(host.indexOf('\/') != -1) {
+ if(port != null && typeof port == 'object') {
+ options = port;
+ port = null;
+ }
+ } else if(port == null) {
+ throw MongoError.create({message: 'port must be specified', driver:true});
+ }
+
+ // Get the reconnect option
+ var reconnect = typeof options.auto_reconnect == 'boolean' ? options.auto_reconnect : true;
+ reconnect = typeof options.autoReconnect == 'boolean' ? options.autoReconnect : reconnect;
+
+ // Clone options
+ var clonedOptions = mergeOptions({}, {
+ host: host, port: port, disconnectHandler: store,
+ cursorFactory: Cursor,
+ reconnect: reconnect,
+ emitError: typeof options.emitError == 'boolean' ? options.emitError : true,
+ size: typeof options.poolSize == 'number' ? options.poolSize : 5
+ });
+
+ // Translate any SSL options and other connectivity options
+ clonedOptions = translateOptions(clonedOptions, options);
+
+ // Socket options
+ var socketOptions = options.socketOptions && Object.keys(options.socketOptions).length > 0
+ ? options.socketOptions : options;
+
+ // Translate all the options to the mongodb-core ones
+ clonedOptions = translateOptions(clonedOptions, socketOptions);
+ if(typeof clonedOptions.keepAlive == 'number') {
+ clonedOptions.keepAliveInitialDelay = clonedOptions.keepAlive;
+ clonedOptions.keepAlive = clonedOptions.keepAlive > 0;
+ }
+
+ // Build default client information
+ this.clientInfo = {
+ driver: {
+ name: "nodejs",
+ version: driverVersion
+ },
+ os: {
+ type: type,
+ name: name,
+ architecture: architecture,
+ version: release
+ },
+ platform: nodejsversion
+ }
+
+ // Attach the client information to the cloned options
+ clonedOptions.clientInfo = this.clientInfo;
+ // Do we have an application specific string
+ if(options.appname) {
+ clonedOptions.clientInfo.application = { name: options.appname };
+ }
+
+ // Create an instance of a server instance from mongodb-core
+ var server = new CServer(clonedOptions);
+ // Server capabilities
+ var sCapabilities = null;
+
+ // Define the internal properties
+ this.s = {
+ // Create an instance of a server instance from mongodb-core
+ server: server
+ // Server capabilities
+ , sCapabilities: null
+ // Cloned options
+ , clonedOptions: clonedOptions
+ // Reconnect
+ , reconnect: clonedOptions.reconnect
+ // Emit error
+ , emitError: clonedOptions.emitError
+ // Pool size
+ , poolSize: clonedOptions.size
+ // Store Options
+ , storeOptions: storeOptions
+ // Store
+ , store: store
+ // Host
+ , host: host
+ // Port
+ , port: port
+ // Options
+ , options: options
+ }
+}
+
+inherits(Server, EventEmitter);
+
+var define = Server.define = new Define('Server', Server, false);
+
+// BSON property
+Object.defineProperty(Server.prototype, 'bson', {
+ enumerable: true, get: function() {
+ return this.s.server.s.bson;
+ }
+});
+
+// Last ismaster
+Object.defineProperty(Server.prototype, 'isMasterDoc', {
+ enumerable:true, get: function() {
+ return this.s.server.lastIsMaster();
+ }
+});
+
+// Current pool size
+Object.defineProperty(Server.prototype, 'poolSize', {
+ enumerable:true, get: function() { return this.s.server.connections().length; }
+});
+
+Object.defineProperty(Server.prototype, 'autoReconnect', {
+ enumerable:true, get: function() { return this.s.reconnect; }
+});
+
+Object.defineProperty(Server.prototype, 'host', {
+ enumerable:true, get: function() { return this.s.host; }
+});
+
+Object.defineProperty(Server.prototype, 'port', {
+ enumerable:true, get: function() { return this.s.port; }
+});
+
+// Connect
+Server.prototype.connect = function(db, _options, callback) {
+ var self = this;
+ if('function' === typeof _options) callback = _options, _options = {};
+ if(_options == null) _options = {};
+ if(!('function' === typeof callback)) callback = null;
+ self.s.options = _options;
+
+ // Update bufferMaxEntries
+ self.s.storeOptions.bufferMaxEntries = db.bufferMaxEntries;
+
+ // Error handler
+ var connectErrorHandler = function(event) {
+ return function(err) {
+ // Remove all event handlers
+ var events = ['timeout', 'error', 'close'];
+ events.forEach(function(e) {
+ self.s.server.removeListener(e, connectHandlers[e]);
+ });
+
+ self.s.server.removeListener('connect', connectErrorHandler);
+
+ // Try to callback
+ try {
+ callback(err);
+ } catch(err) {
+ process.nextTick(function() { throw err; })
+ }
+ }
+ }
+
+ // Actual handler
+ var errorHandler = function(event) {
+ return function(err) {
+ if(event != 'error') {
+ self.emit(event, err);
+ }
+ }
+ }
+
+ // Error handler
+ var reconnectHandler = function(err) {
+ self.emit('reconnect', self);
+ self.s.store.execute();
+ }
+
+ // Reconnect failed
+ var reconnectFailedHandler = function(err) {
+ self.emit('reconnectFailed', err);
+ self.s.store.flush(err);
+ }
+
+ // Destroy called on topology, perform cleanup
+ var destroyHandler = function() {
+ self.s.store.flush();
+ }
+
+ // Connect handler
+ var connectHandler = function() {
+ // Clear out all the current handlers left over
+ ["timeout", "error", "close", 'serverOpening', 'serverDescriptionChanged', 'serverHeartbeatStarted',
+ 'serverHeartbeatSucceeded', 'serverHeartbeatFailed', 'serverClosed', 'topologyOpening',
+ 'topologyClosed', 'topologyDescriptionChanged'].forEach(function(e) {
+ self.s.server.removeAllListeners(e);
+ });
+
+ // Set up listeners
+ self.s.server.once('timeout', errorHandler('timeout'));
+ self.s.server.once('error', errorHandler('error'));
+ self.s.server.on('close', errorHandler('close'));
+ // Only called on destroy
+ self.s.server.once('destroy', destroyHandler);
+
+ // relay the event
+ var relay = function(event) {
+ return function(t, server) {
+ self.emit(event, t, server);
+ }
+ }
+
+ // Set up SDAM listeners
+ self.s.server.on('serverDescriptionChanged', relay('serverDescriptionChanged'));
+ self.s.server.on('serverHeartbeatStarted', relay('serverHeartbeatStarted'));
+ self.s.server.on('serverHeartbeatSucceeded', relay('serverHeartbeatSucceeded'));
+ self.s.server.on('serverHeartbeatFailed', relay('serverHeartbeatFailed'));
+ self.s.server.on('serverOpening', relay('serverOpening'));
+ self.s.server.on('serverClosed', relay('serverClosed'));
+ self.s.server.on('topologyOpening', relay('topologyOpening'));
+ self.s.server.on('topologyClosed', relay('topologyClosed'));
+ self.s.server.on('topologyDescriptionChanged', relay('topologyDescriptionChanged'));
+ self.s.server.on('attemptReconnect', relay('attemptReconnect'));
+ self.s.server.on('monitoring', relay('monitoring'));
+
+ // Emit open event
+ self.emit('open', null, self);
+
+ // Return correctly
+ try {
+ callback(null, self);
+ } catch(err) {
+ console.log(err.stack)
+ process.nextTick(function() { throw err; })
+ }
+ }
+
+ // Set up listeners
+ var connectHandlers = {
+ timeout: connectErrorHandler('timeout'),
+ error: connectErrorHandler('error'),
+ close: connectErrorHandler('close')
+ };
+
+ // Add the event handlers
+ self.s.server.once('timeout', connectHandlers.timeout);
+ self.s.server.once('error', connectHandlers.error);
+ self.s.server.once('close', connectHandlers.close);
+ self.s.server.once('connect', connectHandler);
+ // Reconnect server
+ self.s.server.on('reconnect', reconnectHandler);
+ self.s.server.on('reconnectFailed', reconnectFailedHandler);
+
+ // Start connection
+ self.s.server.connect(_options);
+}
+
+// Server capabilities
+Server.prototype.capabilities = function() {
+ if(this.s.sCapabilities) return this.s.sCapabilities;
+ if(this.s.server.lastIsMaster() == null) return null;
+ this.s.sCapabilities = new ServerCapabilities(this.s.server.lastIsMaster());
+ return this.s.sCapabilities;
+}
+
+define.classMethod('capabilities', {callback: false, promise:false, returns: [ServerCapabilities]});
+
+// Command
+Server.prototype.command = function(ns, cmd, options, callback) {
+ this.s.server.command(ns, cmd, options, callback);
+}
+
+define.classMethod('command', {callback: true, promise:false});
+
+// Insert
+Server.prototype.insert = function(ns, ops, options, callback) {
+ this.s.server.insert(ns, ops, options, callback);
+}
+
+define.classMethod('insert', {callback: true, promise:false});
+
+// Update
+Server.prototype.update = function(ns, ops, options, callback) {
+ this.s.server.update(ns, ops, options, callback);
+}
+
+define.classMethod('update', {callback: true, promise:false});
+
+// Remove
+Server.prototype.remove = function(ns, ops, options, callback) {
+ this.s.server.remove(ns, ops, options, callback);
+}
+
+define.classMethod('remove', {callback: true, promise:false});
+
+// IsConnected
+Server.prototype.isConnected = function() {
+ return this.s.server.isConnected();
+}
+
+Server.prototype.isDestroyed = function() {
+ return this.s.server.isDestroyed();
+}
+
+define.classMethod('isConnected', {callback: false, promise:false, returns: [Boolean]});
+
+// Insert
+Server.prototype.cursor = function(ns, cmd, options) {
+ options.disconnectHandler = this.s.store;
+ return this.s.server.cursor(ns, cmd, options);
+}
+
+define.classMethod('cursor', {callback: false, promise:false, returns: [Cursor, AggregationCursor, CommandCursor]});
+
+Server.prototype.lastIsMaster = function() {
+ return this.s.server.lastIsMaster();
+}
+
+/**
+ * Unref all sockets
+ * @method
+ */
+Server.prototype.unref = function() {
+ this.s.server.unref();
+}
+
+Server.prototype.close = function(forceClosed) {
+ this.s.server.destroy();
+ // We need to wash out all stored processes
+ if(forceClosed == true) {
+ this.s.storeOptions.force = forceClosed;
+ this.s.store.flush();
+ }
+}
+
+define.classMethod('close', {callback: false, promise:false});
+
+Server.prototype.auth = function() {
+ var args = Array.prototype.slice.call(arguments, 0);
+ this.s.server.auth.apply(this.s.server, args);
+}
+
+define.classMethod('auth', {callback: true, promise:false});
+
+Server.prototype.logout = function() {
+ var args = Array.prototype.slice.call(arguments, 0);
+ this.s.server.logout.apply(this.s.server, args);
+}
+
+define.classMethod('logout', {callback: true, promise:false});
+
+/**
+ * All raw connections
+ * @method
+ * @return {array}
+ */
+Server.prototype.connections = function() {
+ return this.s.server.connections();
+}
+
+define.classMethod('connections', {callback: false, promise:false, returns:[Array]});
+
+/**
+ * Server connect event
+ *
+ * @event Server#connect
+ * @type {object}
+ */
+
+/**
+ * Server close event
+ *
+ * @event Server#close
+ * @type {object}
+ */
+
+/**
+ * Server reconnect event
+ *
+ * @event Server#reconnect
+ * @type {object}
+ */
+
+/**
+ * Server error event
+ *
+ * @event Server#error
+ * @type {MongoError}
+ */
+
+/**
+ * Server timeout event
+ *
+ * @event Server#timeout
+ * @type {object}
+ */
+
+/**
+ * Server parseError event
+ *
+ * @event Server#parseError
+ * @type {object}
+ */
+
+module.exports = Server;
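The Server constructor above folds a numeric `keepAlive` option into the two fields mongodb-core expects (a boolean flag plus an initial delay). A standalone sketch of that translation, using a hypothetical helper name that is not part of the driver:

```javascript
// Hypothetical sketch of the keepAlive translation done in the Server
// constructor: a numeric keepAlive becomes a boolean flag plus a
// keepAliveInitialDelay in milliseconds.
function translateKeepAlive(options) {
  var out = {};
  for (var k in options) out[k] = options[k];
  if (typeof out.keepAlive === 'number') {
    out.keepAliveInitialDelay = out.keepAlive;
    out.keepAlive = out.keepAlive > 0;
  }
  return out;
}

console.log(JSON.stringify(translateKeepAlive({ keepAlive: 300 })));
// → {"keepAlive":true,"keepAliveInitialDelay":300}
```

A `keepAlive` of `0` translates to `keepAlive: false`, disabling TCP keep-alive entirely rather than enabling it with no delay.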
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/topology_base.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/topology_base.js
new file mode 100644
index 0000000..ebfd616
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/topology_base.js
@@ -0,0 +1,191 @@
+"use strict";
+
+var MongoError = require('mongodb-core').MongoError
+ , f = require('util').format;
+
+// The store of ops
+var Store = function(topology, storeOptions) {
+ var self = this;
+ var storedOps = [];
+ storeOptions = storeOptions || {force:false, bufferMaxEntries: -1}
+
+ // Internal state
+ this.s = {
+ storedOps: storedOps
+ , storeOptions: storeOptions
+ , topology: topology
+ }
+
+ Object.defineProperty(this, 'length', {
+ enumerable:true, get: function() { return self.s.storedOps.length; }
+ });
+}
+
+Store.prototype.add = function(opType, ns, ops, options, callback) {
+ if(this.s.storeOptions.force) {
+ return callback(MongoError.create({message: "db closed by application", driver:true}));
+ }
+
+ if(this.s.storeOptions.bufferMaxEntries == 0) {
+ return callback(MongoError.create({message: f("no connection available for operation and number of stored operation > %s", this.s.storeOptions.bufferMaxEntries), driver:true }));
+ }
+
+ if(this.s.storeOptions.bufferMaxEntries > 0 && this.s.storedOps.length > this.s.storeOptions.bufferMaxEntries) {
+ while(this.s.storedOps.length > 0) {
+ var op = this.s.storedOps.shift();
+ op.c(MongoError.create({message: f("no connection available for operation and number of stored operation > %s", this.s.storeOptions.bufferMaxEntries), driver:true }));
+ }
+
+ return;
+ }
+
+ this.s.storedOps.push({t: opType, n: ns, o: ops, op: options, c: callback})
+}
+
+Store.prototype.addObjectAndMethod = function(opType, object, method, params, callback) {
+ if(this.s.storeOptions.force) {
+ return callback(MongoError.create({message: "db closed by application", driver:true }));
+ }
+
+ if(this.s.storeOptions.bufferMaxEntries == 0) {
+ return callback(MongoError.create({message: f("no connection available for operation and number of stored operation > %s", this.s.storeOptions.bufferMaxEntries), driver:true }));
+ }
+
+ if(this.s.storeOptions.bufferMaxEntries > 0 && this.s.storedOps.length > this.s.storeOptions.bufferMaxEntries) {
+ while(this.s.storedOps.length > 0) {
+ var op = this.s.storedOps.shift();
+ op.c(MongoError.create({message: f("no connection available for operation and number of stored operation > %s", this.s.storeOptions.bufferMaxEntries), driver:true }));
+ }
+
+ return;
+ }
+
+ this.s.storedOps.push({t: opType, m: method, o: object, p: params, c: callback})
+}
+
+Store.prototype.flush = function(err) {
+ while(this.s.storedOps.length > 0) {
+ this.s.storedOps.shift().c(err || MongoError.create({message: f("no connection available for operation"), driver:true }));
+ }
+}
+
+var primaryOptions = ['primary', 'primaryPreferred', 'nearest', 'secondaryPreferred'];
+var secondaryOptions = ['secondary', 'secondaryPreferred'];
+
+Store.prototype.execute = function(options) {
+ options = options || {};
+ // Get current ops
+ var ops = this.s.storedOps;
+ // Reset the ops
+ this.s.storedOps = [];
+
+ // Unpack options
+ var executePrimary = typeof options.executePrimary === 'boolean'
+ ? options.executePrimary : true;
+ var executeSecondary = typeof options.executeSecondary === 'boolean'
+ ? options.executeSecondary : true;
+
+ // Execute all the stored ops
+ while(ops.length > 0) {
+ var op = ops.shift();
+
+ if(op.t == 'cursor') {
+ if(executePrimary && executeSecondary) {
+ op.o[op.m].apply(op.o, op.p);
+ } else if(executePrimary && op.o.options
+ && op.o.options.readPreference
+ && primaryOptions.indexOf(op.o.options.readPreference.mode) != -1) {
+ op.o[op.m].apply(op.o, op.p);
+ } else if(!executePrimary && executeSecondary && op.o.options
+ && op.o.options.readPreference
+ && secondaryOptions.indexOf(op.o.options.readPreference.mode) != -1) {
+ op.o[op.m].apply(op.o, op.p);
+ }
+ } else if(op.t == 'auth') {
+ this.s.topology[op.t].apply(this.s.topology, op.o);
+ } else {
+ if(executePrimary && executeSecondary) {
+ this.s.topology[op.t](op.n, op.o, op.op, op.c);
+ } else if(executePrimary && op.op && op.op.readPreference
+ && primaryOptions.indexOf(op.op.readPreference.mode) != -1) {
+ this.s.topology[op.t](op.n, op.o, op.op, op.c);
+ } else if(!executePrimary && executeSecondary && op.op && op.op.readPreference
+ && secondaryOptions.indexOf(op.op.readPreference.mode) != -1) {
+ this.s.topology[op.t](op.n, op.o, op.op, op.c);
+ }
+ }
+ }
+}
+
+Store.prototype.all = function() {
+ return this.s.storedOps;
+}
+
+// Server capabilities
+var ServerCapabilities = function(ismaster) {
+ var setup_get_property = function(object, name, value) {
+ Object.defineProperty(object, name, {
+ enumerable: true
+ , get: function () { return value; }
+ });
+ }
+
+ // Capabilities
+ var aggregationCursor = false;
+ var writeCommands = false;
+ var textSearch = false;
+ var authCommands = false;
+ var listCollections = false;
+ var listIndexes = false;
+ var maxNumberOfDocsInBatch = ismaster.maxWriteBatchSize || 1000;
+ var commandsTakeWriteConcern = false;
+ var commandsTakeCollation = false;
+
+ if(ismaster.minWireVersion >= 0) {
+ textSearch = true;
+ }
+
+ if(ismaster.maxWireVersion >= 1) {
+ aggregationCursor = true;
+ authCommands = true;
+ }
+
+ if(ismaster.maxWireVersion >= 2) {
+ writeCommands = true;
+ }
+
+ if(ismaster.maxWireVersion >= 3) {
+ listCollections = true;
+ listIndexes = true;
+ }
+
+ if(ismaster.maxWireVersion >= 5) {
+ commandsTakeWriteConcern = true;
+ commandsTakeCollation = true;
+ }
+
+ // If no min or max wire version set to 0
+ if(ismaster.minWireVersion == null) {
+ ismaster.minWireVersion = 0;
+ }
+
+ if(ismaster.maxWireVersion == null) {
+ ismaster.maxWireVersion = 0;
+ }
+
+ // Map up read only parameters
+ setup_get_property(this, "hasAggregationCursor", aggregationCursor);
+ setup_get_property(this, "hasWriteCommands", writeCommands);
+ setup_get_property(this, "hasTextSearch", textSearch);
+ setup_get_property(this, "hasAuthCommands", authCommands);
+ setup_get_property(this, "hasListCollectionsCommand", listCollections);
+ setup_get_property(this, "hasListIndexesCommand", listIndexes);
+ setup_get_property(this, "minWireVersion", ismaster.minWireVersion);
+ setup_get_property(this, "maxWireVersion", ismaster.maxWireVersion);
+ setup_get_property(this, "maxNumberOfDocsInBatch", maxNumberOfDocsInBatch);
+ setup_get_property(this, "commandsTakeWriteConcern", commandsTakeWriteConcern);
+ setup_get_property(this, "commandsTakeCollation", commandsTakeCollation);
+}
+
+exports.Store = Store;
+exports.ServerCapabilities = ServerCapabilities;
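The capability flags in `ServerCapabilities` above are derived purely from the wire-version range reported in the ismaster document. A minimal sketch of that gating, as a hypothetical standalone function mirroring the checks (not the driver's own API):

```javascript
// Hypothetical sketch of the wire-version gating used by ServerCapabilities:
// each capability switches on once maxWireVersion reaches its threshold.
function capabilitiesFor(ismaster) {
  var max = ismaster.maxWireVersion == null ? 0 : ismaster.maxWireVersion;
  return {
    hasAggregationCursor: max >= 1,       // cursor-based aggregate
    hasWriteCommands: max >= 2,           // insert/update/delete commands
    hasListCollectionsCommand: max >= 3,  // listCollections/listIndexes
    commandsTakeCollation: max >= 5,      // collation support
    maxNumberOfDocsInBatch: ismaster.maxWriteBatchSize || 1000
  };
}
```

For example, a server reporting `maxWireVersion: 3` gets write commands and `listCollections`, but not collation support.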
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/url_parser.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/url_parser.js
new file mode 100644
index 0000000..663e5dc
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/url_parser.js
@@ -0,0 +1,406 @@
+"use strict";
+
+var ReadPreference = require('./read_preference'),
+ parser = require('url'),
+ f = require('util').format;
+
+module.exports = function(url, options) {
+ // Ensure we have a default options object if none set
+ options = options || {};
+ // Variables
+ var connection_part = '';
+ var auth_part = '';
+ var query_string_part = '';
+ var dbName = 'admin';
+
+ // Url parser result
+ var result = parser.parse(url, true);
+
+ if(result.protocol != 'mongodb:') {
+ throw new Error('invalid schema, expected mongodb');
+ }
+
+ if((result.hostname == null || result.hostname == '') && url.indexOf('.sock') == -1) {
+ throw new Error('no hostname or hostnames provided in connection string');
+ }
+
+ if(result.port == '0') {
+ throw new Error('invalid port (zero) with hostname');
+ }
+
+ if(!isNaN(parseInt(result.port, 10)) && parseInt(result.port, 10) > 65535) {
+ throw new Error('invalid port (larger than 65535) with hostname');
+ }
+
+ if(result.path
+ && result.path.length > 0
+ && result.path[0] != '/'
+ && url.indexOf('.sock') == -1) {
+ throw new Error('missing delimiting slash between hosts and options');
+ }
+
+ if(result.query) {
+ for(var name in result.query) {
+ if(name.indexOf('::') != -1) {
+ throw new Error('double colon in host identifier');
+ }
+
+ if(result.query[name] == '') {
+ throw new Error('query parameter ' + name + ' is an incomplete value pair');
+ }
+ }
+ }
+
+ if(result.auth) {
+ var parts = result.auth.split(':');
+ if(url.indexOf(result.auth) != -1 && parts.length > 2) {
+ throw new Error('Username with password containing an unescaped colon');
+ }
+
+ if(url.indexOf(result.auth) != -1 && result.auth.indexOf('@') != -1) {
+ throw new Error('Username containing an unescaped at-sign');
+ }
+ }
+
+ // Remove query
+ var clean = url.split('?').shift();
+
+ // Extract the list of hosts
+ var strings = clean.split(',');
+ var hosts = [];
+
+ for(var i = 0; i < strings.length; i++) {
+ var hostString = strings[i];
+
+ if(hostString.indexOf('mongodb') != -1) {
+ if(hostString.indexOf('@') != -1) {
+ hosts.push(hostString.split('@').pop())
+ } else {
+ hosts.push(hostString.substr('mongodb://'.length));
+ }
+ } else if(hostString.indexOf('/') != -1) {
+ hosts.push(hostString.split('/').shift());
+ } else if(hostString.indexOf('/') == -1) {
+ hosts.push(hostString.trim());
+ }
+ }
+
+ for(var i = 0; i < hosts.length; i++) {
+ var r = parser.parse(f('mongodb://%s', hosts[i].trim()));
+ if(r.path && r.path.indexOf(':') != -1) {
+ throw new Error('double colon in host identifier');
+ }
+ }
+
+ // If we have a ? mark cut the query elements off
+ if(url.indexOf("?") != -1) {
+ query_string_part = url.substr(url.indexOf("?") + 1);
+ connection_part = url.substring("mongodb://".length, url.indexOf("?"))
+ } else {
+ connection_part = url.substring("mongodb://".length);
+ }
+
+ // Check if we have auth params
+ if(connection_part.indexOf("@") != -1) {
+ auth_part = connection_part.split("@")[0];
+ connection_part = connection_part.split("@")[1];
+ }
+
+ // Check if the connection string has a db
+ if(connection_part.indexOf(".sock") != -1) {
+ if(connection_part.indexOf(".sock/") != -1) {
+ dbName = connection_part.split(".sock/")[1];
+ // Check if multiple database names provided, or just an illegal trailing slash
+ if (dbName.indexOf("/") != -1) {
+ if (dbName.split("/").length == 2 && dbName.split("/")[1].length == 0) {
+ throw new Error('Illegal trailing slash after database name');
+ }
+ throw new Error('More than 1 database name in URL');
+ }
+ connection_part = connection_part.split("/", connection_part.indexOf(".sock") + ".sock".length);
+ }
+ } else if(connection_part.indexOf("/") != -1) {
+ // Check if multiple database names provided, or just an illegal trailing slash
+ if (connection_part.split("/").length > 2) {
+ if (connection_part.split("/")[2].length == 0) {
+ throw new Error('Illegal trailing slash after database name');
+ }
+ throw new Error('More than 1 database name in URL');
+ }
+ throw new Error('More than 1 database name in URL');
+ }
+ dbName = connection_part.split("/")[1];
+ connection_part = connection_part.split("/")[0];
+ }
+
+ // Result object
+ var object = {};
+
+ // Pick apart the authentication part of the string
+ var authPart = auth_part || '';
+ var auth = authPart.split(':', 2);
+
+ // Decode the URI components
+ auth[0] = decodeURIComponent(auth[0]);
+ if(auth[1]){
+ auth[1] = decodeURIComponent(auth[1]);
+ }
+
+ // Add auth to final object if we have 2 elements
+ if(auth.length == 2) object.auth = {user: auth[0], password: auth[1]};
+
+ // Variables used for temporary storage
+ var hostPart;
+ var urlOptions;
+ var servers;
+ var serverOptions = {socketOptions: {}};
+ var dbOptions = {read_preference_tags: []};
+ var replSetServersOptions = {socketOptions: {}};
+ var mongosOptions = {socketOptions: {}};
+ // Add server options to final object
+ object.server_options = serverOptions;
+ object.db_options = dbOptions;
+ object.rs_options = replSetServersOptions;
+ object.mongos_options = mongosOptions;
+
+ // Let's check if we are using a domain socket
+ if(url.match(/\.sock/)) {
+ // Split out the socket part
+ var domainSocket = url.substring(
+ url.indexOf("mongodb://") + "mongodb://".length
+ , url.lastIndexOf(".sock") + ".sock".length);
+ // Clean out any auth stuff if any
+ if(domainSocket.indexOf("@") != -1) domainSocket = domainSocket.split("@")[1];
+ servers = [{domain_socket: domainSocket}];
+ } else {
+ // Split up the db
+ hostPart = connection_part;
+ // Deduplicate servers
+ var deduplicatedServers = {};
+
+ // Parse all server results
+ servers = hostPart.split(',').map(function(h) {
+ var _host, _port, ipv6match;
+ //check if it matches [IPv6]:port, where the port number is optional
+ if ((ipv6match = /\[([^\]]+)\](?:\:(.+))?/.exec(h))) {
+ _host = ipv6match[1];
+ _port = parseInt(ipv6match[2], 10) || 27017;
+ } else {
+ //otherwise assume it's IPv4, or plain hostname
+ var hostPort = h.split(':', 2);
+ _host = hostPort[0] || 'localhost';
+ _port = hostPort[1] != null ? parseInt(hostPort[1], 10) : 27017;
+ // Check for localhost?safe=true style case
+ if(_host.indexOf("?") != -1) _host = _host.split(/\?/)[0];
+ }
+
+ // No entry returned for duplicate server
+ if(deduplicatedServers[_host + "_" + _port]) return null;
+ deduplicatedServers[_host + "_" + _port] = 1;
+
+ // Return the mapped object
+ return {host: _host, port: _port};
+ }).filter(function(x) {
+ return x != null;
+ });
+ }
+
+ // Get the db name
+ object.dbName = dbName || 'admin';
+ // Split up all the options
+ urlOptions = (query_string_part || '').split(/[&;]/);
+ // Ugh, we have to figure out which options go to which constructor manually.
+ urlOptions.forEach(function(opt) {
+ if(!opt) return;
+ var splitOpt = opt.split('='), name = splitOpt[0], value = splitOpt[1];
+ // Options implementations
+ switch(name) {
+ case 'slaveOk':
+ case 'slave_ok':
+ serverOptions.slave_ok = (value == 'true');
+ dbOptions.slaveOk = (value == 'true');
+ break;
+ case 'maxPoolSize':
+ case 'poolSize':
+ serverOptions.poolSize = parseInt(value, 10);
+ replSetServersOptions.poolSize = parseInt(value, 10);
+ break;
+ case 'appname':
+ object.appname = decodeURIComponent(value);
+ case 'autoReconnect':
+ case 'auto_reconnect':
+ serverOptions.auto_reconnect = (value == 'true');
+ break;
+ case 'minPoolSize':
+ throw new Error("minPoolSize not supported");
+ case 'maxIdleTimeMS':
+ throw new Error("maxIdleTimeMS not supported");
+ case 'waitQueueMultiple':
+ throw new Error("waitQueueMultiple not supported");
+ case 'waitQueueTimeoutMS':
+ throw new Error("waitQueueTimeoutMS not supported");
+ case 'uuidRepresentation':
+ throw new Error("uuidRepresentation not supported");
+ case 'ssl':
+ if(value == 'prefer') {
+ serverOptions.ssl = value;
+ replSetServersOptions.ssl = value;
+ mongosOptions.ssl = value;
+ break;
+ }
+ serverOptions.ssl = (value == 'true');
+ replSetServersOptions.ssl = (value == 'true');
+ mongosOptions.ssl = (value == 'true');
+ break;
+ case 'sslValidate':
+ serverOptions.sslValidate = (value == 'true');
+ replSetServersOptions.sslValidate = (value == 'true');
+ mongosOptions.sslValidate = (value == 'true');
+ break;
+ case 'replicaSet':
+ case 'rs_name':
+ replSetServersOptions.rs_name = value;
+ break;
+ case 'reconnectWait':
+ replSetServersOptions.reconnectWait = parseInt(value, 10);
+ break;
+ case 'retries':
+ replSetServersOptions.retries = parseInt(value, 10);
+ break;
+ case 'readSecondary':
+ case 'read_secondary':
+ replSetServersOptions.read_secondary = (value == 'true');
+ break;
+ case 'fsync':
+ dbOptions.fsync = (value == 'true');
+ break;
+ case 'journal':
+ dbOptions.j = (value == 'true');
+ break;
+ case 'safe':
+ dbOptions.safe = (value == 'true');
+ break;
+ case 'nativeParser':
+ case 'native_parser':
+ dbOptions.native_parser = (value == 'true');
+ break;
+ case 'readConcernLevel':
+ dbOptions.readConcern = {level: value};
+ break;
+ case 'connectTimeoutMS':
+ serverOptions.socketOptions.connectTimeoutMS = parseInt(value, 10);
+ replSetServersOptions.socketOptions.connectTimeoutMS = parseInt(value, 10);
+ mongosOptions.socketOptions.connectTimeoutMS = parseInt(value, 10);
+ break;
+ case 'socketTimeoutMS':
+ serverOptions.socketOptions.socketTimeoutMS = parseInt(value, 10);
+ replSetServersOptions.socketOptions.socketTimeoutMS = parseInt(value, 10);
+ mongosOptions.socketOptions.socketTimeoutMS = parseInt(value, 10);
+ break;
+ case 'w':
+ dbOptions.w = parseInt(value, 10);
+ if(isNaN(dbOptions.w)) dbOptions.w = value;
+ break;
+ case 'authSource':
+ dbOptions.authSource = value;
+ break;
+ case 'gssapiServiceName':
+ dbOptions.gssapiServiceName = value;
+ break;
+ case 'authMechanism':
+ if(value == 'GSSAPI') {
+ // If no password provided decode only the principal
+ if(object.auth == null) {
+ var urlDecodeAuthPart = decodeURIComponent(authPart);
+ if(urlDecodeAuthPart.indexOf("@") == -1) throw new Error("GSSAPI requires a provided principal");
+ object.auth = {user: urlDecodeAuthPart, password: null};
+ } else {
+ object.auth.user = decodeURIComponent(object.auth.user);
+ }
+ } else if(value == 'MONGODB-X509') {
+ object.auth = {user: decodeURIComponent(authPart)};
+ }
+
+ // Only support GSSAPI or MONGODB-CR for now
+ if(value != 'GSSAPI'
+ && value != 'MONGODB-X509'
+ && value != 'MONGODB-CR'
+ && value != 'DEFAULT'
+ && value != 'SCRAM-SHA-1'
+ && value != 'PLAIN')
+ throw new Error("only DEFAULT, GSSAPI, PLAIN, MONGODB-X509, SCRAM-SHA-1 or MONGODB-CR is supported by authMechanism");
+
+ // Authentication mechanism
+ dbOptions.authMechanism = value;
+ break;
+ case 'authMechanismProperties':
+ // Split up into key, value pairs
+ var values = value.split(',');
+ var o = {};
+ // For each value split into key, value
+ values.forEach(function(x) {
+ var v = x.split(':');
+ o[v[0]] = v[1];
+ });
+
+ // Set all authMechanismProperties
+ dbOptions.authMechanismProperties = o;
+ // Set the service name value
+ if(typeof o.SERVICE_NAME == 'string') dbOptions.gssapiServiceName = o.SERVICE_NAME;
+ if(typeof o.SERVICE_REALM == 'string') dbOptions.gssapiServiceRealm = o.SERVICE_REALM;
+ if(typeof o.CANONICALIZE_HOST_NAME == 'string') dbOptions.gssapiCanonicalizeHostName = o.CANONICALIZE_HOST_NAME == 'true' ? true : false;
+ break;
+ case 'wtimeoutMS':
+ dbOptions.wtimeout = parseInt(value, 10);
+ break;
+ case 'readPreference':
+ if(!ReadPreference.isValid(value)) throw new Error("readPreference must be either primary/primaryPreferred/secondary/secondaryPreferred/nearest");
+ dbOptions.readPreference = value;
+ break;
+ case 'maxStalenessMS':
+ dbOptions.maxStalenessMS = parseInt(value, 10);
+ break;
+ case 'readPreferenceTags':
+ // Decode the value
+ value = decodeURIComponent(value);
+ // Contains the tag object
+ var tagObject = {};
+ if(value == null || value == '') {
+ dbOptions.read_preference_tags.push(tagObject);
+ break;
+ }
+
+ // Split up the tags
+ var tags = value.split(/\,/);
+ for(var i = 0; i < tags.length; i++) {
+ var parts = tags[i].trim().split(/\:/);
+ tagObject[parts[0]] = parts[1];
+ }
+
+ // Set the preferences tags
+ dbOptions.read_preference_tags.push(tagObject);
+ break;
+ default:
+ break;
+ }
+ });
+
+ // No tags: should be null (not [])
+ if(dbOptions.read_preference_tags.length === 0) {
+ dbOptions.read_preference_tags = null;
+ }
+
+ // Validate if there are an invalid write concern combinations
+ if((dbOptions.w == -1 || dbOptions.w == 0) && (
+ dbOptions.journal == true
+ || dbOptions.fsync == true
+ || dbOptions.safe == true)) throw new Error("w set to -1 or 0 cannot be combined with safe/w/journal/fsync")
+
+ // If no read preference set it to primary
+ if(!dbOptions.readPreference) {
+ dbOptions.readPreference = 'primary';
+ }
+
+ // Add servers to result
+ object.servers = servers;
+ // Returned parsed object
+ return object;
+}
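The `readPreferenceTags` branch of the url parser above splits a decoded value such as `dc:ny,rack:1` into a tag object. A standalone sketch of that parsing, with a hypothetical helper name:

```javascript
// Hypothetical sketch of the readPreferenceTags parsing in the url parser:
// "dc:ny,rack:1" becomes { dc: 'ny', rack: '1' }.
function parseReadPreferenceTags(value) {
  var tagObject = {};
  if (value == null || value === '') return tagObject;
  value.split(',').forEach(function(tag) {
    // Each tag is a key:value pair; values stay as strings.
    var parts = tag.trim().split(':');
    tagObject[parts[0]] = parts[1];
  });
  return tagObject;
}

console.log(JSON.stringify(parseReadPreferenceTags('dc:ny,rack:1')));
// → {"dc":"ny","rack":"1"}
```

An empty value yields an empty tag object, matching the parser's behavior of pushing `{}` when the query parameter has no content.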
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/utils.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/utils.js
new file mode 100644
index 0000000..fcadf8f
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/lib/utils.js
@@ -0,0 +1,313 @@
+"use strict";
+
+var MongoError = require('mongodb-core').MongoError,
+ f = require('util').format;
+
+var shallowClone = function(obj) {
+ var copy = {};
+ for(var name in obj) copy[name] = obj[name];
+ return copy;
+}
+
+// Set simple property
+var getSingleProperty = function(obj, name, value) {
+ Object.defineProperty(obj, name, {
+ enumerable:true,
+ get: function() {
+ return value
+ }
+ });
+}
+
+var formatSortValue = exports.formatSortValue = function(sortDirection) {
+ var value = ("" + sortDirection).toLowerCase();
+
+ switch (value) {
+ case 'ascending':
+ case 'asc':
+ case '1':
+ return 1;
+ case 'descending':
+ case 'desc':
+ case '-1':
+ return -1;
+ default:
+ throw new Error("Illegal sort clause, must be of the form "
+ + "[['field1', '(ascending|descending)'], "
+ + "['field2', '(ascending|descending)']]");
+ }
+};
+
+var formattedOrderClause = exports.formattedOrderClause = function(sortValue) {
+ var orderBy = {};
+ if(sortValue == null) return null;
+ if (Array.isArray(sortValue)) {
+ if(sortValue.length === 0) {
+ return null;
+ }
+
+ for(var i = 0; i < sortValue.length; i++) {
+ if(sortValue[i].constructor == String) {
+ orderBy[sortValue[i]] = 1;
+ } else {
+ orderBy[sortValue[i][0]] = formatSortValue(sortValue[i][1]);
+ }
+ }
+ } else if(sortValue != null && typeof sortValue == 'object') {
+ orderBy = sortValue;
+ } else if (typeof sortValue == 'string') {
+ orderBy[sortValue] = 1;
+ } else {
+ throw new Error("Illegal sort clause, must be of the form " +
+ "[['field1', '(ascending|descending)'], ['field2', '(ascending|descending)']]");
+ }
+
+ return orderBy;
+};
+
+var checkCollectionName = function checkCollectionName (collectionName) {
+ if('string' !== typeof collectionName) {
+ throw Error("collection name must be a String");
+ }
+
+ if(!collectionName || collectionName.indexOf('..') != -1) {
+ throw Error("collection names cannot be empty");
+ }
+
+ if(collectionName.indexOf('$') != -1 &&
+ collectionName.match(/((^\$cmd)|(oplog\.\$main))/) == null) {
+ throw Error("collection names must not contain '$'");
+ }
+
+ if(collectionName.match(/^\.|\.$/) != null) {
+ throw Error("collection names must not start or end with '.'");
+ }
+
+ // Validate that we are not passing 0x00 in the collection name
+ if(!!~collectionName.indexOf("\x00")) {
+ throw new Error("collection names cannot contain a null character");
+ }
+};
+
+var handleCallback = function(callback, err, value1, value2) {
+ try {
+ if(callback == null) return;
+ if(callback) {
+ return value2 ? callback(err, value1, value2) : callback(err, value1);
+ }
+ } catch(err) {
+ process.nextTick(function() { throw err; });
+ return false;
+ }
+
+ return true;
+}
+
+/**
+ * Wrap a Mongo error document in an Error instance
+ * @ignore
+ * @api private
+ */
+var toError = function(error) {
+ if (error instanceof Error) return error;
+
+ var msg = error.err || error.errmsg || error.errMessage || error;
+ var e = MongoError.create({message: msg, driver:true});
+
+ // Get all object keys
+ var keys = typeof error == 'object'
+ ? Object.keys(error)
+ : [];
+
+ for(var i = 0; i < keys.length; i++) {
+ try {
+ e[keys[i]] = error[keys[i]];
+ } catch(err) {
+ // continue
+ }
+ }
+
+ return e;
+}
+
+/**
+ * @ignore
+ */
+var normalizeHintField = function normalizeHintField(hint) {
+ var finalHint = null;
+
+ if(typeof hint == 'string') {
+ finalHint = hint;
+ } else if(Array.isArray(hint)) {
+ finalHint = {};
+
+ hint.forEach(function(param) {
+ finalHint[param] = 1;
+ });
+ } else if(hint != null && typeof hint == 'object') {
+ finalHint = {};
+ for (var name in hint) {
+ finalHint[name] = hint[name];
+ }
+ }
+
+ return finalHint;
+};
+
+/**
+ * Create index name based on field spec
+ *
+ * @ignore
+ * @api private
+ */
+var parseIndexOptions = function(fieldOrSpec) {
+ var fieldHash = {};
+ var indexes = [];
+ var keys;
+
+ // Get all the fields accordingly
+ if('string' == typeof fieldOrSpec) {
+ // 'type'
+ indexes.push(fieldOrSpec + '_' + 1);
+ fieldHash[fieldOrSpec] = 1;
+ } else if(Array.isArray(fieldOrSpec)) {
+ fieldOrSpec.forEach(function(f) {
+ if('string' == typeof f) {
+ // [{location:'2d'}, 'type']
+ indexes.push(f + '_' + 1);
+ fieldHash[f] = 1;
+ } else if(Array.isArray(f)) {
+ // [['location', '2d'],['type', 1]]
+ indexes.push(f[0] + '_' + (f[1] || 1));
+ fieldHash[f[0]] = f[1] || 1;
+ } else if(isObject(f)) {
+ // [{location:'2d'}, {type:1}]
+ keys = Object.keys(f);
+ keys.forEach(function(k) {
+ indexes.push(k + '_' + f[k]);
+ fieldHash[k] = f[k];
+ });
+ } else {
+ // undefined (ignore)
+ }
+ });
+ } else if(isObject(fieldOrSpec)) {
+ // {location:'2d', type:1}
+ keys = Object.keys(fieldOrSpec);
+ keys.forEach(function(key) {
+ indexes.push(key + '_' + fieldOrSpec[key]);
+ fieldHash[key] = fieldOrSpec[key];
+ });
+ }
+
+ return {
+ name: indexes.join("_"), keys: keys, fieldHash: fieldHash
+ }
+}
+
+var isObject = exports.isObject = function (arg) {
+ return '[object Object]' == toString.call(arg)
+}
+
+var debugOptions = function(debugFields, options) {
+ var finaloptions = {};
+ debugFields.forEach(function(n) {
+ finaloptions[n] = options[n];
+ });
+
+ return finaloptions;
+}
+
+var decorateCommand = function(command, options, exclude) {
+ for(var name in options) {
+ if(exclude[name] == null) command[name] = options[name];
+ }
+
+ return command;
+}
+
+var mergeOptions = function(target, source) {
+ for(var name in source) {
+ target[name] = source[name];
+ }
+
+ return target;
+}
+
+// Merge options with translation
+var translateOptions = function(target, source) {
+ var translations = {
+ // SSL translation options
+ 'sslCA': 'ca', 'sslValidate': 'rejectUnauthorized', 'sslKey': 'key', 'sslCert': 'cert', 'sslPass': 'passphrase',
+ // SocketTimeout translation options
+ 'socketTimeoutMS': 'socketTimeout', 'connectTimeoutMS': 'connectionTimeout',
+ // Replicaset options
+ 'replicaSet': 'setName', 'rs_name': 'setName', 'secondaryAcceptableLatencyMS': 'acceptableLatency',
+ 'connectWithNoPrimary': 'secondaryOnlyConnectionAllowed',
+ // Mongos options
+ 'acceptableLatencyMS': 'localThresholdMS'
+ }
+
+ for(var name in source) {
+ if(translations[name]) {
+ target[translations[name]] = source[name];
+ } else {
+ target[name] = source[name];
+ }
+ }
+
+ return target;
+}
+
+var filterOptions = function(options, names) {
+ var filterOptions = {};
+
+ for(var name in options) {
+ if(names.indexOf(name) != -1) filterOptions[name] = options[name];
+ }
+
+ // Filtered options
+ return filterOptions;
+}
+
+ // Object.assign method or polyfill
+var assign = Object.assign ? Object.assign : function assign(target, firstSource) {
+ if (target === undefined || target === null) {
+ throw new TypeError('Cannot convert first argument to object');
+ }
+
+ var to = Object(target);
+ for (var i = 1; i < arguments.length; i++) {
+ var nextSource = arguments[i];
+ if (nextSource === undefined || nextSource === null) {
+ continue;
+ }
+
+ var keysArray = Object.keys(Object(nextSource));
+ for (var nextIndex = 0, len = keysArray.length; nextIndex < len; nextIndex++) {
+ var nextKey = keysArray[nextIndex];
+ var desc = Object.getOwnPropertyDescriptor(nextSource, nextKey);
+ if (desc !== undefined && desc.enumerable) {
+ to[nextKey] = nextSource[nextKey];
+ }
+ }
+ }
+ return to;
+}
+
+exports.filterOptions = filterOptions;
+exports.mergeOptions = mergeOptions;
+exports.translateOptions = translateOptions;
+exports.shallowClone = shallowClone;
+exports.getSingleProperty = getSingleProperty;
+exports.checkCollectionName = checkCollectionName;
+exports.toError = toError;
+exports.formattedOrderClause = formattedOrderClause;
+exports.parseIndexOptions = parseIndexOptions;
+exports.normalizeHintField = normalizeHintField;
+exports.handleCallback = handleCallback;
+exports.decorateCommand = decorateCommand;
+exports.isObject = isObject;
+exports.debugOptions = debugOptions;
+exports.MAX_JS_INT = 0x20000000000000;
+exports.assign = assign;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/CHANGELOG.md b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/CHANGELOG.md
new file mode 100644
index 0000000..cc8aa10
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/CHANGELOG.md
@@ -0,0 +1,65 @@
+# Master
+
+# 3.2.0
+
+* improve tamper resistance of Promise.all, Promise.race and
+ Promise.prototype.then (note, this isn't complete, but addresses an exception
+ when used with core-js; follow-up work will address it entirely)
+* remove spec incompatible then chaining fast-path
+* add eslint
+* update build deps
+
+# 3.1.2
+
+* fix node detection issues with NWJS/electron
+
+# 3.1.0
+
+* improve performance of Promise.all when it encounters a non-promise input
+* then/resolve tamper protection
+* reduce AST size of promise constructor, to facilitate more inlining
+* Update README.md with details about PhantomJS requirement for running tests
+* Mangle and compress the minified version
+
+# 3.0.1
+
+* no longer include dist/test in npm releases
+
+# 3.0.0
+
+* use nextTick() instead of setImmediate() to schedule microtasks with node 0.10. Later versions of
+ node are not affected, as they were already using nextTick(). Note that using nextTick() might
+ trigger a deprecation warning on 0.10, as described at https://github.com/cujojs/when/issues/410.
+ The reason nextTick() is preferred is that setImmediate() would schedule a macrotask
+ instead of a microtask and might result in different scheduling.
+ If needed, you can revert to the former behavior as follows:
+
+ var Promise = require('es6-promise').Promise;
+ Promise._setScheduler(setImmediate);
+
+# 2.3.0
+
+* #121: Ability to override the internal asap implementation
+* #120: Use an ascii character for an apostrophe, for source maps
+
+# 2.2.0
+
+* #116: Expose asap() and a way to override the scheduling mechanism on Promise
+* Lock to v0.2.3 of ember-cli
+
+# 2.1.1
+
+* Fix #100 via #105: tell browserify to ignore vertx require
+* Fix #101 via #102: "follow thenable state, not own state"
+
+# 2.1.0
+
+* ? (see the commit log)
+
+# 2.0.0
+
+* re-sync with RSVP. Many large performance improvements and bugfixes.
+
+# 1.0.0
+
+* first subset of RSVP
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/LICENSE b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/LICENSE
new file mode 100644
index 0000000..954ec59
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/LICENSE
@@ -0,0 +1,19 @@
+Copyright (c) 2014 Yehuda Katz, Tom Dale, Stefan Penner and contributors
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+of the Software, and to permit persons to whom the Software is furnished to do
+so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/README.md b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/README.md
new file mode 100644
index 0000000..16739ca
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/README.md
@@ -0,0 +1,74 @@
+# ES6-Promise (subset of [rsvp.js](https://github.com/tildeio/rsvp.js))
+
+This is a polyfill of the [ES6 Promise](http://people.mozilla.org/~jorendorff/es6-draft.html#sec-promise-constructor). The implementation is a subset of [rsvp.js](https://github.com/tildeio/rsvp.js) extracted by @jakearchibald. If you want extra features and more debugging options, check out the [full library](https://github.com/tildeio/rsvp.js).
+
+For API details and how to use promises, see the JavaScript Promises HTML5Rocks article.
+
+## Downloads
+
+* [es6-promise](https://raw.githubusercontent.com/stefanpenner/es6-promise/master/dist/es6-promise.js)
+* [es6-promise-min](https://raw.githubusercontent.com/stefanpenner/es6-promise/master/dist/es6-promise.min.js)
+
+## Node.js
+
+To install:
+
+```sh
+npm install es6-promise
+```
+
+To use:
+
+```js
+var Promise = require('es6-promise').Promise;
+```
+
+## Bower
+
+To install:
+
+```sh
+bower install es6-promise --save
+```
+
+
+## Usage in IE<9
+
+`catch` is a reserved word in IE<9, meaning `promise.catch(func)` throws a syntax error. To work around this, you can use a string to access the property, as shown in the following example.
+
+Note that most common minifiers already apply this transformation for you, making the resulting code safe for old browsers and production:
+
+```js
+promise['catch'](function(err) {
+ // ...
+});
+```
+
+Or use `.then` instead:
+
+```js
+promise.then(undefined, function(err) {
+ // ...
+});
+```
+
+## Auto-polyfill
+
+To polyfill the global environment (either in Node or in the browser via CommonJS) use the following code snippet:
+
+```js
+require('es6-promise').polyfill();
+```
+
+Notice that we don't assign the result of `polyfill()` to any variable. The `polyfill()` method patches the global environment (in this case, adding the global `Promise`) when called.
+
+## Building & Testing
+
+You will need to have PhantomJS installed globally in order to run the tests.
+
+`npm install -g phantomjs`
+
+* `npm run build` to build
+* `npm test` to run tests
+* `npm start` to run a build watcher, and webserver to test
+* `npm run test:server` for a testem test runner and watching builder
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/dist/es6-promise.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/dist/es6-promise.js
new file mode 100644
index 0000000..0755e9b
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/dist/es6-promise.js
@@ -0,0 +1,959 @@
+/*!
+ * @overview es6-promise - a tiny implementation of Promises/A+.
+ * @copyright Copyright (c) 2014 Yehuda Katz, Tom Dale, Stefan Penner and contributors (Conversion to ES6 API by Jake Archibald)
+ * @license Licensed under MIT license
+ * See https://raw.githubusercontent.com/jakearchibald/es6-promise/master/LICENSE
+ * @version 3.2.1
+ */
+
+(function() {
+ "use strict";
+ function lib$es6$promise$utils$$objectOrFunction(x) {
+ return typeof x === 'function' || (typeof x === 'object' && x !== null);
+ }
+
+ function lib$es6$promise$utils$$isFunction(x) {
+ return typeof x === 'function';
+ }
+
+ function lib$es6$promise$utils$$isMaybeThenable(x) {
+ return typeof x === 'object' && x !== null;
+ }
+
+ var lib$es6$promise$utils$$_isArray;
+ if (!Array.isArray) {
+ lib$es6$promise$utils$$_isArray = function (x) {
+ return Object.prototype.toString.call(x) === '[object Array]';
+ };
+ } else {
+ lib$es6$promise$utils$$_isArray = Array.isArray;
+ }
+
+ var lib$es6$promise$utils$$isArray = lib$es6$promise$utils$$_isArray;
+ var lib$es6$promise$asap$$len = 0;
+ var lib$es6$promise$asap$$vertxNext;
+ var lib$es6$promise$asap$$customSchedulerFn;
+
+ var lib$es6$promise$asap$$asap = function asap(callback, arg) {
+ lib$es6$promise$asap$$queue[lib$es6$promise$asap$$len] = callback;
+ lib$es6$promise$asap$$queue[lib$es6$promise$asap$$len + 1] = arg;
+ lib$es6$promise$asap$$len += 2;
+ if (lib$es6$promise$asap$$len === 2) {
+ // If len is 2, that means that we need to schedule an async flush.
+ // If additional callbacks are queued before the queue is flushed, they
+ // will be processed by this flush that we are scheduling.
+ if (lib$es6$promise$asap$$customSchedulerFn) {
+ lib$es6$promise$asap$$customSchedulerFn(lib$es6$promise$asap$$flush);
+ } else {
+ lib$es6$promise$asap$$scheduleFlush();
+ }
+ }
+ }
+
+ function lib$es6$promise$asap$$setScheduler(scheduleFn) {
+ lib$es6$promise$asap$$customSchedulerFn = scheduleFn;
+ }
+
+ function lib$es6$promise$asap$$setAsap(asapFn) {
+ lib$es6$promise$asap$$asap = asapFn;
+ }
+
+ var lib$es6$promise$asap$$browserWindow = (typeof window !== 'undefined') ? window : undefined;
+ var lib$es6$promise$asap$$browserGlobal = lib$es6$promise$asap$$browserWindow || {};
+ var lib$es6$promise$asap$$BrowserMutationObserver = lib$es6$promise$asap$$browserGlobal.MutationObserver || lib$es6$promise$asap$$browserGlobal.WebKitMutationObserver;
+ var lib$es6$promise$asap$$isNode = typeof self === 'undefined' && typeof process !== 'undefined' && {}.toString.call(process) === '[object process]';
+
+ // test for web worker but not in IE10
+ var lib$es6$promise$asap$$isWorker = typeof Uint8ClampedArray !== 'undefined' &&
+ typeof importScripts !== 'undefined' &&
+ typeof MessageChannel !== 'undefined';
+
+ // node
+ function lib$es6$promise$asap$$useNextTick() {
+ // node version 0.10.x displays a deprecation warning when nextTick is used recursively
+ // see https://github.com/cujojs/when/issues/410 for details
+ return function() {
+ process.nextTick(lib$es6$promise$asap$$flush);
+ };
+ }
+
+ // vertx
+ function lib$es6$promise$asap$$useVertxTimer() {
+ return function() {
+ lib$es6$promise$asap$$vertxNext(lib$es6$promise$asap$$flush);
+ };
+ }
+
+ function lib$es6$promise$asap$$useMutationObserver() {
+ var iterations = 0;
+ var observer = new lib$es6$promise$asap$$BrowserMutationObserver(lib$es6$promise$asap$$flush);
+ var node = document.createTextNode('');
+ observer.observe(node, { characterData: true });
+
+ return function() {
+ node.data = (iterations = ++iterations % 2);
+ };
+ }
+
+ // web worker
+ function lib$es6$promise$asap$$useMessageChannel() {
+ var channel = new MessageChannel();
+ channel.port1.onmessage = lib$es6$promise$asap$$flush;
+ return function () {
+ channel.port2.postMessage(0);
+ };
+ }
+
+ function lib$es6$promise$asap$$useSetTimeout() {
+ return function() {
+ setTimeout(lib$es6$promise$asap$$flush, 1);
+ };
+ }
+
+ var lib$es6$promise$asap$$queue = new Array(1000);
+ function lib$es6$promise$asap$$flush() {
+ for (var i = 0; i < lib$es6$promise$asap$$len; i+=2) {
+ var callback = lib$es6$promise$asap$$queue[i];
+ var arg = lib$es6$promise$asap$$queue[i+1];
+
+ callback(arg);
+
+ lib$es6$promise$asap$$queue[i] = undefined;
+ lib$es6$promise$asap$$queue[i+1] = undefined;
+ }
+
+ lib$es6$promise$asap$$len = 0;
+ }
+
+ function lib$es6$promise$asap$$attemptVertx() {
+ try {
+ var r = require;
+ var vertx = r('vertx');
+ lib$es6$promise$asap$$vertxNext = vertx.runOnLoop || vertx.runOnContext;
+ return lib$es6$promise$asap$$useVertxTimer();
+ } catch(e) {
+ return lib$es6$promise$asap$$useSetTimeout();
+ }
+ }
+
+ var lib$es6$promise$asap$$scheduleFlush;
+ // Decide what async method to use to trigger processing of queued callbacks:
+ if (lib$es6$promise$asap$$isNode) {
+ lib$es6$promise$asap$$scheduleFlush = lib$es6$promise$asap$$useNextTick();
+ } else if (lib$es6$promise$asap$$BrowserMutationObserver) {
+ lib$es6$promise$asap$$scheduleFlush = lib$es6$promise$asap$$useMutationObserver();
+ } else if (lib$es6$promise$asap$$isWorker) {
+ lib$es6$promise$asap$$scheduleFlush = lib$es6$promise$asap$$useMessageChannel();
+ } else if (lib$es6$promise$asap$$browserWindow === undefined && typeof require === 'function') {
+ lib$es6$promise$asap$$scheduleFlush = lib$es6$promise$asap$$attemptVertx();
+ } else {
+ lib$es6$promise$asap$$scheduleFlush = lib$es6$promise$asap$$useSetTimeout();
+ }
+ function lib$es6$promise$then$$then(onFulfillment, onRejection) {
+ var parent = this;
+
+ var child = new this.constructor(lib$es6$promise$$internal$$noop);
+
+ if (child[lib$es6$promise$$internal$$PROMISE_ID] === undefined) {
+ lib$es6$promise$$internal$$makePromise(child);
+ }
+
+ var state = parent._state;
+
+ if (state) {
+ var callback = arguments[state - 1];
+ lib$es6$promise$asap$$asap(function(){
+ lib$es6$promise$$internal$$invokeCallback(state, child, callback, parent._result);
+ });
+ } else {
+ lib$es6$promise$$internal$$subscribe(parent, child, onFulfillment, onRejection);
+ }
+
+ return child;
+ }
+ var lib$es6$promise$then$$default = lib$es6$promise$then$$then;
+ function lib$es6$promise$promise$resolve$$resolve(object) {
+ /*jshint validthis:true */
+ var Constructor = this;
+
+ if (object && typeof object === 'object' && object.constructor === Constructor) {
+ return object;
+ }
+
+ var promise = new Constructor(lib$es6$promise$$internal$$noop);
+ lib$es6$promise$$internal$$resolve(promise, object);
+ return promise;
+ }
+ var lib$es6$promise$promise$resolve$$default = lib$es6$promise$promise$resolve$$resolve;
+ var lib$es6$promise$$internal$$PROMISE_ID = Math.random().toString(36).substring(16);
+
+ function lib$es6$promise$$internal$$noop() {}
+
+ var lib$es6$promise$$internal$$PENDING = void 0;
+ var lib$es6$promise$$internal$$FULFILLED = 1;
+ var lib$es6$promise$$internal$$REJECTED = 2;
+
+ var lib$es6$promise$$internal$$GET_THEN_ERROR = new lib$es6$promise$$internal$$ErrorObject();
+
+ function lib$es6$promise$$internal$$selfFulfillment() {
+ return new TypeError("You cannot resolve a promise with itself");
+ }
+
+ function lib$es6$promise$$internal$$cannotReturnOwn() {
+ return new TypeError('A promises callback cannot return that same promise.');
+ }
+
+ function lib$es6$promise$$internal$$getThen(promise) {
+ try {
+ return promise.then;
+ } catch(error) {
+ lib$es6$promise$$internal$$GET_THEN_ERROR.error = error;
+ return lib$es6$promise$$internal$$GET_THEN_ERROR;
+ }
+ }
+
+ function lib$es6$promise$$internal$$tryThen(then, value, fulfillmentHandler, rejectionHandler) {
+ try {
+ then.call(value, fulfillmentHandler, rejectionHandler);
+ } catch(e) {
+ return e;
+ }
+ }
+
+ function lib$es6$promise$$internal$$handleForeignThenable(promise, thenable, then) {
+ lib$es6$promise$asap$$asap(function(promise) {
+ var sealed = false;
+ var error = lib$es6$promise$$internal$$tryThen(then, thenable, function(value) {
+ if (sealed) { return; }
+ sealed = true;
+ if (thenable !== value) {
+ lib$es6$promise$$internal$$resolve(promise, value);
+ } else {
+ lib$es6$promise$$internal$$fulfill(promise, value);
+ }
+ }, function(reason) {
+ if (sealed) { return; }
+ sealed = true;
+
+ lib$es6$promise$$internal$$reject(promise, reason);
+ }, 'Settle: ' + (promise._label || ' unknown promise'));
+
+ if (!sealed && error) {
+ sealed = true;
+ lib$es6$promise$$internal$$reject(promise, error);
+ }
+ }, promise);
+ }
+
+ function lib$es6$promise$$internal$$handleOwnThenable(promise, thenable) {
+ if (thenable._state === lib$es6$promise$$internal$$FULFILLED) {
+ lib$es6$promise$$internal$$fulfill(promise, thenable._result);
+ } else if (thenable._state === lib$es6$promise$$internal$$REJECTED) {
+ lib$es6$promise$$internal$$reject(promise, thenable._result);
+ } else {
+ lib$es6$promise$$internal$$subscribe(thenable, undefined, function(value) {
+ lib$es6$promise$$internal$$resolve(promise, value);
+ }, function(reason) {
+ lib$es6$promise$$internal$$reject(promise, reason);
+ });
+ }
+ }
+
+ function lib$es6$promise$$internal$$handleMaybeThenable(promise, maybeThenable, then) {
+ if (maybeThenable.constructor === promise.constructor &&
+ then === lib$es6$promise$then$$default &&
+ maybeThenable.constructor.resolve === lib$es6$promise$promise$resolve$$default) {
+ lib$es6$promise$$internal$$handleOwnThenable(promise, maybeThenable);
+ } else {
+ if (then === lib$es6$promise$$internal$$GET_THEN_ERROR) {
+ lib$es6$promise$$internal$$reject(promise, lib$es6$promise$$internal$$GET_THEN_ERROR.error);
+ } else if (then === undefined) {
+ lib$es6$promise$$internal$$fulfill(promise, maybeThenable);
+ } else if (lib$es6$promise$utils$$isFunction(then)) {
+ lib$es6$promise$$internal$$handleForeignThenable(promise, maybeThenable, then);
+ } else {
+ lib$es6$promise$$internal$$fulfill(promise, maybeThenable);
+ }
+ }
+ }
+
+ function lib$es6$promise$$internal$$resolve(promise, value) {
+ if (promise === value) {
+ lib$es6$promise$$internal$$reject(promise, lib$es6$promise$$internal$$selfFulfillment());
+ } else if (lib$es6$promise$utils$$objectOrFunction(value)) {
+ lib$es6$promise$$internal$$handleMaybeThenable(promise, value, lib$es6$promise$$internal$$getThen(value));
+ } else {
+ lib$es6$promise$$internal$$fulfill(promise, value);
+ }
+ }
+
+ function lib$es6$promise$$internal$$publishRejection(promise) {
+ if (promise._onerror) {
+ promise._onerror(promise._result);
+ }
+
+ lib$es6$promise$$internal$$publish(promise);
+ }
+
+ function lib$es6$promise$$internal$$fulfill(promise, value) {
+ if (promise._state !== lib$es6$promise$$internal$$PENDING) { return; }
+
+ promise._result = value;
+ promise._state = lib$es6$promise$$internal$$FULFILLED;
+
+ if (promise._subscribers.length !== 0) {
+ lib$es6$promise$asap$$asap(lib$es6$promise$$internal$$publish, promise);
+ }
+ }
+
+ function lib$es6$promise$$internal$$reject(promise, reason) {
+ if (promise._state !== lib$es6$promise$$internal$$PENDING) { return; }
+ promise._state = lib$es6$promise$$internal$$REJECTED;
+ promise._result = reason;
+
+ lib$es6$promise$asap$$asap(lib$es6$promise$$internal$$publishRejection, promise);
+ }
+
+ function lib$es6$promise$$internal$$subscribe(parent, child, onFulfillment, onRejection) {
+ var subscribers = parent._subscribers;
+ var length = subscribers.length;
+
+ parent._onerror = null;
+
+ subscribers[length] = child;
+ subscribers[length + lib$es6$promise$$internal$$FULFILLED] = onFulfillment;
+ subscribers[length + lib$es6$promise$$internal$$REJECTED] = onRejection;
+
+ if (length === 0 && parent._state) {
+ lib$es6$promise$asap$$asap(lib$es6$promise$$internal$$publish, parent);
+ }
+ }
+
+ function lib$es6$promise$$internal$$publish(promise) {
+ var subscribers = promise._subscribers;
+ var settled = promise._state;
+
+ if (subscribers.length === 0) { return; }
+
+ var child, callback, detail = promise._result;
+
+ for (var i = 0; i < subscribers.length; i += 3) {
+ child = subscribers[i];
+ callback = subscribers[i + settled];
+
+ if (child) {
+ lib$es6$promise$$internal$$invokeCallback(settled, child, callback, detail);
+ } else {
+ callback(detail);
+ }
+ }
+
+ promise._subscribers.length = 0;
+ }
+
+ function lib$es6$promise$$internal$$ErrorObject() {
+ this.error = null;
+ }
+
+ var lib$es6$promise$$internal$$TRY_CATCH_ERROR = new lib$es6$promise$$internal$$ErrorObject();
+
+ function lib$es6$promise$$internal$$tryCatch(callback, detail) {
+ try {
+ return callback(detail);
+ } catch(e) {
+ lib$es6$promise$$internal$$TRY_CATCH_ERROR.error = e;
+ return lib$es6$promise$$internal$$TRY_CATCH_ERROR;
+ }
+ }
+
+ function lib$es6$promise$$internal$$invokeCallback(settled, promise, callback, detail) {
+ var hasCallback = lib$es6$promise$utils$$isFunction(callback),
+ value, error, succeeded, failed;
+
+ if (hasCallback) {
+ value = lib$es6$promise$$internal$$tryCatch(callback, detail);
+
+ if (value === lib$es6$promise$$internal$$TRY_CATCH_ERROR) {
+ failed = true;
+ error = value.error;
+ value = null;
+ } else {
+ succeeded = true;
+ }
+
+ if (promise === value) {
+ lib$es6$promise$$internal$$reject(promise, lib$es6$promise$$internal$$cannotReturnOwn());
+ return;
+ }
+
+ } else {
+ value = detail;
+ succeeded = true;
+ }
+
+ if (promise._state !== lib$es6$promise$$internal$$PENDING) {
+ // noop
+ } else if (hasCallback && succeeded) {
+ lib$es6$promise$$internal$$resolve(promise, value);
+ } else if (failed) {
+ lib$es6$promise$$internal$$reject(promise, error);
+ } else if (settled === lib$es6$promise$$internal$$FULFILLED) {
+ lib$es6$promise$$internal$$fulfill(promise, value);
+ } else if (settled === lib$es6$promise$$internal$$REJECTED) {
+ lib$es6$promise$$internal$$reject(promise, value);
+ }
+ }
+
+ function lib$es6$promise$$internal$$initializePromise(promise, resolver) {
+ try {
+ resolver(function resolvePromise(value){
+ lib$es6$promise$$internal$$resolve(promise, value);
+ }, function rejectPromise(reason) {
+ lib$es6$promise$$internal$$reject(promise, reason);
+ });
+ } catch(e) {
+ lib$es6$promise$$internal$$reject(promise, e);
+ }
+ }
+
+ var lib$es6$promise$$internal$$id = 0;
+ function lib$es6$promise$$internal$$nextId() {
+ return lib$es6$promise$$internal$$id++;
+ }
+
+ function lib$es6$promise$$internal$$makePromise(promise) {
+ promise[lib$es6$promise$$internal$$PROMISE_ID] = lib$es6$promise$$internal$$id++;
+ promise._state = undefined;
+ promise._result = undefined;
+ promise._subscribers = [];
+ }
+
+ function lib$es6$promise$promise$all$$all(entries) {
+ return new lib$es6$promise$enumerator$$default(this, entries).promise;
+ }
+ var lib$es6$promise$promise$all$$default = lib$es6$promise$promise$all$$all;
+ function lib$es6$promise$promise$race$$race(entries) {
+ /*jshint validthis:true */
+ var Constructor = this;
+
+ if (!lib$es6$promise$utils$$isArray(entries)) {
+ return new Constructor(function(resolve, reject) {
+ reject(new TypeError('You must pass an array to race.'));
+ });
+ } else {
+ return new Constructor(function(resolve, reject) {
+ var length = entries.length;
+ for (var i = 0; i < length; i++) {
+ Constructor.resolve(entries[i]).then(resolve, reject);
+ }
+ });
+ }
+ }
+ var lib$es6$promise$promise$race$$default = lib$es6$promise$promise$race$$race;
+ function lib$es6$promise$promise$reject$$reject(reason) {
+ /*jshint validthis:true */
+ var Constructor = this;
+ var promise = new Constructor(lib$es6$promise$$internal$$noop);
+ lib$es6$promise$$internal$$reject(promise, reason);
+ return promise;
+ }
+ var lib$es6$promise$promise$reject$$default = lib$es6$promise$promise$reject$$reject;
+
+
+ function lib$es6$promise$promise$$needsResolver() {
+ throw new TypeError('You must pass a resolver function as the first argument to the promise constructor');
+ }
+
+ function lib$es6$promise$promise$$needsNew() {
+ throw new TypeError("Failed to construct 'Promise': Please use the 'new' operator, this object constructor cannot be called as a function.");
+ }
+
+ var lib$es6$promise$promise$$default = lib$es6$promise$promise$$Promise;
+ /**
+ Promise objects represent the eventual result of an asynchronous operation. The
+ primary way of interacting with a promise is through its `then` method, which
+ registers callbacks to receive either a promise's eventual value or the reason
+ why the promise cannot be fulfilled.
+
+ Terminology
+ -----------
+
+ - `promise` is an object or function with a `then` method whose behavior conforms to this specification.
+ - `thenable` is an object or function that defines a `then` method.
+ - `value` is any legal JavaScript value (including undefined, a thenable, or a promise).
+ - `exception` is a value that is thrown using the throw statement.
+ - `reason` is a value that indicates why a promise was rejected.
+ - `settled` the final resting state of a promise, fulfilled or rejected.
+
+ A promise can be in one of three states: pending, fulfilled, or rejected.
+
+ Promises that are fulfilled have a fulfillment value and are in the fulfilled
+ state. Promises that are rejected have a rejection reason and are in the
+ rejected state. A fulfillment value is never a thenable.
+
+ Promises can also be said to *resolve* a value. If this value is also a
+ promise, then the original promise's settled state will match the value's
+ settled state. So a promise that *resolves* a promise that rejects will
+ itself reject, and a promise that *resolves* a promise that fulfills will
+ itself fulfill.
+
+
+ Basic Usage:
+ ------------
+
+ ```js
+ var promise = new Promise(function(resolve, reject) {
+ // on success
+ resolve(value);
+
+ // on failure
+ reject(reason);
+ });
+
+ promise.then(function(value) {
+ // on fulfillment
+ }, function(reason) {
+ // on rejection
+ });
+ ```
+
+ Advanced Usage:
+ ---------------
+
+ Promises shine when abstracting away asynchronous interactions such as
+ `XMLHttpRequest`s.
+
+ ```js
+ function getJSON(url) {
+ return new Promise(function(resolve, reject){
+ var xhr = new XMLHttpRequest();
+
+ xhr.open('GET', url);
+ xhr.onreadystatechange = handler;
+ xhr.responseType = 'json';
+ xhr.setRequestHeader('Accept', 'application/json');
+ xhr.send();
+
+ function handler() {
+ if (this.readyState === this.DONE) {
+ if (this.status === 200) {
+ resolve(this.response);
+ } else {
+ reject(new Error('getJSON: `' + url + '` failed with status: [' + this.status + ']'));
+ }
+ }
+ };
+ });
+ }
+
+ getJSON('/posts.json').then(function(json) {
+ // on fulfillment
+ }, function(reason) {
+ // on rejection
+ });
+ ```
+
+ Unlike callbacks, promises are great composable primitives.
+
+ ```js
+ Promise.all([
+ getJSON('/posts'),
+ getJSON('/comments')
+ ]).then(function(values){
+ values[0] // => postsJSON
+ values[1] // => commentsJSON
+
+ return values;
+ });
+ ```
+
+ @class Promise
+ @param {function} resolver
+ Useful for tooling.
+ @constructor
+ */
+ function lib$es6$promise$promise$$Promise(resolver) {
+ this[lib$es6$promise$$internal$$PROMISE_ID] = lib$es6$promise$$internal$$nextId();
+ this._result = this._state = undefined;
+ this._subscribers = [];
+
+ if (lib$es6$promise$$internal$$noop !== resolver) {
+ typeof resolver !== 'function' && lib$es6$promise$promise$$needsResolver();
+ this instanceof lib$es6$promise$promise$$Promise ? lib$es6$promise$$internal$$initializePromise(this, resolver) : lib$es6$promise$promise$$needsNew();
+ }
+ }
+
+ lib$es6$promise$promise$$Promise.all = lib$es6$promise$promise$all$$default;
+ lib$es6$promise$promise$$Promise.race = lib$es6$promise$promise$race$$default;
+ lib$es6$promise$promise$$Promise.resolve = lib$es6$promise$promise$resolve$$default;
+ lib$es6$promise$promise$$Promise.reject = lib$es6$promise$promise$reject$$default;
+ lib$es6$promise$promise$$Promise._setScheduler = lib$es6$promise$asap$$setScheduler;
+ lib$es6$promise$promise$$Promise._setAsap = lib$es6$promise$asap$$setAsap;
+ lib$es6$promise$promise$$Promise._asap = lib$es6$promise$asap$$asap;
+
+ lib$es6$promise$promise$$Promise.prototype = {
+ constructor: lib$es6$promise$promise$$Promise,
+
+ /**
+ The primary way of interacting with a promise is through its `then` method,
+ which registers callbacks to receive either a promise's eventual value or the
+ reason why the promise cannot be fulfilled.
+
+ ```js
+ findUser().then(function(user){
+ // user is available
+ }, function(reason){
+ // user is unavailable, and you are given the reason why
+ });
+ ```
+
+ Chaining
+ --------
+
+ The return value of `then` is itself a promise. This second, 'downstream'
+ promise is resolved with the return value of the first promise's fulfillment
+ or rejection handler, or rejected if the handler throws an exception.
+
+ ```js
+ findUser().then(function (user) {
+ return user.name;
+ }, function (reason) {
+ return 'default name';
+ }).then(function (userName) {
+ // If `findUser` fulfilled, `userName` will be the user's name, otherwise it
+ // will be `'default name'`
+ });
+
+ findUser().then(function (user) {
+ throw new Error('Found user, but still unhappy');
+ }, function (reason) {
+ throw new Error('`findUser` rejected and we're unhappy');
+ }).then(function (value) {
+ // never reached
+ }, function (reason) {
+ // if `findUser` fulfilled, `reason` will be 'Found user, but still unhappy'.
+ // If `findUser` rejected, `reason` will be '`findUser` rejected and we're unhappy'.
+ });
+ ```
+ If the downstream promise does not specify a rejection handler, rejection reasons will be propagated further downstream.
+
+ ```js
+ findUser().then(function (user) {
+ throw new PedagogicalException('Upstream error');
+ }).then(function (value) {
+ // never reached
+ }).then(function (value) {
+ // never reached
+ }, function (reason) {
+ // The `PedagogicalException` is propagated all the way down to here
+ });
+ ```
+
+ Assimilation
+ ------------
+
+ Sometimes the value you want to propagate to a downstream promise can only be
+ retrieved asynchronously. This can be achieved by returning a promise in the
+ fulfillment or rejection handler. The downstream promise will then be pending
+ until the returned promise is settled. This is called *assimilation*.
+
+ ```js
+ findUser().then(function (user) {
+ return findCommentsByAuthor(user);
+ }).then(function (comments) {
+ // The user's comments are now available
+ });
+ ```
+
+ If the assimilated promise rejects, then the downstream promise will also reject.
+
+ ```js
+ findUser().then(function (user) {
+ return findCommentsByAuthor(user);
+ }).then(function (comments) {
+ // If `findCommentsByAuthor` fulfills, we'll have the value here
+ }, function (reason) {
+ // If `findCommentsByAuthor` rejects, we'll have the reason here
+ });
+ ```
+
+ Simple Example
+ --------------
+
+ Synchronous Example
+
+ ```javascript
+ var result;
+
+ try {
+ result = findResult();
+ // success
+ } catch(reason) {
+ // failure
+ }
+ ```
+
+ Errback Example
+
+ ```js
+ findResult(function(result, err){
+ if (err) {
+ // failure
+ } else {
+ // success
+ }
+ });
+ ```
+
+ Promise Example:
+
+ ```javascript
+ findResult().then(function(result){
+ // success
+ }, function(reason){
+ // failure
+ });
+ ```
+
+ Advanced Example
+ --------------
+
+ Synchronous Example
+
+ ```javascript
+ var author, books;
+
+ try {
+ author = findAuthor();
+ books = findBooksByAuthor(author);
+ // success
+ } catch(reason) {
+ // failure
+ }
+ ```
+
+ Errback Example
+
+ ```js
+
+ function foundBooks(books) {
+
+ }
+
+ function failure(reason) {
+
+ }
+
+ findAuthor(function(author, err){
+ if (err) {
+ failure(err);
+ // failure
+ } else {
+ try {
+ findBooksByAuthor(author, function(books, err) {
+ if (err) {
+ failure(err);
+ } else {
+ try {
+ foundBooks(books);
+ } catch(reason) {
+ failure(reason);
+ }
+ }
+ });
+ } catch(error) {
+ failure(error);
+ }
+ // success
+ }
+ });
+ ```
+
+ Promise Example:
+
+ ```javascript
+ findAuthor().
+ then(findBooksByAuthor).
+ then(function(books){
+ // found books
+ }).catch(function(reason){
+ // something went wrong
+ });
+ ```
+
+ @method then
+ @param {Function} onFulfilled
+ @param {Function} onRejected
+ Useful for tooling.
+ @return {Promise}
+ */
+ then: lib$es6$promise$then$$default,
+
+ /**
+ `catch` is simply sugar for `then(undefined, onRejection)` which makes it the same
+ as the catch block of a try/catch statement.
+
+ ```js
+ function findAuthor(){
+ throw new Error('couldn\'t find that author');
+ }
+
+ // synchronous
+ try {
+ findAuthor();
+ } catch(reason) {
+ // something went wrong
+ }
+
+ // async with promises
+ findAuthor().catch(function(reason){
+ // something went wrong
+ });
+ ```
+
+ @method catch
+ @param {Function} onRejection
+ Useful for tooling.
+ @return {Promise}
+ */
+ 'catch': function(onRejection) {
+ return this.then(null, onRejection);
+ }
+ };
+ var lib$es6$promise$enumerator$$default = lib$es6$promise$enumerator$$Enumerator;
+ function lib$es6$promise$enumerator$$Enumerator(Constructor, input) {
+ this._instanceConstructor = Constructor;
+ this.promise = new Constructor(lib$es6$promise$$internal$$noop);
+
+ if (!this.promise[lib$es6$promise$$internal$$PROMISE_ID]) {
+ lib$es6$promise$$internal$$makePromise(this.promise);
+ }
+
+ if (lib$es6$promise$utils$$isArray(input)) {
+ this._input = input;
+ this.length = input.length;
+ this._remaining = input.length;
+
+ this._result = new Array(this.length);
+
+ if (this.length === 0) {
+ lib$es6$promise$$internal$$fulfill(this.promise, this._result);
+ } else {
+ this.length = this.length || 0;
+ this._enumerate();
+ if (this._remaining === 0) {
+ lib$es6$promise$$internal$$fulfill(this.promise, this._result);
+ }
+ }
+ } else {
+ lib$es6$promise$$internal$$reject(this.promise, lib$es6$promise$enumerator$$validationError());
+ }
+ }
+
+ function lib$es6$promise$enumerator$$validationError() {
+ return new Error('Array Methods must be provided an Array');
+ }
+
+ lib$es6$promise$enumerator$$Enumerator.prototype._enumerate = function() {
+ var length = this.length;
+ var input = this._input;
+
+ for (var i = 0; this._state === lib$es6$promise$$internal$$PENDING && i < length; i++) {
+ this._eachEntry(input[i], i);
+ }
+ };
+
+ lib$es6$promise$enumerator$$Enumerator.prototype._eachEntry = function(entry, i) {
+ var c = this._instanceConstructor;
+ var resolve = c.resolve;
+
+ if (resolve === lib$es6$promise$promise$resolve$$default) {
+ var then = lib$es6$promise$$internal$$getThen(entry);
+
+ if (then === lib$es6$promise$then$$default &&
+ entry._state !== lib$es6$promise$$internal$$PENDING) {
+ this._settledAt(entry._state, i, entry._result);
+ } else if (typeof then !== 'function') {
+ this._remaining--;
+ this._result[i] = entry;
+ } else if (c === lib$es6$promise$promise$$default) {
+ var promise = new c(lib$es6$promise$$internal$$noop);
+ lib$es6$promise$$internal$$handleMaybeThenable(promise, entry, then);
+ this._willSettleAt(promise, i);
+ } else {
+ this._willSettleAt(new c(function(resolve) { resolve(entry); }), i);
+ }
+ } else {
+ this._willSettleAt(resolve(entry), i);
+ }
+ };
+
+ lib$es6$promise$enumerator$$Enumerator.prototype._settledAt = function(state, i, value) {
+ var promise = this.promise;
+
+ if (promise._state === lib$es6$promise$$internal$$PENDING) {
+ this._remaining--;
+
+ if (state === lib$es6$promise$$internal$$REJECTED) {
+ lib$es6$promise$$internal$$reject(promise, value);
+ } else {
+ this._result[i] = value;
+ }
+ }
+
+ if (this._remaining === 0) {
+ lib$es6$promise$$internal$$fulfill(promise, this._result);
+ }
+ };
+
+ lib$es6$promise$enumerator$$Enumerator.prototype._willSettleAt = function(promise, i) {
+ var enumerator = this;
+
+ lib$es6$promise$$internal$$subscribe(promise, undefined, function(value) {
+ enumerator._settledAt(lib$es6$promise$$internal$$FULFILLED, i, value);
+ }, function(reason) {
+ enumerator._settledAt(lib$es6$promise$$internal$$REJECTED, i, reason);
+ });
+ };
+ function lib$es6$promise$polyfill$$polyfill() {
+ var local;
+
+ if (typeof global !== 'undefined') {
+ local = global;
+ } else if (typeof self !== 'undefined') {
+ local = self;
+ } else {
+ try {
+ local = Function('return this')();
+ } catch (e) {
+ throw new Error('polyfill failed because global object is unavailable in this environment');
+ }
+ }
+
+ var P = local.Promise;
+
+ if (P && Object.prototype.toString.call(P.resolve()) === '[object Promise]' && !P.cast) {
+ return;
+ }
+
+ local.Promise = lib$es6$promise$promise$$default;
+ }
+ var lib$es6$promise$polyfill$$default = lib$es6$promise$polyfill$$polyfill;
+
+ var lib$es6$promise$umd$$ES6Promise = {
+ 'Promise': lib$es6$promise$promise$$default,
+ 'polyfill': lib$es6$promise$polyfill$$default
+ };
+
+ /* global define:true module:true window: true */
+ if (typeof define === 'function' && define['amd']) {
+ define(function() { return lib$es6$promise$umd$$ES6Promise; });
+ } else if (typeof module !== 'undefined' && module['exports']) {
+ module['exports'] = lib$es6$promise$umd$$ES6Promise;
+ } else if (typeof this !== 'undefined') {
+ this['ES6Promise'] = lib$es6$promise$umd$$ES6Promise;
+ }
+
+ lib$es6$promise$polyfill$$default();
+}).call(this);
+
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/dist/es6-promise.min.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/dist/es6-promise.min.js
new file mode 100644
index 0000000..13151c2
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/dist/es6-promise.min.js
@@ -0,0 +1,9 @@
+/*!
+ * @overview es6-promise - a tiny implementation of Promises/A+.
+ * @copyright Copyright (c) 2014 Yehuda Katz, Tom Dale, Stefan Penner and contributors (Conversion to ES6 API by Jake Archibald)
+ * @license Licensed under MIT license
+ * See https://raw.githubusercontent.com/jakearchibald/es6-promise/master/LICENSE
+ * @version 3.2.1
+ */
+
+(function(){"use strict";function t(t){return"function"==typeof t||"object"==typeof t&&null!==t}function e(t){return"function"==typeof t}function n(t){G=t}function r(t){Q=t}function o(){return function(){process.nextTick(a)}}function i(){return function(){B(a)}}function s(){var t=0,e=new X(a),n=document.createTextNode("");return e.observe(n,{characterData:!0}),function(){n.data=t=++t%2}}function u(){var t=new MessageChannel;return t.port1.onmessage=a,function(){t.port2.postMessage(0)}}function c(){return function(){setTimeout(a,1)}}function a(){for(var t=0;J>t;t+=2){var e=tt[t],n=tt[t+1];e(n),tt[t]=void 0,tt[t+1]=void 0}J=0}function f(){try{var t=require,e=t("vertx");return B=e.runOnLoop||e.runOnContext,i()}catch(n){return c()}}function l(t,e){var n=this,r=new this.constructor(p);void 0===r[rt]&&k(r);var o=n._state;if(o){var i=arguments[o-1];Q(function(){x(o,r,i,n._result)})}else E(n,r,t,e);return r}function h(t){var e=this;if(t&&"object"==typeof t&&t.constructor===e)return t;var n=new e(p);return g(n,t),n}function p(){}function _(){return new TypeError("You cannot resolve a promise with itself")}function d(){return new TypeError("A promises callback cannot return that same promise.")}function v(t){try{return t.then}catch(e){return ut.error=e,ut}}function y(t,e,n,r){try{t.call(e,n,r)}catch(o){return o}}function m(t,e,n){Q(function(t){var r=!1,o=y(n,e,function(n){r||(r=!0,e!==n?g(t,n):S(t,n))},function(e){r||(r=!0,j(t,e))},"Settle: "+(t._label||" unknown promise"));!r&&o&&(r=!0,j(t,o))},t)}function b(t,e){e._state===it?S(t,e._result):e._state===st?j(t,e._result):E(e,void 0,function(e){g(t,e)},function(e){j(t,e)})}function w(t,n,r){n.constructor===t.constructor&&r===et&&constructor.resolve===nt?b(t,n):r===ut?j(t,ut.error):void 0===r?S(t,n):e(r)?m(t,n,r):S(t,n)}function g(e,n){e===n?j(e,_()):t(n)?w(e,n,v(n)):S(e,n)}function A(t){t._onerror&&t._onerror(t._result),T(t)}function S(t,e){t._state===ot&&(t._result=e,t._state=it,0!==t._subscribers.length&&Q(T,t))}function 
j(t,e){t._state===ot&&(t._state=st,t._result=e,Q(A,t))}function E(t,e,n,r){var o=t._subscribers,i=o.length;t._onerror=null,o[i]=e,o[i+it]=n,o[i+st]=r,0===i&&t._state&&Q(T,t)}function T(t){var e=t._subscribers,n=t._state;if(0!==e.length){for(var r,o,i=t._result,s=0;si;i++)e.resolve(t[i]).then(n,r)}:function(t,e){e(new TypeError("You must pass an array to race."))})}function F(t){var e=this,n=new e(p);return j(n,t),n}function D(){throw new TypeError("You must pass a resolver function as the first argument to the promise constructor")}function K(){throw new TypeError("Failed to construct 'Promise': Please use the 'new' operator, this object constructor cannot be called as a function.")}function L(t){this[rt]=O(),this._result=this._state=void 0,this._subscribers=[],p!==t&&("function"!=typeof t&&D(),this instanceof L?C(this,t):K())}function N(t,e){this._instanceConstructor=t,this.promise=new t(p),this.promise[rt]||k(this.promise),I(e)?(this._input=e,this.length=e.length,this._remaining=e.length,this._result=new Array(this.length),0===this.length?S(this.promise,this._result):(this.length=this.length||0,this._enumerate(),0===this._remaining&&S(this.promise,this._result))):j(this.promise,U())}function U(){return new Error("Array Methods must be provided an Array")}function W(){var t;if("undefined"!=typeof global)t=global;else if("undefined"!=typeof self)t=self;else try{t=Function("return this")()}catch(e){throw new Error("polyfill failed because global object is unavailable in this environment")}var n=t.Promise;(!n||"[object Promise]"!==Object.prototype.toString.call(n.resolve())||n.cast)&&(t.Promise=pt)}var z;z=Array.isArray?Array.isArray:function(t){return"[object Array]"===Object.prototype.toString.call(t)};var B,G,H,I=z,J=0,Q=function(t,e){tt[J]=t,tt[J+1]=e,J+=2,2===J&&(G?G(a):H())},R="undefined"!=typeof window?window:void 0,V=R||{},X=V.MutationObserver||V.WebKitMutationObserver,Z="undefined"==typeof self&&"undefined"!=typeof process&&"[object 
process]"==={}.toString.call(process),$="undefined"!=typeof Uint8ClampedArray&&"undefined"!=typeof importScripts&&"undefined"!=typeof MessageChannel,tt=new Array(1e3);H=Z?o():X?s():$?u():void 0===R&&"function"==typeof require?f():c();var et=l,nt=h,rt=Math.random().toString(36).substring(16),ot=void 0,it=1,st=2,ut=new M,ct=new M,at=0,ft=Y,lt=q,ht=F,pt=L;L.all=ft,L.race=lt,L.resolve=nt,L.reject=ht,L._setScheduler=n,L._setAsap=r,L._asap=Q,L.prototype={constructor:L,then:et,"catch":function(t){return this.then(null,t)}};var _t=N;N.prototype._enumerate=function(){for(var t=this.length,e=this._input,n=0;this._state===ot&&t>n;n++)this._eachEntry(e[n],n)},N.prototype._eachEntry=function(t,e){var n=this._instanceConstructor,r=n.resolve;if(r===nt){var o=v(t);if(o===et&&t._state!==ot)this._settledAt(t._state,e,t._result);else if("function"!=typeof o)this._remaining--,this._result[e]=t;else if(n===pt){var i=new n(p);w(i,t,o),this._willSettleAt(i,e)}else this._willSettleAt(new n(function(e){e(t)}),e)}else this._willSettleAt(r(t),e)},N.prototype._settledAt=function(t,e,n){var r=this.promise;r._state===ot&&(this._remaining--,t===st?j(r,n):this._result[e]=n),0===this._remaining&&S(r,this._result)},N.prototype._willSettleAt=function(t,e){var n=this;E(t,void 0,function(t){n._settledAt(it,e,t)},function(t){n._settledAt(st,e,t)})};var dt=W,vt={Promise:pt,polyfill:dt};"function"==typeof define&&define.amd?define(function(){return vt}):"undefined"!=typeof module&&module.exports?module.exports=vt:"undefined"!=typeof this&&(this.ES6Promise=vt),dt()}).call(this);
\ No newline at end of file
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise.umd.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise.umd.js
new file mode 100644
index 0000000..5984f70
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise.umd.js
@@ -0,0 +1,18 @@
+import Promise from './es6-promise/promise';
+import polyfill from './es6-promise/polyfill';
+
+var ES6Promise = {
+ 'Promise': Promise,
+ 'polyfill': polyfill
+};
+
+/* global define:true module:true window: true */
+if (typeof define === 'function' && define['amd']) {
+ define(function() { return ES6Promise; });
+} else if (typeof module !== 'undefined' && module['exports']) {
+ module['exports'] = ES6Promise;
+} else if (typeof this !== 'undefined') {
+ this['ES6Promise'] = ES6Promise;
+}
+
+polyfill();
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/-internal.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/-internal.js
new file mode 100644
index 0000000..aeebf57
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/-internal.js
@@ -0,0 +1,273 @@
+import {
+ objectOrFunction,
+ isFunction
+} from './utils';
+
+import {
+ asap
+} from './asap';
+
+import originalThen from './then';
+import originalResolve from './promise/resolve';
+
+export var PROMISE_ID = Math.random().toString(36).substring(16);
+
+function noop() {}
+
+var PENDING = void 0;
+var FULFILLED = 1;
+var REJECTED = 2;
+
+var GET_THEN_ERROR = new ErrorObject();
+
+function selfFulfillment() {
+ return new TypeError("You cannot resolve a promise with itself");
+}
+
+function cannotReturnOwn() {
+ return new TypeError('A promises callback cannot return that same promise.');
+}
+
+function getThen(promise) {
+ try {
+ return promise.then;
+ } catch(error) {
+ GET_THEN_ERROR.error = error;
+ return GET_THEN_ERROR;
+ }
+}
+
+function tryThen(then, value, fulfillmentHandler, rejectionHandler) {
+ try {
+ then.call(value, fulfillmentHandler, rejectionHandler);
+ } catch(e) {
+ return e;
+ }
+}
+
+function handleForeignThenable(promise, thenable, then) {
+ asap(function(promise) {
+ var sealed = false;
+ var error = tryThen(then, thenable, function(value) {
+ if (sealed) { return; }
+ sealed = true;
+ if (thenable !== value) {
+ resolve(promise, value);
+ } else {
+ fulfill(promise, value);
+ }
+ }, function(reason) {
+ if (sealed) { return; }
+ sealed = true;
+
+ reject(promise, reason);
+ }, 'Settle: ' + (promise._label || ' unknown promise'));
+
+ if (!sealed && error) {
+ sealed = true;
+ reject(promise, error);
+ }
+ }, promise);
+}
+
+function handleOwnThenable(promise, thenable) {
+ if (thenable._state === FULFILLED) {
+ fulfill(promise, thenable._result);
+ } else if (thenable._state === REJECTED) {
+ reject(promise, thenable._result);
+ } else {
+ subscribe(thenable, undefined, function(value) {
+ resolve(promise, value);
+ }, function(reason) {
+ reject(promise, reason);
+ });
+ }
+}
+
+function handleMaybeThenable(promise, maybeThenable, then) {
+ if (maybeThenable.constructor === promise.constructor &&
+ then === originalThen &&
+ constructor.resolve === originalResolve) {
+ handleOwnThenable(promise, maybeThenable);
+ } else {
+ if (then === GET_THEN_ERROR) {
+ reject(promise, GET_THEN_ERROR.error);
+ } else if (then === undefined) {
+ fulfill(promise, maybeThenable);
+ } else if (isFunction(then)) {
+ handleForeignThenable(promise, maybeThenable, then);
+ } else {
+ fulfill(promise, maybeThenable);
+ }
+ }
+}
+
+function resolve(promise, value) {
+ if (promise === value) {
+ reject(promise, selfFulfillment());
+ } else if (objectOrFunction(value)) {
+ handleMaybeThenable(promise, value, getThen(value));
+ } else {
+ fulfill(promise, value);
+ }
+}
+
+function publishRejection(promise) {
+ if (promise._onerror) {
+ promise._onerror(promise._result);
+ }
+
+ publish(promise);
+}
+
+function fulfill(promise, value) {
+ if (promise._state !== PENDING) { return; }
+
+ promise._result = value;
+ promise._state = FULFILLED;
+
+ if (promise._subscribers.length !== 0) {
+ asap(publish, promise);
+ }
+}
+
+function reject(promise, reason) {
+ if (promise._state !== PENDING) { return; }
+ promise._state = REJECTED;
+ promise._result = reason;
+
+ asap(publishRejection, promise);
+}
+
+function subscribe(parent, child, onFulfillment, onRejection) {
+ var subscribers = parent._subscribers;
+ var length = subscribers.length;
+
+ parent._onerror = null;
+
+ subscribers[length] = child;
+ subscribers[length + FULFILLED] = onFulfillment;
+ subscribers[length + REJECTED] = onRejection;
+
+ if (length === 0 && parent._state) {
+ asap(publish, parent);
+ }
+}
+
+function publish(promise) {
+ var subscribers = promise._subscribers;
+ var settled = promise._state;
+
+ if (subscribers.length === 0) { return; }
+
+ var child, callback, detail = promise._result;
+
+ for (var i = 0; i < subscribers.length; i += 3) {
+ child = subscribers[i];
+ callback = subscribers[i + settled];
+
+ if (child) {
+ invokeCallback(settled, child, callback, detail);
+ } else {
+ callback(detail);
+ }
+ }
+
+ promise._subscribers.length = 0;
+}
+
+function ErrorObject() {
+ this.error = null;
+}
+
+var TRY_CATCH_ERROR = new ErrorObject();
+
+function tryCatch(callback, detail) {
+ try {
+ return callback(detail);
+ } catch(e) {
+ TRY_CATCH_ERROR.error = e;
+ return TRY_CATCH_ERROR;
+ }
+}
+
+function invokeCallback(settled, promise, callback, detail) {
+ var hasCallback = isFunction(callback),
+ value, error, succeeded, failed;
+
+ if (hasCallback) {
+ value = tryCatch(callback, detail);
+
+ if (value === TRY_CATCH_ERROR) {
+ failed = true;
+ error = value.error;
+ value = null;
+ } else {
+ succeeded = true;
+ }
+
+ if (promise === value) {
+ reject(promise, cannotReturnOwn());
+ return;
+ }
+
+ } else {
+ value = detail;
+ succeeded = true;
+ }
+
+ if (promise._state !== PENDING) {
+ // noop
+ } else if (hasCallback && succeeded) {
+ resolve(promise, value);
+ } else if (failed) {
+ reject(promise, error);
+ } else if (settled === FULFILLED) {
+ fulfill(promise, value);
+ } else if (settled === REJECTED) {
+ reject(promise, value);
+ }
+}
+
+function initializePromise(promise, resolver) {
+ try {
+ resolver(function resolvePromise(value){
+ resolve(promise, value);
+ }, function rejectPromise(reason) {
+ reject(promise, reason);
+ });
+ } catch(e) {
+ reject(promise, e);
+ }
+}
+
+var id = 0;
+function nextId() {
+ return id++;
+}
+
+function makePromise(promise) {
+ promise[PROMISE_ID] = id++;
+ promise._state = undefined;
+ promise._result = undefined;
+ promise._subscribers = [];
+}
+
+export {
+ nextId,
+ makePromise,
+ getThen,
+ noop,
+ resolve,
+ reject,
+ fulfill,
+ subscribe,
+ publish,
+ publishRejection,
+ initializePromise,
+ invokeCallback,
+ FULFILLED,
+ REJECTED,
+ PENDING,
+ handleMaybeThenable
+};
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/asap.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/asap.js
new file mode 100644
index 0000000..40f1d25
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/asap.js
@@ -0,0 +1,119 @@
+var len = 0;
+var vertxNext;
+var customSchedulerFn;
+
+export var asap = function asap(callback, arg) {
+ queue[len] = callback;
+ queue[len + 1] = arg;
+ len += 2;
+ if (len === 2) {
+ // If len is 2, that means that we need to schedule an async flush.
+ // If additional callbacks are queued before the queue is flushed, they
+ // will be processed by this flush that we are scheduling.
+ if (customSchedulerFn) {
+ customSchedulerFn(flush);
+ } else {
+ scheduleFlush();
+ }
+ }
+}
+
+export function setScheduler(scheduleFn) {
+ customSchedulerFn = scheduleFn;
+}
+
+export function setAsap(asapFn) {
+ asap = asapFn;
+}
+
+var browserWindow = (typeof window !== 'undefined') ? window : undefined;
+var browserGlobal = browserWindow || {};
+var BrowserMutationObserver = browserGlobal.MutationObserver || browserGlobal.WebKitMutationObserver;
+var isNode = typeof self === 'undefined' && typeof process !== 'undefined' && {}.toString.call(process) === '[object process]';
+
+// test for web worker but not in IE10
+var isWorker = typeof Uint8ClampedArray !== 'undefined' &&
+ typeof importScripts !== 'undefined' &&
+ typeof MessageChannel !== 'undefined';
+
+// node
+function useNextTick() {
+ // node version 0.10.x displays a deprecation warning when nextTick is used recursively
+ // see https://github.com/cujojs/when/issues/410 for details
+ return function() {
+ process.nextTick(flush);
+ };
+}
+
+// vertx
+function useVertxTimer() {
+ return function() {
+ vertxNext(flush);
+ };
+}
+
+function useMutationObserver() {
+ var iterations = 0;
+ var observer = new BrowserMutationObserver(flush);
+ var node = document.createTextNode('');
+ observer.observe(node, { characterData: true });
+
+ return function() {
+ node.data = (iterations = ++iterations % 2);
+ };
+}
+
+// web worker
+function useMessageChannel() {
+ var channel = new MessageChannel();
+ channel.port1.onmessage = flush;
+ return function () {
+ channel.port2.postMessage(0);
+ };
+}
+
+function useSetTimeout() {
+ return function() {
+ setTimeout(flush, 1);
+ };
+}
+
+var queue = new Array(1000);
+function flush() {
+ for (var i = 0; i < len; i+=2) {
+ var callback = queue[i];
+ var arg = queue[i+1];
+
+ callback(arg);
+
+ queue[i] = undefined;
+ queue[i+1] = undefined;
+ }
+
+ len = 0;
+}
+
+function attemptVertx() {
+ try {
+ var r = require;
+ var vertx = r('vertx');
+ vertxNext = vertx.runOnLoop || vertx.runOnContext;
+ return useVertxTimer();
+ } catch(e) {
+ return useSetTimeout();
+ }
+}
+
+var scheduleFlush;
+// Decide what async method to use to triggering processing of queued callbacks:
+if (isNode) {
+ scheduleFlush = useNextTick();
+} else if (BrowserMutationObserver) {
+ scheduleFlush = useMutationObserver();
+} else if (isWorker) {
+ scheduleFlush = useMessageChannel();
+} else if (browserWindow === undefined && typeof require === 'function') {
+ scheduleFlush = attemptVertx();
+} else {
+ scheduleFlush = useSetTimeout();
+}
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/enumerator.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/enumerator.js
new file mode 100644
index 0000000..2a7a28f
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/enumerator.js
@@ -0,0 +1,118 @@
+import {
+ isArray,
+ isMaybeThenable
+} from './utils';
+
+import {
+ noop,
+ reject,
+ fulfill,
+ subscribe,
+ FULFILLED,
+ REJECTED,
+ PENDING,
+ getThen,
+ handleMaybeThenable
+} from './-internal';
+
+import then from './then';
+import Promise from './promise';
+import originalResolve from './promise/resolve';
+import originalThen from './then';
+import { makePromise, PROMISE_ID } from './-internal';
+
+export default Enumerator;
+function Enumerator(Constructor, input) {
+ this._instanceConstructor = Constructor;
+ this.promise = new Constructor(noop);
+
+ if (!this.promise[PROMISE_ID]) {
+ makePromise(this.promise);
+ }
+
+ if (isArray(input)) {
+ this._input = input;
+ this.length = input.length;
+ this._remaining = input.length;
+
+ this._result = new Array(this.length);
+
+ if (this.length === 0) {
+ fulfill(this.promise, this._result);
+ } else {
+ this.length = this.length || 0;
+ this._enumerate();
+ if (this._remaining === 0) {
+ fulfill(this.promise, this._result);
+ }
+ }
+ } else {
+ reject(this.promise, validationError());
+ }
+}
+
+function validationError() {
+ return new Error('Array Methods must be provided an Array');
+};
+
+Enumerator.prototype._enumerate = function() {
+ var length = this.length;
+ var input = this._input;
+
+ for (var i = 0; this._state === PENDING && i < length; i++) {
+ this._eachEntry(input[i], i);
+ }
+};
+
+Enumerator.prototype._eachEntry = function(entry, i) {
+ var c = this._instanceConstructor;
+ var resolve = c.resolve;
+
+ if (resolve === originalResolve) {
+ var then = getThen(entry);
+
+ if (then === originalThen &&
+ entry._state !== PENDING) {
+ this._settledAt(entry._state, i, entry._result);
+ } else if (typeof then !== 'function') {
+ this._remaining--;
+ this._result[i] = entry;
+ } else if (c === Promise) {
+ var promise = new c(noop);
+ handleMaybeThenable(promise, entry, then);
+ this._willSettleAt(promise, i);
+ } else {
+ this._willSettleAt(new c(function(resolve) { resolve(entry); }), i);
+ }
+ } else {
+ this._willSettleAt(resolve(entry), i);
+ }
+};
+
+Enumerator.prototype._settledAt = function(state, i, value) {
+ var promise = this.promise;
+
+ if (promise._state === PENDING) {
+ this._remaining--;
+
+ if (state === REJECTED) {
+ reject(promise, value);
+ } else {
+ this._result[i] = value;
+ }
+ }
+
+ if (this._remaining === 0) {
+ fulfill(promise, this._result);
+ }
+};
+
+Enumerator.prototype._willSettleAt = function(promise, i) {
+ var enumerator = this;
+
+ subscribe(promise, undefined, function(value) {
+ enumerator._settledAt(FULFILLED, i, value);
+ }, function(reason) {
+ enumerator._settledAt(REJECTED, i, reason);
+ });
+};
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/polyfill.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/polyfill.js
new file mode 100644
index 0000000..6696dfc
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/polyfill.js
@@ -0,0 +1,26 @@
+/*global self*/
+import Promise from './promise';
+
+export default function polyfill() {
+ var local;
+
+ if (typeof global !== 'undefined') {
+ local = global;
+ } else if (typeof self !== 'undefined') {
+ local = self;
+ } else {
+ try {
+ local = Function('return this')();
+ } catch (e) {
+ throw new Error('polyfill failed because global object is unavailable in this environment');
+ }
+ }
+
+ var P = local.Promise;
+
+ if (P && Object.prototype.toString.call(P.resolve()) === '[object Promise]' && !P.cast) {
+ return;
+ }
+
+ local.Promise = Promise;
+}
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise.js
new file mode 100644
index 0000000..d95951e
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise.js
@@ -0,0 +1,384 @@
+import {
+ isFunction
+} from './utils';
+
+import {
+ noop,
+ nextId,
+ PROMISE_ID,
+ initializePromise
+} from './-internal';
+
+import {
+ asap,
+ setAsap,
+ setScheduler
+} from './asap';
+
+import all from './promise/all';
+import race from './promise/race';
+import Resolve from './promise/resolve';
+import Reject from './promise/reject';
+import then from './then';
+
+
+function needsResolver() {
+ throw new TypeError('You must pass a resolver function as the first argument to the promise constructor');
+}
+
+function needsNew() {
+ throw new TypeError("Failed to construct 'Promise': Please use the 'new' operator, this object constructor cannot be called as a function.");
+}
+
+export default Promise;
+/**
+ Promise objects represent the eventual result of an asynchronous operation. The
+ primary way of interacting with a promise is through its `then` method, which
+ registers callbacks to receive either a promise's eventual value or the reason
+ why the promise cannot be fulfilled.
+
+ Terminology
+ -----------
+
+ - `promise` is an object or function with a `then` method whose behavior conforms to this specification.
+ - `thenable` is an object or function that defines a `then` method.
+ - `value` is any legal JavaScript value (including undefined, a thenable, or a promise).
+ - `exception` is a value that is thrown using the throw statement.
+ - `reason` is a value that indicates why a promise was rejected.
+ - `settled` the final resting state of a promise, fulfilled or rejected.
+
+ A promise can be in one of three states: pending, fulfilled, or rejected.
+
+ Promises that are fulfilled have a fulfillment value and are in the fulfilled
+ state. Promises that are rejected have a rejection reason and are in the
+ rejected state. A fulfillment value is never a thenable.
+
+ Promises can also be said to *resolve* a value. If this value is also a
+ promise, then the original promise's settled state will match the value's
+ settled state. So a promise that *resolves* a promise that rejects will
+ itself reject, and a promise that *resolves* a promise that fulfills will
+ itself fulfill.
+
+
+ Basic Usage:
+ ------------
+
+ ```js
+ var promise = new Promise(function(resolve, reject) {
+ // on success
+ resolve(value);
+
+ // on failure
+ reject(reason);
+ });
+
+ promise.then(function(value) {
+ // on fulfillment
+ }, function(reason) {
+ // on rejection
+ });
+ ```
+
+ Advanced Usage:
+ ---------------
+
+ Promises shine when abstracting away asynchronous interactions such as
+ `XMLHttpRequest`s.
+
+ ```js
+ function getJSON(url) {
+ return new Promise(function(resolve, reject){
+ var xhr = new XMLHttpRequest();
+
+ xhr.open('GET', url);
+ xhr.onreadystatechange = handler;
+ xhr.responseType = 'json';
+ xhr.setRequestHeader('Accept', 'application/json');
+ xhr.send();
+
+ function handler() {
+ if (this.readyState === this.DONE) {
+ if (this.status === 200) {
+ resolve(this.response);
+ } else {
+ reject(new Error('getJSON: `' + url + '` failed with status: [' + this.status + ']'));
+ }
+ }
+ };
+ });
+ }
+
+ getJSON('/posts.json').then(function(json) {
+ // on fulfillment
+ }, function(reason) {
+ // on rejection
+ });
+ ```
+
+ Unlike callbacks, promises are great composable primitives.
+
+ ```js
+ Promise.all([
+ getJSON('/posts'),
+ getJSON('/comments')
+ ]).then(function(values){
+ values[0] // => postsJSON
+ values[1] // => commentsJSON
+
+ return values;
+ });
+ ```
+
+ @class Promise
+ @param {function} resolver
+ Useful for tooling.
+ @constructor
+*/
+function Promise(resolver) {
+ this[PROMISE_ID] = nextId();
+ this._result = this._state = undefined;
+ this._subscribers = [];
+
+ if (noop !== resolver) {
+ typeof resolver !== 'function' && needsResolver();
+ this instanceof Promise ? initializePromise(this, resolver) : needsNew();
+ }
+}
+
+Promise.all = all;
+Promise.race = race;
+Promise.resolve = Resolve;
+Promise.reject = Reject;
+Promise._setScheduler = setScheduler;
+Promise._setAsap = setAsap;
+Promise._asap = asap;
+
+Promise.prototype = {
+ constructor: Promise,
+
+/**
+ The primary way of interacting with a promise is through its `then` method,
+ which registers callbacks to receive either a promise's eventual value or the
+ reason why the promise cannot be fulfilled.
+
+ ```js
+ findUser().then(function(user){
+ // user is available
+ }, function(reason){
+ // user is unavailable, and you are given the reason why
+ });
+ ```
+
+ Chaining
+ --------
+
+ The return value of `then` is itself a promise. This second, 'downstream'
+ promise is resolved with the return value of the first promise's fulfillment
+ or rejection handler, or rejected if the handler throws an exception.
+
+ ```js
+ findUser().then(function (user) {
+ return user.name;
+ }, function (reason) {
+ return 'default name';
+ }).then(function (userName) {
+ // If `findUser` fulfilled, `userName` will be the user's name, otherwise it
+ // will be `'default name'`
+ });
+
+ findUser().then(function (user) {
+ throw new Error('Found user, but still unhappy');
+ }, function (reason) {
+ throw new Error("`findUser` rejected and we're unhappy");
+ }).then(function (value) {
+ // never reached
+ }, function (reason) {
+ // if `findUser` fulfilled, `reason` will be 'Found user, but still unhappy'.
+ // If `findUser` rejected, `reason` will be '`findUser` rejected and we're unhappy'.
+ });
+ ```
+ If the downstream promise does not specify a rejection handler, rejection reasons will be propagated further downstream.
+
+ ```js
+ findUser().then(function (user) {
+ throw new PedagogicalException('Upstream error');
+ }).then(function (value) {
+ // never reached
+ }).then(function (value) {
+ // never reached
+ }, function (reason) {
+ // The `PedagogicalException` is propagated all the way down to here
+ });
+ ```
+
+ Assimilation
+ ------------
+
+ Sometimes the value you want to propagate to a downstream promise can only be
+ retrieved asynchronously. This can be achieved by returning a promise in the
+ fulfillment or rejection handler. The downstream promise will then be pending
+ until the returned promise is settled. This is called *assimilation*.
+
+ ```js
+ findUser().then(function (user) {
+ return findCommentsByAuthor(user);
+ }).then(function (comments) {
+ // The user's comments are now available
+ });
+ ```
+
+ If the assimilated promise rejects, then the downstream promise will also reject.
+
+ ```js
+ findUser().then(function (user) {
+ return findCommentsByAuthor(user);
+ }).then(function (comments) {
+ // If `findCommentsByAuthor` fulfills, we'll have the value here
+ }, function (reason) {
+ // If `findCommentsByAuthor` rejects, we'll have the reason here
+ });
+ ```
+
+ Simple Example
+ --------------
+
+ Synchronous Example
+
+ ```javascript
+ var result;
+
+ try {
+ result = findResult();
+ // success
+ } catch(reason) {
+ // failure
+ }
+ ```
+
+ Errback Example
+
+ ```js
+ findResult(function(result, err){
+ if (err) {
+ // failure
+ } else {
+ // success
+ }
+ });
+ ```
+
+ Promise Example
+
+ ```javascript
+ findResult().then(function(result){
+ // success
+ }, function(reason){
+ // failure
+ });
+ ```
+
+ Advanced Example
+ --------------
+
+ Synchronous Example
+
+ ```javascript
+ var author, books;
+
+ try {
+ author = findAuthor();
+ books = findBooksByAuthor(author);
+ // success
+ } catch(reason) {
+ // failure
+ }
+ ```
+
+ Errback Example
+
+ ```js
+
+ function foundBooks(books) {
+
+ }
+
+ function failure(reason) {
+
+ }
+
+ findAuthor(function(author, err){
+ if (err) {
+ failure(err);
+ // failure
+ } else {
+ try {
+ findBooksByAuthor(author, function(books, err) {
+ if (err) {
+ failure(err);
+ } else {
+ try {
+ foundBooks(books);
+ } catch(reason) {
+ failure(reason);
+ }
+ }
+ });
+ } catch(error) {
+ failure(error);
+ }
+ // success
+ }
+ });
+ ```
+
+ Promise Example
+
+ ```javascript
+ findAuthor().
+ then(findBooksByAuthor).
+ then(function(books){
+ // found books
+ }).catch(function(reason){
+ // something went wrong
+ });
+ ```
+
+ @method then
+ @param {Function} onFulfilled
+ @param {Function} onRejected
+ Useful for tooling.
+ @return {Promise}
+*/
+ then: then,
+
+/**
+ `catch` is simply sugar for `then(undefined, onRejection)` which makes it the same
+ as the catch block of a try/catch statement.
+
+ ```js
+ function findAuthor(){
+ throw new Error("couldn't find that author");
+ }
+
+ // synchronous
+ try {
+ findAuthor();
+ } catch(reason) {
+ // something went wrong
+ }
+
+ // async with promises
+ findAuthor().catch(function(reason){
+ // something went wrong
+ });
+ ```
+
+ @method catch
+ @param {Function} onRejection
+ Useful for tooling.
+ @return {Promise}
+*/
+ 'catch': function(onRejection) {
+ return this.then(null, onRejection);
+ }
+};
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise/all.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise/all.js
new file mode 100644
index 0000000..03033f0
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise/all.js
@@ -0,0 +1,52 @@
+import Enumerator from '../enumerator';
+
+/**
+ `Promise.all` accepts an array of promises, and returns a new promise which
+ is fulfilled with an array of fulfillment values for the passed promises, or
+ rejected with the reason of the first passed promise to be rejected. It casts all
+ elements of the passed iterable to promises as it runs this algorithm.
+
+ Example:
+
+ ```javascript
+ var promise1 = resolve(1);
+ var promise2 = resolve(2);
+ var promise3 = resolve(3);
+ var promises = [ promise1, promise2, promise3 ];
+
+ Promise.all(promises).then(function(array){
+ // The array here would be [ 1, 2, 3 ];
+ });
+ ```
+
+ If any of the `promises` given to `all` are rejected, the first promise
+ that is rejected will be given as an argument to the returned promise's
+ rejection handler. For example:
+
+ Example:
+
+ ```javascript
+ var promise1 = resolve(1);
+ var promise2 = reject(new Error("2"));
+ var promise3 = reject(new Error("3"));
+ var promises = [ promise1, promise2, promise3 ];
+
+ Promise.all(promises).then(function(array){
+ // Code here never runs because there are rejected promises!
+ }, function(error) {
+ // error.message === "2"
+ });
+ ```
+
+ @method all
+ @static
+ @param {Array} entries array of promises
+ @param {String} label optional string for labeling the promise.
+ Useful for tooling.
+ @return {Promise} promise that is fulfilled when all `promises` have been
+ fulfilled, or rejected if any of them become rejected.
+ @static
+*/
+export default function all(entries) {
+ return new Enumerator(this, entries).promise;
+}
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise/race.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise/race.js
new file mode 100644
index 0000000..8c922e3
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise/race.js
@@ -0,0 +1,86 @@
+import {
+ isArray
+} from "../utils";
+
+/**
+ `Promise.race` returns a new promise which is settled in the same way as the
+ first passed promise to settle.
+
+ Example:
+
+ ```javascript
+ var promise1 = new Promise(function(resolve, reject){
+ setTimeout(function(){
+ resolve('promise 1');
+ }, 200);
+ });
+
+ var promise2 = new Promise(function(resolve, reject){
+ setTimeout(function(){
+ resolve('promise 2');
+ }, 100);
+ });
+
+ Promise.race([promise1, promise2]).then(function(result){
+ // result === 'promise 2' because it was resolved before promise1
+ // was resolved.
+ });
+ ```
+
+ `Promise.race` is deterministic in that only the state of the first
+ settled promise matters. For example, even if other promises given in the
+ `promises` array argument eventually fulfill, if the first promise to
+ settle rejected before the others became fulfilled, the returned
+ promise will become rejected:
+
+ ```javascript
+ var promise1 = new Promise(function(resolve, reject){
+ setTimeout(function(){
+ resolve('promise 1');
+ }, 200);
+ });
+
+ var promise2 = new Promise(function(resolve, reject){
+ setTimeout(function(){
+ reject(new Error('promise 2'));
+ }, 100);
+ });
+
+ Promise.race([promise1, promise2]).then(function(result){
+ // Code here never runs
+ }, function(reason){
+ // reason.message === 'promise 2' because promise 2 became rejected before
+ // promise 1 became fulfilled
+ });
+ ```
+
+ An example real-world use case is implementing timeouts:
+
+ ```javascript
+ Promise.race([ajax('foo.json'), timeout(5000)])
+ ```
+
+ @method race
+ @static
+ @param {Array} promises array of promises to observe
+ Useful for tooling.
+ @return {Promise} a promise which settles in the same way as the first passed
+ promise to settle.
+*/
+export default function race(entries) {
+ /*jshint validthis:true */
+ var Constructor = this;
+
+ if (!isArray(entries)) {
+ return new Constructor(function(resolve, reject) {
+ reject(new TypeError('You must pass an array to race.'));
+ });
+ } else {
+ return new Constructor(function(resolve, reject) {
+ var length = entries.length;
+ for (var i = 0; i < length; i++) {
+ Constructor.resolve(entries[i]).then(resolve, reject);
+ }
+ });
+ }
+}
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise/reject.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise/reject.js
new file mode 100644
index 0000000..63b86cb
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise/reject.js
@@ -0,0 +1,46 @@
+import {
+ noop,
+ reject as _reject
+} from '../-internal';
+
+/**
+ `Promise.reject` returns a promise rejected with the passed `reason`.
+ It is shorthand for the following:
+
+ ```javascript
+ var promise = new Promise(function(resolve, reject){
+ reject(new Error('WHOOPS'));
+ });
+
+ promise.then(function(value){
+ // Code here doesn't run because the promise is rejected!
+ }, function(reason){
+ // reason.message === 'WHOOPS'
+ });
+ ```
+
+ Instead of writing the above, your code now simply becomes the following:
+
+ ```javascript
+ var promise = Promise.reject(new Error('WHOOPS'));
+
+ promise.then(function(value){
+ // Code here doesn't run because the promise is rejected!
+ }, function(reason){
+ // reason.message === 'WHOOPS'
+ });
+ ```
+
+ @method reject
+ @static
+ @param {Any} reason value that the returned promise will be rejected with.
+ Useful for tooling.
+ @return {Promise} a promise rejected with the given `reason`.
+*/
+export default function reject(reason) {
+ /*jshint validthis:true */
+ var Constructor = this;
+ var promise = new Constructor(noop);
+ _reject(promise, reason);
+ return promise;
+}
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise/resolve.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise/resolve.js
new file mode 100644
index 0000000..201a545
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/promise/resolve.js
@@ -0,0 +1,48 @@
+import {
+ noop,
+ resolve as _resolve
+} from '../-internal';
+
+/**
+ `Promise.resolve` returns a promise that will become resolved with the
+ passed `value`. It is shorthand for the following:
+
+ ```javascript
+ var promise = new Promise(function(resolve, reject){
+ resolve(1);
+ });
+
+ promise.then(function(value){
+ // value === 1
+ });
+ ```
+
+ Instead of writing the above, your code now simply becomes the following:
+
+ ```javascript
+ var promise = Promise.resolve(1);
+
+ promise.then(function(value){
+ // value === 1
+ });
+ ```
+
+ @method resolve
+ @static
+ @param {Any} value value that the returned promise will be resolved with
+ Useful for tooling.
+ @return {Promise} a promise that will become fulfilled with the given
+ `value`
+*/
+export default function resolve(object) {
+ /*jshint validthis:true */
+ var Constructor = this;
+
+ if (object && typeof object === 'object' && object.constructor === Constructor) {
+ return object;
+ }
+
+ var promise = new Constructor(noop);
+ _resolve(promise, object);
+ return promise;
+}
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/then.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/then.js
new file mode 100644
index 0000000..f97e946
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/then.js
@@ -0,0 +1,34 @@
+import {
+ invokeCallback,
+ subscribe,
+ FULFILLED,
+ REJECTED,
+ noop,
+ makePromise,
+ PROMISE_ID
+} from './-internal';
+
+import { asap } from './asap';
+
+export default function then(onFulfillment, onRejection) {
+ var parent = this;
+
+ var child = new this.constructor(noop);
+
+ if (child[PROMISE_ID] === undefined) {
+ makePromise(child);
+ }
+
+ var state = parent._state;
+
+ if (state) {
+ var callback = arguments[state - 1];
+ asap(function(){
+ invokeCallback(state, child, callback, parent._result);
+ });
+ } else {
+ subscribe(parent, child, onFulfillment, onRejection);
+ }
+
+ return child;
+}
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/utils.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/utils.js
new file mode 100644
index 0000000..31ec6f9
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/lib/es6-promise/utils.js
@@ -0,0 +1,22 @@
+export function objectOrFunction(x) {
+ return typeof x === 'function' || (typeof x === 'object' && x !== null);
+}
+
+export function isFunction(x) {
+ return typeof x === 'function';
+}
+
+export function isMaybeThenable(x) {
+ return typeof x === 'object' && x !== null;
+}
+
+var _isArray;
+if (!Array.isArray) {
+ _isArray = function (x) {
+ return Object.prototype.toString.call(x) === '[object Array]';
+ };
+} else {
+ _isArray = Array.isArray;
+}
+
+export var isArray = _isArray;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/package.json b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/package.json
new file mode 100644
index 0000000..e3ee168
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/es6-promise/package.json
@@ -0,0 +1,95 @@
+{
+ "name": "es6-promise",
+ "namespace": "es6-promise",
+ "version": "3.2.1",
+ "description": "A lightweight library that provides tools for organizing asynchronous code",
+ "main": "dist/es6-promise.js",
+ "directories": {
+ "lib": "lib"
+ },
+ "files": [
+ "dist",
+ "lib",
+ "!dist/test"
+ ],
+ "devDependencies": {
+ "babel-eslint": "^6.0.0",
+ "broccoli-es6-module-transpiler": "^0.5.0",
+ "broccoli-jshint": "^2.1.0",
+ "broccoli-merge-trees": "^1.1.1",
+ "broccoli-replace": "^0.12.0",
+ "broccoli-stew": "^1.2.0",
+ "broccoli-uglify-js": "^0.1.3",
+ "broccoli-watchify": "^0.2.0",
+ "ember-cli": "2.5.0",
+ "ember-publisher": "0.0.7",
+ "git-repo-version": "0.0.3",
+ "json3": "^3.3.2",
+ "mocha": "^1.20.1",
+ "promises-aplus-tests-phantom": "^2.1.0-revise",
+ "release-it": "0.0.10"
+ },
+ "scripts": {
+ "build": "ember build",
+ "build:production": "ember build --environment production",
+ "start": "ember s",
+ "test": "ember test",
+ "test:server": "ember test --server",
+ "test:node": "ember build && mocha ./dist/test/browserify",
+ "lint": "jshint lib",
+ "prepublish": "ember build --environment production",
+ "dry-run-release": "ember build --environment production && release-it --dry-run --non-interactive"
+ },
+ "repository": {
+ "type": "git",
+ "url": "git://github.com/jakearchibald/ES6-Promises.git"
+ },
+ "bugs": {
+ "url": "https://github.com/jakearchibald/ES6-Promises/issues"
+ },
+ "browser": {
+ "vertx": false
+ },
+ "keywords": [
+ "promises",
+ "futures"
+ ],
+ "author": {
+ "name": "Yehuda Katz, Tom Dale, Stefan Penner and contributors",
+ "url": "Conversion to ES6 API by Jake Archibald"
+ },
+ "license": "MIT",
+ "spm": {
+ "main": "dist/es6-promise.js"
+ },
+ "gitHead": "5df472ec56d5b9fbc76589383852008d46055c61",
+ "homepage": "https://github.com/jakearchibald/ES6-Promises#readme",
+ "_id": "es6-promise@3.2.1",
+ "_shasum": "ec56233868032909207170c39448e24449dd1fc4",
+ "_from": "es6-promise@3.2.1",
+ "_npmVersion": "3.8.8",
+ "_nodeVersion": "5.10.1",
+ "_npmUser": {
+ "name": "stefanpenner",
+ "email": "stefan.penner@gmail.com"
+ },
+ "dist": {
+ "shasum": "ec56233868032909207170c39448e24449dd1fc4",
+ "tarball": "https://registry.npmjs.org/es6-promise/-/es6-promise-3.2.1.tgz"
+ },
+ "maintainers": [
+ {
+ "name": "jaffathecake",
+ "email": "jaffathecake@gmail.com"
+ },
+ {
+ "name": "stefanpenner",
+ "email": "stefan.penner@gmail.com"
+ }
+ ],
+ "_npmOperationalInternal": {
+ "host": "packages-12-west.internal.npmjs.com",
+ "tmp": "tmp/es6-promise-3.2.1.tgz_1463027774105_0.6333294357173145"
+ },
+ "_resolved": "https://registry.npmjs.org/es6-promise/-/es6-promise-3.2.1.tgz"
+}
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/.coveralls.yml b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/.coveralls.yml
new file mode 100644
index 0000000..a0b4fb6
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/.coveralls.yml
@@ -0,0 +1 @@
+repo_token: 47iIZ0B3llo2Wc4dxWRltvgdImqcrVDTi
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/HISTORY.md b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/HISTORY.md
new file mode 100644
index 0000000..28bd789
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/HISTORY.md
@@ -0,0 +1,559 @@
+2.0.13 2016-10-21
+-----------------
+* Updated bson library to 0.5.6.
+ - Included cyclic dependency detection
+* Fire callback when topology was destroyed (Issue #147, https://github.com/vkarpov15).
+* Refactoring to support pipelining as in the 1.4.x branch while retaining the benefits of the growing/shrinking pool (Issue #146).
+* Fix typo in serverHeartbeatFailed event name (Issue #143, https://github.com/jakesjews).
+* NODE-798 Driver hangs on count command in replica set with one member (Issue #141, https://github.com/isayme).
+
+2.0.12 2016-09-15
+-----------------
+* Fixed debug logging message not printing server name.
+* Fixed application metadata being sent by the wrong ismaster.
+* NODE-812 Fixed mongos stall due to proxy monitoring ismaster failure causing reconnect.
+* NODE-818 Replicaset timeouts in initial connect sequence can cause "no primary found".
+* Updated bson library to 0.5.5.
+* Added DBPointer up conversion to DBRef.
+
+2.0.11 2016-08-29
+-----------------
+* NODE-803, Fixed issue in how the latency window is calculated for Mongos topology causing issues for single proxy connections.
+* Avoid timeout in attemptReconnect causing multiple attemptReconnect attempts to happen (Issue #134, https://github.com/dead-horse).
+* Ensure promoteBuffers is propagated in same fashion as promoteValues and promoteLongs
+
+2.0.10 2016-08-23
+-----------------
+* Added promoteValues flag (default to true) to allow user to specify they only want wrapped BSON values back instead of promotion to native types.
+* Do not close mongos proxy connection on failed ismaster check in ha process (Issue #130).
+
+2.0.9 2016-08-19
+----------------
+* Allow promoteLongs to be passed in through the Response.parse method, overriding the default set on the connection.
+* NODE-798 Driver hangs on count command in replica set with one member.
+* Allow passing in servername for TLS connections for SNI support.
+
+2.0.8 2016-08-16
+----------------
+* Allow execution of store operations independent of having both a primary and secondary available (Issue #123).
+* Fixed command execution issue for mongos to ensure buffering of commands when no mongos available.
+* Added hashed connection names and fullResult.
+* Updated bson library to 0.5.3.
+* Wrap callback in nextTick to ensure exceptions are thrown correctly.
+
+2.0.7 2016-07-28
+----------------
+* Allow primary to be returned when secondaryPreferred is passed (Issue #117, https://github.com/dhendo).
+* Added better warnings when passing in illegal seed list members to a Mongos topology.
+* Minor attemptReconnect bug that would cause multiple attemptReconnect to run in parallel.
+* Fix wrong opType passed to disconnectHandler.add (Issue #121, https://github.com/adrian-gierakowski)
+* Implemented domain backward comp support enabled via domainsEnabled options on Server/ReplSet/Mongos and MongoClient.connect.
+* Initial max staleness implementation for ReplSet and Mongos for 3.4 support.
+* Added handling of collation for 3.4 support.
+
+2.0.6 2016-07-19
+----------------
+* Destroy connection on socket timeout due to newer node versions not closing the socket.
+
+2.0.5 2016-07-15
+----------------
+* Minor fixes to handle faster MongoClient connectivity from the driver, allowing single server instances to detect if they are a proxy.
+* Added numberOfConsecutiveTimeouts to pool that will destroy the pool if the number of consecutive timeouts > reconnectTries.
+* Print warning if a seedlist server's host name does not match the one provided in its ismaster.me field for Replicaset members.
+* Fix issue where a Replicaset connection would not succeed if the replicaset was a single-primary server setup.
+
+2.0.4 2016-07-11
+-----------------
+* Updated bson to version 0.5.1.
+* Handle situation where user is providing seedlist names that do not match host list. The fix allows for a single full discovery connection sweep before erroring out.
+* NODE-747 Polyfill for Object.assign for 0.12.x or 0.10.x.
+* NODE-746 Improves replicaset errors for wrong setName.
+
+2.0.3 2016-07-08
+-----------------
+* Implemented Server Selection Specification test suite.
+* Added warning level to logger.
+* Added warning message when socketTimeout < haInterval for Replset/Mongos.
+
+2.0.2 2016-07-06
+-----------------
+* Mongos emits close event on no proxies available or when reconnect attempt fails.
+* Replset emits close event when no servers available or when attemptReconnect fails to reconnect.
+* Don't throw in auth methods but return error in callback.
+
+2.0.1 2016-07-05
+-----------------
+* Added missing logout method on mongos proxy topology.
+* Fixed logger error serialization issue.
+* Documentation fixes.
+
+2.0.0 2016-07-05
+-----------------
+* Moved all authentication and handling of growing/shrinking of pool connections into actual pool.
+* All authentication methods now handle both auth/reauthenticate and logout events.
+* Introduced logout method to get rid of onAll option for logout command.
+* Updated bson to 0.5.0 that includes Decimal128 support.
+
+1.3.21 2016-05-30
+-----------------
+* Pool gets stuck if a connection marked for immediateRelease times out (Issue #99, https://github.com/nbrachet).
+* Make authentication process retry up to authenticationRetries at authenticationRetryIntervalMS interval.
+* Made ismaster replicaset calls operate with connectTimeout or monitorSocketTimeout to lower impact of big socketTimeouts on monitoring performance.
+* Make sure connections marked as "immediateRelease" don't linger in the inUserConnections list. Otherwise, after that connection times out, getAll() incorrectly returns more connections than are effectively present, causing the pool to not get restarted by reconnectServer. (Issue #99, https://github.com/nbrachet).
+* Make cursor getMore or killCursor correctly trigger pool reconnect to single server if pool has not been destroyed.
+* Make ismaster monitoring for single server connection default to avoid user confusion due to change in behavior.
+
+1.3.20 2016-05-25
+-----------------
+* NODE-710 Allow setting driver loggerLevel and logger function from MongoClient options.
+* Minor fix for SSL errors on connection attempts, minor fix to reconnect handler for the server.
+* Don't write to socket before having registered the callback for commands; workaround for Windows issuing error events twice on node.js when the socket gets destroyed by a firewall.
+* Fix minor issue where connectingServers would not be removed correctly causing single server connections to not auto-reconnect.
+
+1.3.19 2016-05-17
+-----------------
+- Handle situation where a server connection in a replicaset sometimes fails to be destroyed properly, due to being in the middle of authentication when the destroy method is called on the replicaset, causing it to be orphaned and never collected.
+- Set keepAlive to false by default to work around bug in node.js for Windows XP and Windows 2003.
+- Ensure replicaset topology destroy is never called by SDAM.
+- Ensure all paths are correctly returned on inspectServer in replset.
+
+1.3.18 2016-04-27
+-----------------
+- Hardened cursor connection handling for getMore and killCursor to ensure mid operation connection kill does not throw null exception.
+- Fixes for Node 6.0 support.
+
+1.3.17 2016-04-26
+-----------------
+- Added improved handling of reconnect when topology is a single server.
+- Added better handling of $query queries passed down for 3.2 or higher.
+- Introduced getServerFrom method to topologies to let cursor grab a new pool for getMore and killCursors commands and not use connection pipelining.
+- NODE-693 Move authentication to be after ismaster call to avoid authenticating against arbiters.
+
+1.3.16 2016-04-07
+-----------------
+- Only call unref on destroy if it exists to ensure proper working destroy method on early node v0.10.x versions.
+
+1.3.15 2016-04-06
+-----------------
+- NODE-687 Fixed issue where a server object failed to be destroyed if the replicaset state did not update successfully. This could leave active connections accumulating over time.
+- Fixed some situations where all connections are flushed due to a single connection in the connection pool closing.
+
+1.3.14 2016-04-01
+-----------------
+- Ensure server inquireServerState exits immediately on server.destroy call.
+- Refactored readPreference handling in 2.4, 2.6 and 3.2 wire protocol handling.
+
+1.3.13 2016-03-30
+-----------------
+- Handle missing cursor on getMore when going through a mongos proxy by pinning to socket connection and not server.
+
+1.3.12 2016-03-29
+-----------------
+- Mongos pickProxies falls back to the closest mongos if no proxies meet the latency window specified.
+
+1.3.11 2016-03-29
+-----------------
+- isConnected method for mongos uses same selection code as getServer.
+- Exceptions in cursor getServer trapped and correctly delegated to high level handler.
+
+1.3.10 2016-03-22
+-----------------
+- SDAM Monitoring emits diff for Replicasets to simplify detecting the state changes.
+- SDAM Monitoring correctly emits Mongos as serverDescriptionEvent.
+
+1.3.9 2016-03-20
+----------------
+- Removed monitoring exclusive connection; should resolve timeouts and reconnects on idle replicasets where haInterval > socketTimeout.
+
+1.3.8 2016-03-18
+----------------
+- Implements the SDAM monitoring specification.
+- Fix issue where cursor would error out and not be buffered when primary is not connected.
+
+1.3.7 2016-03-16
+----------------
+- Fixed issue with replicasetInquirer where it could stop performing monitoring if there were no servers available.
+
+1.3.6 2016-03-15
+----------------
+- Fixed race condition where multiple replicasetInquirer operations could be started in parallel, creating redundant connections.
+
+1.3.5 2016-03-14
+----------------
+- Handle rogue SSL exceptions (Issue #85, https://github.com/durran).
+
+1.3.4 2016-03-14
+----------------
+- Added unref options on server, replicaset and mongos (Issue #81, https://github.com/allevo)
+- cursorNotFound flag always false (Issue #83, https://github.com/xgfd)
+- refactor of events emission of fullsetup and all events (Issue #84, https://github.com/xizhibei)
+
+1.3.3 2016-03-08
+----------------
+- Added support for promoteLongs option for command function.
+- Return connection if no callback available
+- Emit connect event when server reconnects after initial connection failed (Issue #76, https://github.com/vkarpov15)
+- Introduced optional monitoringSocketTimeout option to allow better control of SDAM monitoring timeouts.
+- Made monitoringSocketTimeout default to 30000 if no connectionTimeout value specified or if set to 0.
+- Fixed issue where tailable cursor would not retry even though cursor was still alive.
+- Disabled exhaust flag support to avoid issues where users could easily write code that would cause memory to run out.
+- Handle the case where the first command result document returns an empty list of documents but a live cursor.
+- Allow passing down of CANONICALIZE_HOST_NAME and SERVICE_REALM options for kerberos.
+
+1.3.2 2016-02-09
+----------------
+- Harden MessageHandler in server.js to avoid issues where we cannot find a callback for an operation.
+- Ensure RequestId can never be larger than Max Number integer size.
+
+1.3.1 2016-02-05
+----------------
+- Removed annoying missing Kerberos error (NODE-654).
+
+1.3.0 2016-02-03
+----------------
+- Added raw support for the command function on topologies.
+- Fixed issue where raw results that fell on batchSize boundaries failed (Issue #72)
+- Copy over all the properties to the callback returned from bindToDomain, (Issue #72)
+- Added connection hash id to be able to reference connection host/name without leaking it outside of driver.
+- NODE-638, Cannot authenticate database user with utf-8 password.
+- Refactored pool to be worker queue based, minimizing the impact a slow query has on throughput as long as # slow queries < # connections in the pool.
+- Pool now grows and shrinks correctly depending on demand not causing a full pool reconnect.
+- Improvements in monitoring of a Replicaset where in certain situations the inquiry process could get exited.
+- Switched to using Array.push instead of concat for use cases of a lot of documents.
+- Fixed issue where re-authentication could lose the credentials if the whole Replicaset disconnected at once.
+- Added peer optional dependencies support using require_optional module.
+
+1.2.32 2016-01-12
+-----------------
+- Bumped bson to V0.4.21 to allow using minor optimizations.
+
+1.2.31 2016-01-04
+-----------------
+- Allow connection to secondary if primaryPreferred or secondaryPreferred (Issue #70, https://github.com/leichter)
+
+1.2.30 2015-12-23
+-----------------
+- Pool allocates size + 1 connections when using replicasets, reserving additional pool connection for monitoring exclusively.
+- Fixes bug when all replicaset members are down, that would cause it to fail to reconnect using the originally provided seedlist.
+
+1.2.29 2015-12-17
+-----------------
+- Correctly emit close event when calling destroy on server topology.
+
+1.2.28 2015-12-13
+-----------------
+- Backed out Prevent Maximum call stack exceeded by calling all callbacks on nextTick, (Issue #64, https://github.com/iamruinous) as it breaks node 0.10.x support.
+
+1.2.27 2015-12-13
+-----------------
+- Added [options.checkServerIdentity=true] {boolean|function}. Ensure we check server identify during SSL, set to false to disable checking. Only works for Node 0.12.x or higher. You can pass in a boolean or your own checkServerIdentity override function, (Issue #29).
+- Prevent Maximum call stack exceeded by calling all callbacks on nextTick, (Issue #64, https://github.com/iamruinous).
+- State is not defined in mongos, (Issue #63, https://github.com/flyingfisher).
+- Fixed corner case issue on exhaust cursors on pre 3.0.x MongoDB.
+
+1.2.26 2015-11-23
+-----------------
+- Converted test suite to use mongodb-topology-manager.
+- Upgraded bson library to V0.4.20.
+- Minor fixes for 3.2 readPreferences.
+
+1.2.25 2015-11-23
+-----------------
+- Correctly error out when passed a seedlist of non-valid server members.
+
+1.2.24 2015-11-20
+-----------------
+- Fix Automattic/mongoose#3481; flush callbacks on error, (Issue #57, https://github.com/vkarpov15).
+- $explain query for wire protocol 2.6 and 2.4 does not set number of returned documents to -1 but to 0.
+
+1.2.23 2015-11-16
+-----------------
+- ismaster runs against admin.$cmd instead of system.$cmd.
+
+1.2.22 2015-11-16
+-----------------
+- Fixes to handle getMore command errors for MongoDB 3.2
+- Allows the process to properly close upon a Db.close() call on the replica set by shutting down the haTimer and closing arbiter connections.
+
+1.2.21 2015-11-07
+-----------------
+- Hardened the replicaset equality checks.
+- OpReplay flag correctly set on Wire protocol query.
+- Mongos load balancing added, introduced localThresholdMS to control the feature.
+- Kerberos is now a peerDependency, so it is not installed by default on Node 5.0 or higher.
+
+1.2.20 2015-10-28
+-----------------
+- Fixed bug in arbiter connection capping code.
+- NODE-599 correctly handle arrays of server tags in order of priority.
+- Fix for 2.6 wire protocol handler related to readPreference handling.
+- Added maxAwaitTimeMS support for 3.2 getMore to allow for custom timeouts on tailable cursors.
+- Make CoreCursor check for $err before saying that 'next' succeeded (Issue #53, https://github.com/vkarpov15).
+
+1.2.19 2015-10-15
+-----------------
+- Make batchSize always be > 0 for 3.2 wire protocol to make it work consistently with pre 3.2 servers.
+- Locked to bson 0.4.19.
+
+1.2.18 2015-10-15
+-----------------
+- Minor 3.2 fix for handling readPreferences on sharded commands.
+- Minor fixes to correctly pass APM specification test suite.
+
+1.2.17 2015-10-08
+-----------------
+- Connections to arbiters only maintain a single connection.
+
+1.2.15 2015-10-06
+-----------------
+- Set slaveOk to true for getMore and killCursors commands.
+- Don't swallow callback errors for 2.4 single server (Issue #49, https://github.com/vkarpov15).
+- Apply toString('hex') to each buffer in an array when logging (Issue #48, https://github.com/nbrachet).
+
+1.2.14 2015-09-28
+-----------------
+- NODE-547 only emit error if there are any listeners.
+- Fixed APM issue with issuing readConcern.
+
+1.2.13 2015-09-18
+-----------------
+- Added BSON serializer ignoreUndefined option for insert/update/remove/command/cursor.
+
+1.2.12 2015-09-08
+-----------------
+- NODE-541 Added initial support for readConcern.
+
+1.2.11 2015-08-31
+-----------------
+- NODE-535 If connectWithNoPrimary is true then primary-only connection is not allowed.
+- NODE-534 Passive secondaries are not allowed for secondaryOnlyConnectionAllowed.
+- Fixed filtering bug for logging (Issue 30, https://github.com/christkv/mongodb-core/issues/30).
+
+1.2.10 2015-08-14
+-----------------
+- Added missing Mongos.prototype.parserType function.
+
+1.2.9 2015-08-05
+----------------
+- NODE-525 Reset connectionTimeout after it's overwritten by tls.connect.
+- NODE-518 connectTimeoutMS is doubled in 2.0.39.
+
+1.2.8 2015-07-24
+-----------------
+- Minor fix to handle 2.4.x errors better by correctly returning driver layer issues.
+
+1.2.7 2015-07-16
+-----------------
+- Refactoring to allow to tap into find/getmore/killcursor in cursors for APM monitoring in driver.
+
+1.2.6 2015-07-14
+-----------------
+- NODE-505 Query fails to find records that have a 'result' property with an array value.
+
+1.2.5 2015-07-14
+-----------------
+- NODE-492 correctly handle hanging replicaset monitoring connections when a server is unavailable due to network partitions or firewalls dropping packets; configurable using the connectionTimeoutMS setting.
+
+1.2.4 2015-07-07
+-----------------
+- NODE-493 staggering the socket connections to avoid overwhelming the mongod process.
+
+1.2.3 2015-06-26
+-----------------
+- Minor bug fixes.
+
+1.2.2 2015-06-22
+-----------------
+- Fix issue with SCRAM authentication causing authentication to return true on failed authentication (Issue 26, https://github.com/cglass17).
+
+1.2.1 2015-06-17
+-----------------
+- Ensure serializeFunctions passed down correctly to wire protocol.
+
+1.2.0 2015-06-17
+-----------------
+- Switching to using the 0.4.x pure JS serializer, removing dependency on C++ parser.
+- Refactoring wire protocol messages to avoid expensive size calculations of documents in favor of writing out an array of buffers to the sockets.
+- NODE-486 fixed issue related to limit and skip when calling toArray in 2.0 driver.
+- NODE-483 throw error if capabilities of a topology are queried before the topology has performed connection setup.
+- NODE-487 fixed issue where killcursor command was not being sent correctly on limit and skip queries.
+
+1.1.33 2015-05-31
+-----------------
+- NODE-478 Work around authentication race condition in mongos authentication due to multi step authentication methods like SCRAM.
+
+1.1.32 2015-05-20
+-----------------
+- After reconnect, it updates the allowable reconnect retries to the option settings (Issue #23, https://github.com/owenallenaz)
+
+1.1.31 2015-05-19
+-----------------
+- Minor fixes for issues with re-authentication of mongos.
+
+1.1.30 2015-05-18
+-----------------
+- Correctly emit 'all' event when primary + all secondaries have connected.
+
+1.1.29 2015-05-17
+-----------------
+- NODE-464 Only use a single socket against arbiters and hidden servers.
+- Ensure we filter out hidden servers from any server queries.
+
+1.1.28 2015-05-12
+-----------------
+- Fixed buffer compare for electionId for < node 12.0.2
+
+1.1.27 2015-05-12
+-----------------
+- NODE-455 Update SDAM specification support to cover electionId and Mongos load balancing.
+
+1.1.26 2015-05-06
+-----------------
+- NODE-456 Allow mongodb-core to pipeline commands (ex findAndModify+GLE) along the same connection and handle the returned results.
+- Fixes to make mongodb-core work for node 0.8.x when using scram and setImmediate.
+
+1.1.25 2015-04-24
+-----------------
+- Handle lack of callback in crud operations when returning error on application closed.
+
+1.1.24 2015-04-22
+-----------------
+- Error out when topology has been destroyed either by connection retries being exhausted or destroy called on topology.
+
+1.1.23 2015-04-15
+-----------------
+- Standardizing mongoErrors and its API (Issue #14)
+- Creating a new connection is slow because of 100ms setTimeout() (Issue #17, https://github.com/vkarpov15)
+- remove mkdirp and rimraf dependencies (Issue #12)
+- Updated default value of param options.rejectUnauthorized to match documentation (Issue #16)
+- ISSUE: NODE-417 Resolution. Improving behavior of thrown errors (Issue #14, https://github.com/owenallenaz)
+- Fix cursor hanging when next() called on exhausted cursor (Issue #18, https://github.com/vkarpov15)
+
+1.1.22 2015-04-10
+-----------------
+- Minor refactorings in cursor code to make extending the cursor simpler.
+- NODE-417 Resolution. Improving behavior of thrown errors using Error.captureStackTrace.
+
+1.1.21 2015-03-26
+-----------------
+- Updated bson module to 0.3.0 that extracted the c++ parser into bson-ext and made it an optional dependency.
+
+1.1.20 2015-03-24
+-----------------
+- NODE-395 Socket Not Closing: db.close called before the full set finished initializing, leading to in-progress server connections not being closed properly.
+
+1.1.19 2015-03-21
+-----------------
+- Made kerberos module ~0.0 to allow for quicker releases due to io.js support in the kerberos module.
+
+1.1.18 2015-03-17
+-----------------
+- Added support for minHeartbeatFrequencyMS on server reconnect according to the SDAM specification.
+
+1.1.17 2015-03-16
+-----------------
+- NODE-377, fixed issue where tags would not be correctly checked on secondary and nearest reads to filter out eligible server candidates.
+
+1.1.16 2015-03-06
+-----------------
+- rejectUnauthorized parameter is set to true for ssl certificates by default instead of false.
+
+1.1.15 2015-03-04
+-----------------
+- Removed check for type in replset pickserver function.
+
+1.1.14 2015-02-26
+-----------------
+- NODE-374 correctly adding passive secondaries to the list of eligible servers for reads
+
+1.1.13 2015-02-24
+-----------------
+- NODE-365 mongoDB native node.js driver infinite reconnect attempts (fixed issue around handling of retry attempts)
+
+1.1.12 2015-02-16
+-----------------
+- Fixed cursor transforms for buffered document reads from cursor.
+
+1.1.11 2015-02-02
+-----------------
+- Removed the required setName for replicaset connections; if not set, the first setName returned is used.
+
+1.1.10 2015-01-31
+-----------------
+- Added transforms.doc option to cursor to allow for per-document transformations.
+
+1.1.9 2015-01-21
+----------------
+- Updated BSON dependency to 0.2.18 to fix issues with io.js and node.
+- Updated Kerberos dependency to 0.0.8 to fix issues with io.js and node.
+- Don't treat findOne() as a command cursor.
+- Refactored state changes out into methods to simplify reading the next method.
+
+1.1.8 2015-09-12
+----------------
+- Stripped out Object.defineProperty for performance reasons
+- Applied more performance optimizations.
+- Properties cursorBatchSize, cursorSkip and cursorLimit are now methods: setCursorBatchSize/cursorBatchSize, setCursorSkip/cursorSkip, setCursorLimit/cursorLimit
+
+1.1.7 2014-12-18
+----------------
+- Use ns variable for getMore commands for command cursors to work properly with cursor version of listCollections and listIndexes.
+
+1.1.6 2014-12-18
+----------------
+- Server manager fixed to support 2.2.X servers for travis test matrix.
+
+1.1.5 2014-12-17
+----------------
+- Fall back to errmsg when creating MongoError for command errors
+
+1.1.4 2014-12-17
+----------------
+- Added transform method support for cursor (initially just for initial query results) to support listCollections/listIndexes in 2.8.
+- Fixed variable leak in scram.
+- Fixed server manager to deal better with killing processes.
+- Bumped bson to 0.2.16.
+
+1.1.3 2014-12-01
+----------------
+- Fixed error handling issue with nonce generation in mongocr.
+- Fixed issues with restarting servers when using ssl.
+- Using strict for all classes.
+- Cleaned up any leaking global variables.
+
+1.1.2 2014-11-20
+----------------
+- Correctly encoding UTF8 collection names on wire protocol messages.
+- Added emitClose parameter to topology destroy methods to allow users to specify that they wish the topology to emit the close event to any listeners.
+
+1.1.1 2014-11-14
+----------------
+- Refactored code to use prototype instead of privileged methods.
+- Fixed issue with auth where a runtime condition could leave replicaset members without proper authentication.
+- Several deopt optimizations for v8 to improve performance and reduce GC pauses.
+
+1.0.5 2014-10-29
+----------------
+- Fixed issue with wrong namespace being created for command cursors.
+
+1.0.4 2014-10-24
+----------------
+- Switched away from using shift on the cursor buffer due to severe slowdown on big batchSizes, as shift causes the entire array to be copied on each call.
+
+1.0.3 2014-10-21
+----------------
+- Fixed error-issuing problem on cursor.next when iterating over a huge dataset with a very small batchSize.
+
+1.0.2 2014-07-10
+----------------
+- Fullsetup is now defined as a primary and secondary being available, allowing all read preferences to be satisfied.
+- Fixed issue with replset_state logging.
+
+1.0.1 2014-07-10
+----------------
+- Dependency issue solved
+
+1.0.0 2014-07-10
+----------------
+- Initial release of mongodb-core
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/LICENSE b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/LICENSE
new file mode 100644
index 0000000..ad410e1
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/LICENSE
@@ -0,0 +1,201 @@
+Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "{}"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright {yyyy} {name of copyright owner}
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
\ No newline at end of file
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/Makefile b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/Makefile
new file mode 100644
index 0000000..36e1202
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/Makefile
@@ -0,0 +1,11 @@
+NODE = node
+NPM = npm
+JSDOC = jsdoc
+name = all
+
+generate_docs:
+ # cp -R ./HISTORY.md ./docs/content/meta/release-notes.md
+ hugo -s docs/reference -d ../../public
+ $(JSDOC) -c conf.json -t docs/jsdoc-template/ -d ./public/api
+ cp -R ./public/api/scripts ./public/.
+ cp -R ./public/api/styles ./public/.
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/README.md b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/README.md
new file mode 100644
index 0000000..433dd88
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/README.md
@@ -0,0 +1,228 @@
+[![Build Status](https://secure.travis-ci.org/christkv/mongodb-core.png)](http://travis-ci.org/christkv/mongodb-core)
+[![Coverage Status](https://coveralls.io/repos/github/christkv/mongodb-core/badge.svg?branch=1.3)](https://coveralls.io/github/christkv/mongodb-core?branch=1.3)
+
+# Description
+
+The MongoDB Core driver is the low-level part of the 2.0 or higher MongoDB driver and is meant for library developers, not end users. It does not contain any abstractions or helpers outside of the basic management of MongoDB topology connections, CRUD operations and authentication.
+
+## MongoDB Node.JS Core Driver
+
+| what | where |
+|---------------|------------------------------------------------|
+| documentation | http://mongodb.github.io/node-mongodb-native/ |
+| apidoc | http://mongodb.github.io/node-mongodb-native/ |
+| source | https://github.com/christkv/mongodb-core |
+| mongodb | http://www.mongodb.org/ |
+
+### Blogs of Engineers involved in the driver
+- Christian Kvalheim [@christkv](https://twitter.com/christkv)
+
+### Bugs / Feature Requests
+
+Think you’ve found a bug? Want to see a new feature in node-mongodb-native? Please open a
+case in our issue management tool, JIRA:
+
+- Create an account and log in.
+- Navigate to the NODE project.
+- Click **Create Issue** and provide as much information as possible about the issue type and how to reproduce it.
+
+Bug reports in JIRA for all driver projects (i.e. NODE, PYTHON, CSHARP, JAVA) and the
+Core Server (i.e. SERVER) project are **public**.
+
+### Questions and Bug Reports
+
+ * mailing list: https://groups.google.com/forum/#!forum/node-mongodb-native
+ * jira: http://jira.mongodb.org/
+
+### Change Log
+
+http://jira.mongodb.org/browse/NODE
+
+# QuickStart
+
+The quick start guide will show you how to set up a simple application using the Core driver and MongoDB. Its scope is limited to setting up the driver and performing simple CRUD operations. For more in-depth coverage we encourage you to read the tutorials.
+
+## Create the package.json file
+
+Let's create a directory where our application will live. In our case we will put this under our projects directory.
+
+```
+mkdir myproject
+cd myproject
+```
+
+Create a **package.json** using your favorite text editor and fill it in.
+
+```json
+{
+ "name": "myproject",
+ "version": "1.0.0",
+ "description": "My first project",
+ "main": "index.js",
+ "repository": {
+ "type": "git",
+ "url": "git://github.com/christkv/myfirstproject.git"
+ },
+ "dependencies": {
+ "mongodb-core": "~1.0"
+ },
+ "author": "Christian Kvalheim",
+ "license": "Apache 2.0",
+ "bugs": {
+ "url": "https://github.com/christkv/myfirstproject/issues"
+ },
+ "homepage": "https://github.com/christkv/myfirstproject"
+}
+```
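
The `~1.0` range in `dependencies` above uses npm's tilde semantics: it accepts any release from 1.0.0 up to, but not including, 1.1.0. A rough sketch of that rule (a simplified illustration only, not npm's actual semver implementation):

```javascript
// Simplified illustration of the "~1.0" version range used above:
// npm's tilde semantics accept >=1.0.0 and <1.1.0.
// This is NOT npm's real semver matcher, just a sketch of the rule.
function matchesTilde10(version) {
  var parts = version.split('.').map(Number);
  // Major must be 1 and minor must be 0; the patch level is free.
  return parts[0] === 1 && parts[1] === 0;
}

console.log(matchesTilde10('1.0.4')); // true
console.log(matchesTilde10('1.1.0')); // false
```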
+
+Save the file and return to the shell or command prompt and use **NPM** to install all the dependencies.
+
+```
+npm install
+```
+
+You should see **NPM** download a lot of files. Once it's done you'll find all the downloaded packages under the **node_modules** directory.
+
+## Booting up a MongoDB Server
+
+Let's boot up a MongoDB server instance. Download the right MongoDB version from [MongoDB](http://www.mongodb.org), open a new shell or command line and ensure the **mongod** command is in the shell or command line path. Now let's create a database directory (in our case under **/data**).
+
+```
+mongod --dbpath=/data --port 27017
+```
+
+You should see the **mongod** process start up and print some status information.
+
+## Connecting to MongoDB
+
+Let's create a new **app.js** file that we will use to show the basic CRUD operations using the MongoDB driver.
+
+First let's add code to connect to the server. Notice that there is no concept of a database here and we use the topology directly to perform the connection.
+
+```js
+var Server = require('mongodb-core').Server
+ , assert = require('assert');
+
+// Set up server connection
+var server = new Server({
+ host: 'localhost'
+ , port: 27017
+ , reconnect: true
+ , reconnectInterval: 50
+});
+
+// Add event listeners
+server.on('connect', function(_server) {
+ console.log('connected');
+});
+
+server.on('close', function() {
+ console.log('closed');
+});
+
+server.on('reconnect', function() {
+ console.log('reconnect');
+});
+
+// Start connection
+server.connect();
+```
+
+To connect to a replica set we would use the `ReplSet` class, and for a set of Mongos proxies the `Mongos` class. Each topology class offers the same CRUD operations, and you operate on the topology directly. Let's look at an example exercising all the available CRUD operations.
+
+```js
+var Server = require('mongodb-core').Server
+ , assert = require('assert');
+
+// Set up server connection
+var server = new Server({
+ host: 'localhost'
+ , port: 27017
+ , reconnect: true
+ , reconnectInterval: 50
+});
+
+// Add event listeners
+server.on('connect', function(_server) {
+ console.log('connected');
+
+ // Execute the ismaster command
+ _server.command('system.$cmd', {ismaster: true}, function(err, result) {
+
+ // Perform a document insert
+ _server.insert('myproject.inserts1', [{a:1}, {a:2}], {
+ writeConcern: {w:1}, ordered:true
+ }, function(err, results) {
+ assert.equal(null, err);
+ assert.equal(2, results.result.n);
+
+ // Perform a document update
+ _server.update('myproject.inserts1', [{
+ q: {a: 1}, u: {'$set': {b:1}}
+ }], {
+ writeConcern: {w:1}, ordered:true
+ }, function(err, results) {
+ assert.equal(null, err);
+ assert.equal(1, results.result.n);
+
+ // Remove a document
+ _server.remove('myproject.inserts1', [{
+ q: {a: 1}, limit: 1
+ }], {
+ writeConcern: {w:1}, ordered:true
+ }, function(err, results) {
+ assert.equal(null, err);
+ assert.equal(1, results.result.n);
+
+ // Get a document
+        var cursor = _server.cursor('myproject.inserts1', {
+            find: 'myproject.inserts1'
+          , query: {a:2}
+ });
+
+ // Get the first document
+ cursor.next(function(err, doc) {
+ assert.equal(null, err);
+ assert.equal(2, doc.a);
+
+ // Execute the ismaster command
+ _server.command("system.$cmd"
+ , {ismaster: true}, function(err, result) {
+ assert.equal(null, err)
+ _server.destroy();
+ });
+ });
+ });
+ });
+
+    });
+  });
+});
+
+server.on('close', function() {
+ console.log('closed');
+});
+
+server.on('reconnect', function() {
+ console.log('reconnect');
+});
+
+// Start connection
+server.connect();
+```
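
As noted above, replica sets use the `ReplSet` class. Below is a minimal connection sketch; the seed list, ports and set name (`rs`) are placeholder values for a hypothetical local deployment, and the `require` is guarded so the snippet degrades gracefully when `mongodb-core` is not installed.

```javascript
// Placeholder seed list for a hypothetical local three-member replica set.
var seedlist = [
    {host: 'localhost', port: 31000}
  , {host: 'localhost', port: 31001}
  , {host: 'localhost', port: 31002}
];

var core = null;
try {
  core = require('mongodb-core');
} catch (e) {
  // mongodb-core not installed; skip the connection part of the sketch.
}

if (core) {
  // setName must match the replica set's configured name.
  var replset = new core.ReplSet(seedlist, {setName: 'rs'});

  replset.on('connect', function(_server) {
    // The topology exposes the same command/insert/update/remove/cursor API.
    _server.command('system.$cmd', {ismaster: true}, function(err, result) {
      _server.destroy();
    });
  });

  replset.on('error', function(err) {
    // Connection failed (for example, no replica set running locally).
  });

  replset.connect();
}
```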
+
+The core driver does not contain any helpers or abstractions, only the core CRUD operations. These consist of the following commands:
+
+* `insert`: takes an array of one or more documents to insert against the topology, and lets you specify a write concern and whether to execute the inserts in order or out of order.
+* `update`: takes an array of one or more update commands to execute against the topology, with the same write concern and ordering options.
+* `remove`: takes an array of one or more remove commands to execute against the topology, with the same write concern and ordering options.
+* `cursor`: returns a cursor for either the 'virtual' `find` command, a command that returns a cursor id, or a plain cursor id. Read the cursor tutorial for more in-depth coverage.
+* `command`: executes a command against MongoDB and returns the result.
+* `auth`: authenticates the current topology using a supported authentication scheme.
+
+The core driver is a building block for library builders and is not meant for end users, as it lacks many features an end user might need, such as automatic buffering of operations while a replica set primary is changing, or the db and collection abstractions.
+
+## Next steps
+
+The next step is to get more in-depth information about how the different aspects of the core driver work and how to leverage them to extend the functionality of the cursors. Please view the tutorials for more detailed information.
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/TESTING.md b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/TESTING.md
new file mode 100644
index 0000000..fe03ea0
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/TESTING.md
@@ -0,0 +1,18 @@
+Testing setup
+=============
+
+Single Server
+-------------
+mongod --dbpath=./db
+
+Replicaset
+----------
+mongo --nodb
+var x = new ReplSetTest({"useHostName":"false", "nodes" : {node0 : {}, node1 : {}, node2 : {}}})
+x.startSet();
+var config = x.getReplSetConfig()
+x.initiate(config);
+
+Mongos
+------
+var s = new ShardingTest( "auth1", 1 , 0 , 2 , {rs: true, noChunkSize : true});
\ No newline at end of file
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/conf.json b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/conf.json
new file mode 100644
index 0000000..12ce4c7
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/conf.json
@@ -0,0 +1,59 @@
+{
+ "plugins": ["plugins/markdown", "docs/lib/jsdoc/examples_plugin.js"],
+ "source": {
+ "include": [
+ "test/tests/functional/operation_example_tests.js",
+ "lib/connection/command_result.js",
+ "lib/topologies/mongos.js",
+ "lib/topologies/read_preference.js",
+ "lib/topologies/replset.js",
+ "lib/topologies/server.js",
+ "lib/topologies/replset_state.js",
+ "lib/connection/logger.js",
+ "lib/connection/connection.js",
+ "lib/cursor.js",
+ "lib/error.js",
+ "node_modules/bson/lib/bson/binary.js",
+ "node_modules/bson/lib/bson/code.js",
+ "node_modules/bson/lib/bson/db_ref.js",
+ "node_modules/bson/lib/bson/double.js",
+ "node_modules/bson/lib/bson/long.js",
+ "node_modules/bson/lib/bson/objectid.js",
+ "node_modules/bson/lib/bson/symbol.js",
+ "node_modules/bson/lib/bson/timestamp.js",
+ "node_modules/bson/lib/bson/max_key.js",
+ "node_modules/bson/lib/bson/min_key.js"
+ ]
+ },
+ "templates": {
+ "cleverLinks": true,
+ "monospaceLinks": true,
+ "default": {
+ "outputSourceFiles" : true
+ },
+ "applicationName": "Node.js MongoDB Driver API",
+ "disqus": true,
+ "googleAnalytics": "UA-29229787-1",
+ "openGraph": {
+ "title": "",
+ "type": "website",
+ "image": "",
+ "site_name": "",
+ "url": ""
+ },
+ "meta": {
+ "title": "",
+ "description": "",
+ "keyword": ""
+ },
+ "linenums": true
+ },
+ "markdown": {
+ "parser": "gfm",
+ "hardwrap": true,
+ "tags": ["examples"]
+ },
+ "examples": {
+ "indent": 4
+ }
+}
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/connect_test.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/connect_test.js
new file mode 100644
index 0000000..47ee71e
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/connect_test.js
@@ -0,0 +1,72 @@
+var Server = require('./lib/topologies/server');
+
+// Attempt to connect
+var server = new Server({
+ host: 'localhost', port: 27017, socketTimeout: 500
+});
+
+// function executeCursors(_server, cb) {
+// var count = 100;
+//
+// for(var i = 0; i < 100; i++) {
+// // Execute the write
+// var cursor = _server.cursor('test.test', {
+// find: 'test.test'
+// , query: {a:1}
+// }, {readPreference: new ReadPreference('secondary')});
+//
+// // Get the first document
+// cursor.next(function(err, doc) {
+// count = count - 1;
+// if(err) console.dir(err)
+// if(count == 0) return cb();
+// });
+// }
+// }
+
+server.on('connect', function(_server) {
+ // console.log("===== connect")
+ setInterval(function() {
+ _server.insert('test.test', [{a:1}], function(err, r) {
+ console.log("insert")
+ });
+ }, 1000)
+ // console.log("---------------------------------- 0")
+ // // Attempt authentication
+ // _server.auth('scram-sha-1', 'admin', 'root', 'root', function(err, r) {
+ // console.log("---------------------------------- 1")
+ // // console.dir(err)
+ // // console.dir(r)
+ //
+ // _server.insert('test.test', [{a:1}], function(err, r) {
+ // console.log("---------------------------------- 2")
+ // console.dir(err)
+ // if(r)console.dir(r.result)
+ // var name = null;
+ //
+ // _server.on('joined', function(_t, _server) {
+ // if(name == _server.name) {
+ // console.log("=========== joined :: " + _t + " :: " + _server.name)
+ // executeCursors(_server, function() {
+ // });
+ // }
+ // })
+ //
+ // // var s = _server.s.replicaSetState.secondaries[0];
+ // // s.destroy({emitClose:true});
+ // executeCursors(_server, function() {
+ // console.log("============== 0")
+ // // Attempt to force a server reconnect
+ // var s = _server.s.replicaSetState.secondaries[0];
+ // name = s.name;
+ // s.destroy({emitClose:true});
+ // // console.log("============== 1")
+ //
+ // // _server.destroy();
+ // // test.done();
+ // });
+ // });
+ // });
+});
+
+server.connect();
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/index.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/index.js
new file mode 100644
index 0000000..5011f4a
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/index.js
@@ -0,0 +1,38 @@
+// module.exports = {
+// MongoError: require('./lib/error')
+// , Server: require('./lib/topologies/server')
+// , ReplSet: require('./lib/topologies/replset')
+// , Mongos: require('./lib/topologies/mongos')
+// , Logger: require('./lib/connection/logger')
+// , Cursor: require('./lib/cursor')
+// , ReadPreference: require('./lib/topologies/read_preference')
+// , BSON: require('bson')
+// // Raw operations
+// , Query: require('./lib/connection/commands').Query
+// // Auth mechanisms
+// , MongoCR: require('./lib/auth/mongocr')
+// , X509: require('./lib/auth/x509')
+// , Plain: require('./lib/auth/plain')
+// , GSSAPI: require('./lib/auth/gssapi')
+// , ScramSHA1: require('./lib/auth/scram')
+// }
+
+module.exports = {
+ MongoError: require('./lib/error')
+ , Connection: require('./lib/connection/connection')
+ , Server: require('./lib/topologies/server')
+ , ReplSet: require('./lib/topologies/replset')
+ , Mongos: require('./lib/topologies/mongos')
+ , Logger: require('./lib/connection/logger')
+ , Cursor: require('./lib/cursor')
+ , ReadPreference: require('./lib/topologies/read_preference')
+ , BSON: require('bson')
+ // Raw operations
+ , Query: require('./lib/connection/commands').Query
+ // Auth mechanisms
+ , MongoCR: require('./lib/auth/mongocr')
+ , X509: require('./lib/auth/x509')
+ , Plain: require('./lib/auth/plain')
+ , GSSAPI: require('./lib/auth/gssapi')
+ , ScramSHA1: require('./lib/auth/scram')
+}
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/gssapi.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/gssapi.js
new file mode 100644
index 0000000..e06ce72
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/gssapi.js
@@ -0,0 +1,266 @@
+"use strict";
+
+var f = require('util').format
+ , crypto = require('crypto')
+ , require_optional = require('require_optional')
+ , Query = require('../connection/commands').Query
+ , MongoError = require('../error');
+
+var AuthSession = function(db, username, password, options) {
+ this.db = db;
+ this.username = username;
+ this.password = password;
+ this.options = options;
+}
+
+AuthSession.prototype.equal = function(session) {
+ return session.db == this.db
+ && session.username == this.username
+ && session.password == this.password;
+}
+
+// Kerberos class
+var Kerberos = null;
+var MongoAuthProcess = null;
+
+// Try to grab the Kerberos class
+try {
+ Kerberos = require_optional('kerberos').Kerberos;
+ // Authentication process for Mongo
+ MongoAuthProcess = require_optional('kerberos').processes.MongoAuthProcess
+} catch(err) {}
+
+/**
+ * Creates a new GSSAPI authentication mechanism
+ * @class
+ * @return {GSSAPI} A GSSAPI authentication mechanism instance
+ */
+var GSSAPI = function(bson) {
+ this.bson = bson;
+ this.authStore = [];
+}
+
+/**
+ * Authenticate
+ * @method
+ * @param {{Server}|{ReplSet}|{Mongos}} server Topology the authentication method is being called on
+ * @param {[]Connections} connections Connections to authenticate using this authenticator
+ * @param {string} db Name of the database
+ * @param {string} username Username
+ * @param {string} password Password
+ * @param {authResultCallback} callback The callback to return the result from the authentication
+ * @return {object}
+ */
+GSSAPI.prototype.auth = function(server, connections, db, username, password, options, callback) {
+ var self = this;
+ // We don't have the Kerberos library
+ if(Kerberos == null) return callback(new Error("Kerberos library is not installed"));
+ var gssapiServiceName = options['gssapiServiceName'] || 'mongodb';
+ // Total connections
+ var count = connections.length;
+ if(count == 0) return callback(null, null);
+
+ // Valid connections
+ var numberOfValidConnections = 0;
+ var credentialsValid = false;
+ var errorObject = null;
+
+ // For each connection we need to authenticate
+ while(connections.length > 0) {
+ // Execute MongoCR
+ var execute = function(connection) {
+ // Start Auth process for a connection
+ GSSAPIInitialize(self, db, username, password, db, gssapiServiceName, server, connection, options, function(err, r) {
+ // Adjust count
+ count = count - 1;
+
+ // If we have an error
+ if(err) {
+ errorObject = err;
+ } else if(r.result['$err']) {
+ errorObject = r.result;
+ } else if(r.result['errmsg']) {
+ errorObject = r.result;
+ } else {
+ credentialsValid = true;
+ numberOfValidConnections = numberOfValidConnections + 1;
+ }
+
+ // We have authenticated all connections
+ if(count == 0 && numberOfValidConnections > 0) {
+ // Store the auth details
+ addAuthSession(self.authStore, new AuthSession(db, username, password, options));
+ // Return correct authentication
+ callback(null, true);
+ } else if(count == 0) {
+        if(errorObject == null) errorObject = new MongoError(f("failed to authenticate using gssapi"));
+ callback(errorObject, false);
+ }
+ });
+ }
+
+ var _execute = function(_connection) {
+ process.nextTick(function() {
+ execute(_connection);
+ });
+ }
+
+ _execute(connections.shift());
+ }
+}
+
+//
+// Initialize step
+var GSSAPIInitialize = function(self, db, username, password, authdb, gssapiServiceName, server, connection, options, callback) {
+ // Create authenticator
+ var mongo_auth_process = new MongoAuthProcess(connection.host, connection.port, gssapiServiceName, options);
+
+ // Perform initialization
+ mongo_auth_process.init(username, password, function(err, context) {
+ if(err) return callback(err, false);
+
+ // Perform the first step
+ mongo_auth_process.transition('', function(err, payload) {
+ if(err) return callback(err, false);
+
+ // Call the next db step
+ MongoDBGSSAPIFirstStep(self, mongo_auth_process, payload, db, username, password, authdb, server, connection, callback);
+ });
+ });
+}
+
+//
+// Perform first step against mongodb
+var MongoDBGSSAPIFirstStep = function(self, mongo_auth_process, payload, db, username, password, authdb, server, connection, callback) {
+ // Build the sasl start command
+ var command = {
+ saslStart: 1
+ , mechanism: 'GSSAPI'
+ , payload: payload
+ , autoAuthorize: 1
+ };
+
+ // Write the commmand on the connection
+ server(connection, new Query(self.bson, "$external.$cmd", command, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ if(err) return callback(err, false);
+ var doc = r.result;
+ // Execute mongodb transition
+ mongo_auth_process.transition(r.result.payload, function(err, payload) {
+ if(err) return callback(err, false);
+
+ // MongoDB API Second Step
+ MongoDBGSSAPISecondStep(self, mongo_auth_process, payload, doc, db, username, password, authdb, server, connection, callback);
+ });
+ });
+}
+
+//
+// Perform first step against mongodb
+var MongoDBGSSAPISecondStep = function(self, mongo_auth_process, payload, doc, db, username, password, authdb, server, connection, callback) {
+ // Build Authentication command to send to MongoDB
+ var command = {
+ saslContinue: 1
+ , conversationId: doc.conversationId
+ , payload: payload
+ };
+
+ // Execute the command
+ // Write the commmand on the connection
+ server(connection, new Query(self.bson, "$external.$cmd", command, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ if(err) return callback(err, false);
+ var doc = r.result;
+ // Call next transition for kerberos
+ mongo_auth_process.transition(doc.payload, function(err, payload) {
+ if(err) return callback(err, false);
+
+ // Call the last and third step
+ MongoDBGSSAPIThirdStep(self, mongo_auth_process, payload, doc, db, username, password, authdb, server, connection, callback);
+ });
+ });
+}
+
+var MongoDBGSSAPIThirdStep = function(self, mongo_auth_process, payload, doc, db, username, password, authdb, server, connection, callback) {
+ // Build final command
+ var command = {
+ saslContinue: 1
+ , conversationId: doc.conversationId
+ , payload: payload
+ };
+
+ // Execute the command
+ server(connection, new Query(self.bson, "$external.$cmd", command, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ if(err) return callback(err, false);
+ mongo_auth_process.transition(null, function(err, payload) {
+ if(err) return callback(err, null);
+ callback(null, r);
+ });
+ });
+}
+
+// Add to store only if it does not exist
+var addAuthSession = function(authStore, session) {
+ var found = false;
+
+ for(var i = 0; i < authStore.length; i++) {
+ if(authStore[i].equal(session)) {
+ found = true;
+ break;
+ }
+ }
+
+ if(!found) authStore.push(session);
+}
+
+/**
+ * Remove authStore credentials
+ * @method
+ * @param {string} db Name of database we are removing authStore details about
+ * @return {object}
+ */
+GSSAPI.prototype.logout = function(dbName) {
+ this.authStore = this.authStore.filter(function(x) {
+ return x.db != dbName;
+ });
+}
+
+/**
+ * Re authenticate pool
+ * @method
+ * @param {{Server}|{ReplSet}|{Mongos}} server Topology the authentication method is being called on
+ * @param {[]Connections} connections Connections to authenticate using this authenticator
+ * @param {authResultCallback} callback The callback to return the result from the authentication
+ * @return {object}
+ */
+GSSAPI.prototype.reauthenticate = function(server, connections, callback) {
+ var authStore = this.authStore.slice(0);
+ var err = null;
+ var count = authStore.length;
+ if(count == 0) return callback(null, null);
+ // Iterate over all the auth details stored
+ for(var i = 0; i < authStore.length; i++) {
+    this.auth(server, connections, authStore[i].db, authStore[i].username, authStore[i].password, authStore[i].options, function(authErr) {
+      if(authErr) err = authErr;
+ count = count - 1;
+ // Done re-authenticating
+ if(count == 0) {
+ callback(err, null);
+ }
+ });
+ }
+}
+
+/**
+ * This is a result from a authentication strategy
+ *
+ * @callback authResultCallback
+ * @param {error} error An error object. Set to null if no error present
+ * @param {boolean} result The result of the authentication process
+ */
+
+module.exports = GSSAPI;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/mongocr.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/mongocr.js
new file mode 100644
index 0000000..0df70a0
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/mongocr.js
@@ -0,0 +1,185 @@
+"use strict";
+
+var f = require('util').format
+ , crypto = require('crypto')
+ , Query = require('../connection/commands').Query
+ , MongoError = require('../error');
+
+var AuthSession = function(db, username, password) {
+ this.db = db;
+ this.username = username;
+ this.password = password;
+}
+
+AuthSession.prototype.equal = function(session) {
+ return session.db == this.db
+ && session.username == this.username
+ && session.password == this.password;
+}
+
+/**
+ * Creates a new MongoCR authentication mechanism
+ * @class
+ * @return {MongoCR} A MongoCR authentication mechanism instance
+ */
+var MongoCR = function(bson) {
+ this.bson = bson;
+ this.authStore = [];
+}
+
+// Add to store only if it does not exist
+var addAuthSession = function(authStore, session) {
+ var found = false;
+
+ for(var i = 0; i < authStore.length; i++) {
+ if(authStore[i].equal(session)) {
+ found = true;
+ break;
+ }
+ }
+
+ if(!found) authStore.push(session);
+}
+
+/**
+ * Authenticate
+ * @method
+ * @param {{Server}|{ReplSet}|{Mongos}} server Topology the authentication method is being called on
+ * @param {[]Connections} connections Connections to authenticate using this authenticator
+ * @param {string} db Name of the database
+ * @param {string} username Username
+ * @param {string} password Password
+ * @param {authResultCallback} callback The callback to return the result from the authentication
+ * @return {object}
+ */
+MongoCR.prototype.auth = function(server, connections, db, username, password, callback) {
+ var self = this;
+ // Total connections
+ var count = connections.length;
+ if(count == 0) return callback(null, null);
+
+ // Valid connections
+ var numberOfValidConnections = 0;
+ var credentialsValid = false;
+ var errorObject = null;
+
+ // For each connection we need to authenticate
+ while(connections.length > 0) {
+ // Execute MongoCR
+ var executeMongoCR = function(connection) {
+ // Write the commmand on the connection
+ server(connection, new Query(self.bson, f("%s.$cmd", db), {
+ getnonce:1
+ }, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ var nonce = null;
+ var key = null;
+
+ // Adjust the number of connections left
+ // Get nonce
+ if(err == null) {
+ nonce = r.result.nonce;
+ // Use node md5 generator
+ var md5 = crypto.createHash('md5');
+ // Generate keys used for authentication
+ md5.update(username + ":mongo:" + password, 'utf8');
+ var hash_password = md5.digest('hex');
+ // Final key
+ md5 = crypto.createHash('md5');
+ md5.update(nonce + username + hash_password, 'utf8');
+ key = md5.digest('hex');
+ }
+
+ // Execute command
+ // Write the commmand on the connection
+ server(connection, new Query(self.bson, f("%s.$cmd", db), {
+ authenticate: 1, user: username, nonce: nonce, key:key
+ }, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ count = count - 1;
+
+ // If we have an error
+ if(err) {
+ errorObject = err;
+ } else if(r.result['$err']) {
+ errorObject = r.result;
+ } else if(r.result['errmsg']) {
+ errorObject = r.result;
+ } else {
+ credentialsValid = true;
+ numberOfValidConnections = numberOfValidConnections + 1;
+ }
+
+ // We have authenticated all connections
+ if(count == 0 && numberOfValidConnections > 0) {
+ // Store the auth details
+ addAuthSession(self.authStore, new AuthSession(db, username, password));
+ // Return correct authentication
+ callback(null, true);
+ } else if(count == 0) {
+ if(errorObject == null) errorObject = new MongoError(f("failed to authenticate using mongocr"));
+ callback(errorObject, false);
+ }
+ });
+ });
+ }
+
+ var _execute = function(_connection) {
+ process.nextTick(function() {
+ executeMongoCR(_connection);
+ });
+ }
+
+ _execute(connections.shift());
+ }
+}
+
+/**
+ * Remove authStore credentials
+ * @method
+ * @param {string} db Name of database we are removing authStore details about
+ * @return {object}
+ */
+MongoCR.prototype.logout = function(dbName) {
+ this.authStore = this.authStore.filter(function(x) {
+ return x.db != dbName;
+ });
+}
+
+/**
+ * Re authenticate pool
+ * @method
+ * @param {{Server}|{ReplSet}|{Mongos}} server Topology the authentication method is being called on
+ * @param {[]Connections} connections Connections to authenticate using this authenticator
+ * @param {authResultCallback} callback The callback to return the result from the authentication
+ * @return {object}
+ */
+MongoCR.prototype.reauthenticate = function(server, connections, callback) {
+ var authStore = this.authStore.slice(0);
+ var err = null;
+ var count = authStore.length;
+ if(count == 0) return callback(null, null);
+ // Iterate over all the auth details stored
+ for(var i = 0; i < authStore.length; i++) {
+    this.auth(server, connections, authStore[i].db, authStore[i].username, authStore[i].password, function(authErr) {
+      if(authErr) err = authErr;
+ count = count - 1;
+ // Done re-authenticating
+ if(count == 0) {
+ callback(err, null);
+ }
+ });
+ }
+}
+
+/**
+ * This is a result from a authentication strategy
+ *
+ * @callback authResultCallback
+ * @param {error} error An error object. Set to null if no error present
+ * @param {boolean} result The result of the authentication process
+ */
+
+module.exports = MongoCR;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/plain.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/plain.js
new file mode 100644
index 0000000..37bac30
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/plain.js
@@ -0,0 +1,170 @@
+"use strict";
+
+var f = require('util').format
+ , crypto = require('crypto')
+ , Binary = require('bson').Binary
+ , Query = require('../connection/commands').Query
+ , MongoError = require('../error');
+
+var AuthSession = function(db, username, password) {
+ this.db = db;
+ this.username = username;
+ this.password = password;
+}
+
+AuthSession.prototype.equal = function(session) {
+ return session.db == this.db
+ && session.username == this.username
+ && session.password == this.password;
+}
+
+/**
+ * Creates a new Plain authentication mechanism
+ * @class
+ * @return {Plain} A Plain authentication mechanism instance
+ */
+var Plain = function(bson) {
+ this.bson = bson;
+ this.authStore = [];
+}
+
+/**
+ * Authenticate
+ * @method
+ * @param {{Server}|{ReplSet}|{Mongos}} server Topology the authentication method is being called on
+ * @param {[]Connections} connections Connections to authenticate using this authenticator
+ * @param {string} db Name of the database
+ * @param {string} username Username
+ * @param {string} password Password
+ * @param {authResultCallback} callback The callback to return the result from the authentication
+ * @return {object}
+ */
+Plain.prototype.auth = function(server, connections, db, username, password, callback) {
+ var self = this;
+ // Total connections
+ var count = connections.length;
+ if(count == 0) return callback(null, null);
+
+ // Valid connections
+ var numberOfValidConnections = 0;
+ var credentialsValid = false;
+ var errorObject = null;
+
+ // For each connection we need to authenticate
+ while(connections.length > 0) {
+ // Execute MongoCR
+ var execute = function(connection) {
+ // Create payload
+ var payload = new Binary(f("\x00%s\x00%s", username, password));
+
+ // Let's start the sasl process
+ var command = {
+ saslStart: 1
+ , mechanism: 'PLAIN'
+ , payload: payload
+ , autoAuthorize: 1
+ };
+
+ // Let's start the process
+ server(connection, new Query(self.bson, "$external.$cmd", command, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ // Adjust count
+ count = count - 1;
+
+ // If we have an error
+ if(err) {
+ errorObject = err;
+ } else if(r.result['$err']) {
+ errorObject = r.result;
+ } else if(r.result['errmsg']) {
+ errorObject = r.result;
+ } else {
+ credentialsValid = true;
+ numberOfValidConnections = numberOfValidConnections + 1;
+ }
+
+ // We have authenticated all connections
+ if(count == 0 && numberOfValidConnections > 0) {
+ // Store the auth details
+ addAuthSession(self.authStore, new AuthSession(db, username, password));
+ // Return correct authentication
+ callback(null, true);
+ } else if(count == 0) {
+        if(errorObject == null) errorObject = new MongoError(f("failed to authenticate using plain"));
+ callback(errorObject, false);
+ }
+ });
+ }
+
+ var _execute = function(_connection) {
+ process.nextTick(function() {
+ execute(_connection);
+ });
+ }
+
+ _execute(connections.shift());
+ }
+}
+
+// Add to store only if it does not exist
+var addAuthSession = function(authStore, session) {
+ var found = false;
+
+ for(var i = 0; i < authStore.length; i++) {
+ if(authStore[i].equal(session)) {
+ found = true;
+ break;
+ }
+ }
+
+ if(!found) authStore.push(session);
+}
+
+/**
+ * Remove authStore credentials
+ * @method
+ * @param {string} dbName Name of the database whose stored credentials should be removed
+ * @return {object}
+ */
+Plain.prototype.logout = function(dbName) {
+ this.authStore = this.authStore.filter(function(x) {
+ return x.db != dbName;
+ });
+}
+
+/**
+ * Re-authenticate the connection pool
+ * @method
+ * @param {{Server}|{ReplSet}|{Mongos}} server Topology the authentication method is being called on
+ * @param {[]Connections} connections Connections to authenticate using this authenticator
+ * @param {authResultCallback} callback The callback to return the result from the authentication
+ * @return {object}
+ */
+Plain.prototype.reauthenticate = function(server, connections, callback) {
+ var authStore = this.authStore.slice(0);
+ var err = null;
+ var count = authStore.length;
+ if(count == 0) return callback(null, null);
+ // Iterate over all the auth details stored
+ for(var i = 0; i < authStore.length; i++) {
+ this.auth(server, connections, authStore[i].db, authStore[i].username, authStore[i].password, function(authErr, r) {
+ // Record the error without shadowing the outer err variable
+ if(authErr) err = authErr;
+ count = count - 1;
+ // Done re-authenticating
+ if(count == 0) {
+ callback(err, null);
+ }
+ });
+ }
+}
+
+/**
+ * This is a result from an authentication strategy
+ *
+ * @callback authResultCallback
+ * @param {error} error An error object. Set to null if no error present
+ * @param {boolean} result The result of the authentication process
+ */
+
+module.exports = Plain;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/scram.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/scram.js
new file mode 100644
index 0000000..0a620b3
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/scram.js
@@ -0,0 +1,345 @@
+"use strict";
+
+var f = require('util').format
+ , crypto = require('crypto')
+ , Query = require('../connection/commands').Query
+ , Binary = require('bson').Binary
+ , MongoError = require('../error');
+
+var AuthSession = function(db, username, password) {
+ this.db = db;
+ this.username = username;
+ this.password = password;
+}
+
+AuthSession.prototype.equal = function(session) {
+ return session.db == this.db
+ && session.username == this.username
+ && session.password == this.password;
+}
+
+var id = 0;
+
+/**
+ * Creates a new ScramSHA1 authentication mechanism
+ * @class
+ * @return {ScramSHA1} A new ScramSHA1 instance
+ */
+var ScramSHA1 = function(bson) {
+ this.bson = bson;
+ this.authStore = [];
+ this.id = id++;
+}
+
+var parsePayload = function(payload) {
+ var dict = {};
+ var parts = payload.split(',');
+
+ for(var i = 0; i < parts.length; i++) {
+ var valueParts = parts[i].split('=');
+ dict[valueParts[0]] = valueParts[1];
+ }
+
+ return dict;
+}
+
+var passwordDigest = function(username, password) {
+ if(typeof username != 'string') throw new MongoError("username must be a string");
+ if(typeof password != 'string') throw new MongoError("password must be a string");
+ if(password.length == 0) throw new MongoError("password cannot be empty");
+ // Use node md5 generator
+ var md5 = crypto.createHash('md5');
+ // Generate keys used for authentication
+ md5.update(username + ":mongo:" + password, 'utf8');
+ return md5.digest('hex');
+}
+
+// XOR two buffers
+var xor = function(a, b) {
+ if (!Buffer.isBuffer(a)) a = new Buffer(a)
+ if (!Buffer.isBuffer(b)) b = new Buffer(b)
+ var res = []
+ if (a.length > b.length) {
+ for (var i = 0; i < b.length; i++) {
+ res.push(a[i] ^ b[i])
+ }
+ } else {
+ for (var i = 0; i < a.length; i++) {
+ res.push(a[i] ^ b[i])
+ }
+ }
+ return new Buffer(res);
+}
+
+// Create a final digest
+var hi = function(data, salt, iterations) {
+ // Create digest
+ var digest = function(msg) {
+ var hmac = crypto.createHmac('sha1', data);
+ hmac.update(msg);
+ return new Buffer(hmac.digest('base64'), 'base64');
+ }
+
+ // Create variables
+ salt = Buffer.concat([salt, new Buffer('\x00\x00\x00\x01')])
+ var ui = digest(salt);
+ var u1 = ui;
+
+ for(var i = 0; i < iterations - 1; i++) {
+ u1 = digest(u1);
+ ui = xor(ui, u1);
+ }
+
+ return ui;
+}
+
+/**
+ * Authenticate
+ * @method
+ * @param {{Server}|{ReplSet}|{Mongos}} server Topology the authentication method is being called on
+ * @param {[]Connections} connections Connections to authenticate using this authenticator
+ * @param {string} db Name of the database
+ * @param {string} username Username
+ * @param {string} password Password
+ * @param {authResultCallback} callback The callback to return the result from the authentication
+ * @return {object}
+ */
+ScramSHA1.prototype.auth = function(server, connections, db, username, password, callback) {
+ var self = this;
+ // Total connections
+ var count = connections.length;
+ if(count == 0) return callback(null, null);
+
+ // Valid connections
+ var numberOfValidConnections = 0;
+ var credentialsValid = false;
+ var errorObject = null;
+
+ // Execute the SCRAM-SHA-1 conversation
+ var executeScram = function(connection) {
+ // Clean up the user
+ username = username.replace('=', "=3D").replace(',', '=2C');
+
+ // Create a random nonce
+ var nonce = crypto.randomBytes(24).toString('base64');
+ // var nonce = 'MsQUY9iw0T9fx2MUEz6LZPwGuhVvWAhc'
+ var firstBare = f("n=%s,r=%s", username, nonce);
+
+ // Build command structure
+ var cmd = {
+ saslStart: 1
+ , mechanism: 'SCRAM-SHA-1'
+ , payload: new Binary(f("n,,%s", firstBare))
+ , autoAuthorize: 1
+ }
+
+ // Handle the error
+ var handleError = function(err, r) {
+ if(err) {
+ numberOfValidConnections = numberOfValidConnections - 1;
+ errorObject = err; return false;
+ } else if(r.result['$err']) {
+ errorObject = r.result; return false;
+ } else if(r.result['errmsg']) {
+ errorObject = r.result; return false;
+ } else {
+ credentialsValid = true;
+ numberOfValidConnections = numberOfValidConnections + 1;
+ }
+
+ return true
+ }
+
+ // Finish up
+ var finish = function(_count, _numberOfValidConnections) {
+ if(_count == 0 && _numberOfValidConnections > 0) {
+ // Store the auth details
+ addAuthSession(self.authStore, new AuthSession(db, username, password));
+ // Return correct authentication
+ return callback(null, true);
+ } else if(_count == 0) {
+ if(errorObject == null) errorObject = new MongoError(f("failed to authenticate using scram"));
+ return callback(errorObject, false);
+ }
+ }
+
+ var handleEnd = function(_err, _r) {
+ // Handle any error
+ handleError(_err, _r)
+ // Adjust the number of connections
+ count = count - 1;
+ // Execute the finish
+ finish(count, numberOfValidConnections);
+ }
+
+ // Write the command on the connection
+ server(connection, new Query(self.bson, f("%s.$cmd", db), cmd, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ // Do we have an error, handle it
+ if(handleError(err, r) == false) {
+ count = count - 1;
+
+ if(count == 0 && numberOfValidConnections > 0) {
+ // Store the auth details
+ addAuthSession(self.authStore, new AuthSession(db, username, password));
+ // Return correct authentication
+ return callback(null, true);
+ } else if(count == 0) {
+ if(errorObject == null) errorObject = new MongoError(f("failed to authenticate using scram"));
+ return callback(errorObject, false);
+ }
+
+ return;
+ }
+
+ // Get the dictionary
+ var dict = parsePayload(r.result.payload.value())
+
+ // Unpack dictionary
+ var iterations = parseInt(dict.i, 10);
+ var salt = dict.s;
+ var rnonce = dict.r;
+
+ // Set up start of proof
+ var withoutProof = f("c=biws,r=%s", rnonce);
+ var passwordDig = passwordDigest(username, password);
+ var saltedPassword = hi(passwordDig
+ , new Buffer(salt, 'base64')
+ , iterations);
+
+ // Create the client key
+ var hmac = crypto.createHmac('sha1', saltedPassword);
+ hmac.update(new Buffer("Client Key"));
+ var clientKey = new Buffer(hmac.digest('base64'), 'base64');
+
+ // Create the stored key
+ var hash = crypto.createHash('sha1');
+ hash.update(clientKey);
+ var storedKey = new Buffer(hash.digest('base64'), 'base64');
+
+ // Create the authentication message
+ var authMsg = [firstBare, r.result.payload.value().toString('base64'), withoutProof].join(',');
+
+ // Create client signature
+ var hmac = crypto.createHmac('sha1', storedKey);
+ hmac.update(new Buffer(authMsg));
+ var clientSig = new Buffer(hmac.digest('base64'), 'base64');
+
+ // Create client proof
+ var clientProof = f("p=%s", new Buffer(xor(clientKey, clientSig)).toString('base64'));
+
+ // Create client final
+ var clientFinal = [withoutProof, clientProof].join(',');
+
+ // Generate server key
+ var hmac = crypto.createHmac('sha1', saltedPassword);
+ hmac.update(new Buffer('Server Key'))
+ var serverKey = new Buffer(hmac.digest('base64'), 'base64');
+
+ // Generate server signature
+ var hmac = crypto.createHmac('sha1', serverKey);
+ hmac.update(new Buffer(authMsg))
+ var serverSig = new Buffer(hmac.digest('base64'), 'base64');
+
+ //
+ // Create continue message
+ var cmd = {
+ saslContinue: 1
+ , conversationId: r.result.conversationId
+ , payload: new Binary(new Buffer(clientFinal))
+ }
+
+ //
+ // Execute sasl continue
+ // Write the command on the connection
+ server(connection, new Query(self.bson, f("%s.$cmd", db), cmd, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ if(r && r.result.done == false) {
+ var cmd = {
+ saslContinue: 1
+ , conversationId: r.result.conversationId
+ , payload: new Buffer(0)
+ }
+
+ // Write the command on the connection
+ server(connection, new Query(self.bson, f("%s.$cmd", db), cmd, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ handleEnd(err, r);
+ });
+ } else {
+ handleEnd(err, r);
+ }
+ });
+ });
+ }
+
+ var _execute = function(_connection) {
+ process.nextTick(function() {
+ executeScram(_connection);
+ });
+ }
+
+ // For each connection we need to authenticate
+ while(connections.length > 0) {
+ _execute(connections.shift());
+ }
+}
+
+// Add to store only if it does not exist
+var addAuthSession = function(authStore, session) {
+ var found = false;
+
+ for(var i = 0; i < authStore.length; i++) {
+ if(authStore[i].equal(session)) {
+ found = true;
+ break;
+ }
+ }
+
+ if(!found) authStore.push(session);
+}
+
+/**
+ * Remove authStore credentials
+ * @method
+ * @param {string} dbName Name of the database whose stored credentials should be removed
+ * @return {object}
+ */
+ScramSHA1.prototype.logout = function(dbName) {
+ this.authStore = this.authStore.filter(function(x) {
+ return x.db != dbName;
+ });
+}
+
+/**
+ * Re-authenticate the connection pool
+ * @method
+ * @param {{Server}|{ReplSet}|{Mongos}} server Topology the authentication method is being called on
+ * @param {[]Connections} connections Connections to authenticate using this authenticator
+ * @param {authResultCallback} callback The callback to return the result from the authentication
+ * @return {object}
+ */
+ScramSHA1.prototype.reauthenticate = function(server, connections, callback) {
+ var authStore = this.authStore.slice(0);
+ var count = authStore.length;
+ var err = null;
+ // No connections
+ if(count == 0) return callback(null, null);
+ // Iterate over all the auth details stored
+ for(var i = 0; i < authStore.length; i++) {
+ this.auth(server, connections, authStore[i].db, authStore[i].username, authStore[i].password, function(authErr, r) {
+ // Record the error without shadowing the outer err variable
+ if(authErr) err = authErr;
+ count = count - 1;
+ // Done re-authenticating
+ if(count == 0) {
+ callback(err, null);
+ }
+ });
+ }
+}
+
+
+module.exports = ScramSHA1;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/sspi.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/sspi.js
new file mode 100644
index 0000000..01be1bf
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/sspi.js
@@ -0,0 +1,255 @@
+"use strict";
+
+var f = require('util').format
+ , crypto = require('crypto')
+ , require_optional = require('require_optional')
+ , Query = require('../connection/commands').Query
+ , MongoError = require('../error');
+
+var AuthSession = function(db, username, password, options) {
+ this.db = db;
+ this.username = username;
+ this.password = password;
+ this.options = options;
+}
+
+AuthSession.prototype.equal = function(session) {
+ return session.db == this.db
+ && session.username == this.username
+ && session.password == this.password;
+}
+
+// Kerberos class
+var Kerberos = null;
+var MongoAuthProcess = null;
+
+// Try to grab the Kerberos class
+try {
+ Kerberos = require_optional('kerberos').Kerberos
+ // Authentication process for Mongo
+ MongoAuthProcess = require_optional('kerberos').processes.MongoAuthProcess
+} catch(err) {}
+
+/**
+ * Creates a new SSPI authentication mechanism
+ * @class
+ * @return {SSPI} A new SSPI instance
+ */
+var SSPI = function(bson) {
+ this.bson = bson;
+ this.authStore = [];
+}
+
+/**
+ * Authenticate
+ * @method
+ * @param {{Server}|{ReplSet}|{Mongos}} server Topology the authentication method is being called on
+ * @param {[]Connections} connections Connections to authenticate using this authenticator
+ * @param {string} db Name of the database
+ * @param {string} username Username
+ * @param {string} password Password
+ * @param {object} options Authentication options (e.g. gssapiServiceName)
+ * @param {authResultCallback} callback The callback to return the result from the authentication
+ * @return {object}
+ */
+SSPI.prototype.auth = function(server, connections, db, username, password, options, callback) {
+ var self = this;
+ // We don't have the Kerberos library
+ if(Kerberos == null) return callback(new Error("Kerberos library is not installed"));
+ var gssapiServiceName = options['gssapiServiceName'] || 'mongodb';
+ // Total connections
+ var count = connections.length;
+ if(count == 0) return callback(null, null);
+
+ // Valid connections
+ var numberOfValidConnections = 0;
+ var credentialsValid = false;
+ var errorObject = null;
+
+ // For each connection we need to authenticate
+ while(connections.length > 0) {
+ // Execute the GSSAPI (SSPI) authentication process
+ var execute = function(connection) {
+ // Start Auth process for a connection
+ SSIPAuthenticate(self, username, password, gssapiServiceName, server, connection, options, function(err, r) {
+ // Adjust count
+ count = count - 1;
+
+ // If we have an error
+ if(err) {
+ errorObject = err;
+ } else if(r && typeof r == 'object' && r.result['$err']) {
+ errorObject = r.result;
+ } else if(r && typeof r == 'object' && r.result['errmsg']) {
+ errorObject = r.result;
+ } else {
+ credentialsValid = true;
+ numberOfValidConnections = numberOfValidConnections + 1;
+ }
+
+ // We have authenticated all connections
+ if(count == 0 && numberOfValidConnections > 0) {
+ // Store the auth details
+ addAuthSession(self.authStore, new AuthSession(db, username, password, options));
+ // Return correct authentication
+ callback(null, true);
+ } else if(count == 0) {
+ if(errorObject == null) errorObject = new MongoError(f("failed to authenticate using sspi"));
+ callback(errorObject, false);
+ }
+ });
+ }
+
+ var _execute = function(_connection) {
+ process.nextTick(function() {
+ execute(_connection);
+ });
+ }
+
+ _execute(connections.shift());
+ }
+}
+
+var SSIPAuthenticate = function(self, username, password, gssapiServiceName, server, connection, options, callback) {
+ // Build Authentication command to send to MongoDB
+ var command = {
+ saslStart: 1
+ , mechanism: 'GSSAPI'
+ , payload: ''
+ , autoAuthorize: 1
+ };
+
+ // Create authenticator
+ var mongo_auth_process = new MongoAuthProcess(connection.host, connection.port, gssapiServiceName, options);
+
+ // Execute first sasl step
+ server(connection, new Query(self.bson, "$external.$cmd", command, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ if(err) return callback(err, false);
+ var doc = r.result;
+
+ mongo_auth_process.init(username, password, function(err) {
+ if(err) return callback(err);
+
+ mongo_auth_process.transition(doc.payload, function(err, payload) {
+ if(err) return callback(err);
+
+ // Perform the next step against mongod
+ var command = {
+ saslContinue: 1
+ , conversationId: doc.conversationId
+ , payload: payload
+ };
+
+ // Execute the command
+ server(connection, new Query(self.bson, "$external.$cmd", command, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ if(err) return callback(err, false);
+ var doc = r.result;
+
+ mongo_auth_process.transition(doc.payload, function(err, payload) {
+ if(err) return callback(err);
+
+ // Perform the next step against mongod
+ var command = {
+ saslContinue: 1
+ , conversationId: doc.conversationId
+ , payload: payload
+ };
+
+ // Execute the command
+ server(connection, new Query(self.bson, "$external.$cmd", command, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ if(err) return callback(err, false);
+ var doc = r.result;
+
+ mongo_auth_process.transition(doc.payload, function(err, payload) {
+ // Perform the next step against mongod
+ var command = {
+ saslContinue: 1
+ , conversationId: doc.conversationId
+ , payload: payload
+ };
+
+ // Execute the command
+ server(connection, new Query(self.bson, "$external.$cmd", command, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ if(err) return callback(err, false);
+ var doc = r.result;
+
+ if(doc.done) return callback(null, true);
+ callback(new Error("Authentication failed"), false);
+ });
+ });
+ });
+ });
+ });
+ });
+ });
+ });
+}
+
+// Add to store only if it does not exist
+var addAuthSession = function(authStore, session) {
+ var found = false;
+
+ for(var i = 0; i < authStore.length; i++) {
+ if(authStore[i].equal(session)) {
+ found = true;
+ break;
+ }
+ }
+
+ if(!found) authStore.push(session);
+}
+
+/**
+ * Remove authStore credentials
+ * @method
+ * @param {string} dbName Name of the database whose stored credentials should be removed
+ * @return {object}
+ */
+SSPI.prototype.logout = function(dbName) {
+ this.authStore = this.authStore.filter(function(x) {
+ return x.db != dbName;
+ });
+}
+
+/**
+ * Re-authenticate the connection pool
+ * @method
+ * @param {{Server}|{ReplSet}|{Mongos}} server Topology the authentication method is being called on
+ * @param {[]Connections} connections Connections to authenticate using this authenticator
+ * @param {authResultCallback} callback The callback to return the result from the authentication
+ * @return {object}
+ */
+SSPI.prototype.reauthenticate = function(server, connections, callback) {
+ var authStore = this.authStore.slice(0);
+ var err = null;
+ var count = authStore.length;
+ if(count == 0) return callback(null, null);
+ // Iterate over all the auth details stored
+ for(var i = 0; i < authStore.length; i++) {
+ this.auth(server, connections, authStore[i].db, authStore[i].username, authStore[i].password, authStore[i].options, function(authErr, r) {
+ // Record the error without shadowing the outer err variable
+ if(authErr) err = authErr;
+ count = count - 1;
+ // Done re-authenticating
+ if(count == 0) {
+ callback(err, null);
+ }
+ });
+ }
+}
+
+/**
+ * This is a result from an authentication strategy
+ *
+ * @callback authResultCallback
+ * @param {error} error An error object. Set to null if no error present
+ * @param {boolean} result The result of the authentication process
+ */
+
+module.exports = SSPI;
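One detail worth noting above: `kerberos` is loaded through `require_optional`, so a missing native addon surfaces as a runtime authentication error rather than a load-time crash. The same guard pattern, sketched without the native dependency (the `authenticate` helper here is hypothetical and stands in for `SSPI.prototype.auth`):

```javascript
// Guard pattern for an optional native dependency, as in sspi.js above
var Kerberos = null;
try {
  // require_optional('kerberos') in the module; plain require in this sketch
  Kerberos = require('kerberos').Kerberos;
} catch (err) {
  // Native addon not installed; Kerberos stays null
}

function authenticate(callback) {
  if (Kerberos == null) {
    return callback(new Error('Kerberos library is not installed'));
  }
  // ... the real GSSAPI/SSPI conversation would start here ...
  callback(null, true);
}

authenticate(function (err, ok) {
  console.log(err ? err.message : 'kerberos available: ' + ok);
});
```

This keeps `mongodb-core` installable on systems without a Kerberos build environment while still supporting GSSAPI where the addon compiles.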
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/x509.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/x509.js
new file mode 100644
index 0000000..e3117c0
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/auth/x509.js
@@ -0,0 +1,165 @@
+"use strict";
+
+var f = require('util').format
+ , crypto = require('crypto')
+ , Query = require('../connection/commands').Query
+ , MongoError = require('../error');
+
+var AuthSession = function(db, username, password) {
+ this.db = db;
+ this.username = username;
+ this.password = password;
+}
+
+AuthSession.prototype.equal = function(session) {
+ return session.db == this.db
+ && session.username == this.username
+ && session.password == this.password;
+}
+
+/**
+ * Creates a new X509 authentication mechanism
+ * @class
+ * @return {X509} A new X509 instance
+ */
+var X509 = function(bson) {
+ this.bson = bson;
+ this.authStore = [];
+}
+
+/**
+ * Authenticate
+ * @method
+ * @param {{Server}|{ReplSet}|{Mongos}} server Topology the authentication method is being called on
+ * @param {[]Connections} connections Connections to authenticate using this authenticator
+ * @param {string} db Name of the database
+ * @param {string} username Username
+ * @param {string} password Password
+ * @param {authResultCallback} callback The callback to return the result from the authentication
+ * @return {object}
+ */
+X509.prototype.auth = function(server, connections, db, username, password, callback) {
+ var self = this;
+ // Total connections
+ var count = connections.length;
+ if(count == 0) return callback(null, null);
+
+ // Valid connections
+ var numberOfValidConnections = 0;
+ var credentialsValid = false;
+ var errorObject = null;
+
+ // For each connection we need to authenticate
+ while(connections.length > 0) {
+ // Execute the X509 authenticate command
+ var execute = function(connection) {
+ // Let's start the sasl process
+ var command = {
+ authenticate: 1
+ , mechanism: 'MONGODB-X509'
+ , user: username
+ };
+
+ // Let's start the process
+ server(connection, new Query(self.bson, "$external.$cmd", command, {
+ numberToSkip: 0, numberToReturn: 1
+ }), function(err, r) {
+ // Adjust count
+ count = count - 1;
+
+ // If we have an error
+ if(err) {
+ errorObject = err;
+ } else if(r.result['$err']) {
+ errorObject = r.result;
+ } else if(r.result['errmsg']) {
+ errorObject = r.result;
+ } else {
+ credentialsValid = true;
+ numberOfValidConnections = numberOfValidConnections + 1;
+ }
+
+ // We have authenticated all connections
+ if(count == 0 && numberOfValidConnections > 0) {
+ // Store the auth details
+ addAuthSession(self.authStore, new AuthSession(db, username, password));
+ // Return correct authentication
+ callback(null, true);
+ } else if(count == 0) {
+ if(errorObject == null) errorObject = new MongoError(f("failed to authenticate using x509"));
+ callback(errorObject, false);
+ }
+ });
+ }
+
+ var _execute = function(_connection) {
+ process.nextTick(function() {
+ execute(_connection);
+ });
+ }
+
+ _execute(connections.shift());
+ }
+}
+
+// Add to store only if it does not exist
+var addAuthSession = function(authStore, session) {
+ var found = false;
+
+ for(var i = 0; i < authStore.length; i++) {
+ if(authStore[i].equal(session)) {
+ found = true;
+ break;
+ }
+ }
+
+ if(!found) authStore.push(session);
+}
+
+/**
+ * Remove authStore credentials
+ * @method
+ * @param {string} dbName Name of the database whose stored credentials should be removed
+ * @return {object}
+ */
+X509.prototype.logout = function(dbName) {
+ this.authStore = this.authStore.filter(function(x) {
+ return x.db != dbName;
+ });
+}
+
+/**
+ * Re-authenticate the connection pool
+ * @method
+ * @param {{Server}|{ReplSet}|{Mongos}} server Topology the authentication method is being called on
+ * @param {[]Connections} connections Connections to authenticate using this authenticator
+ * @param {authResultCallback} callback The callback to return the result from the authentication
+ * @return {object}
+ */
+X509.prototype.reauthenticate = function(server, connections, callback) {
+ var authStore = this.authStore.slice(0);
+ var err = null;
+ var count = authStore.length;
+ if(count == 0) return callback(null, null);
+ // Iterate over all the auth details stored
+ for(var i = 0; i < authStore.length; i++) {
+ this.auth(server, connections, authStore[i].db, authStore[i].username, authStore[i].password, function(authErr, r) {
+ // Record the error without shadowing the outer err variable
+ if(authErr) err = authErr;
+ count = count - 1;
+ // Done re-authenticating
+ if(count == 0) {
+ callback(err, null);
+ }
+ });
+ }
+}
+
+/**
+ * This is a result from an authentication strategy
+ *
+ * @callback authResultCallback
+ * @param {error} error An error object. Set to null if no error present
+ * @param {boolean} result The result of the authentication process
+ */
+
+module.exports = X509;
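Unlike the SASL mechanisms, `X509.prototype.auth` above sends a single `authenticate` command against `$external`; the server ties the user to the client certificate presented during the TLS handshake. A sketch of the command document it builds (the subject name below is hypothetical):

```javascript
// Shape of the MONGODB-X509 authenticate command sent above
function x509AuthCommand(username) {
  return {
    authenticate: 1,
    mechanism: 'MONGODB-X509',
    user: username
  };
}

var cmd = x509AuthCommand('CN=client,OU=apps,O=Example');
console.log(cmd.mechanism); // MONGODB-X509
```

Because no secret is exchanged in the command itself, the whole exchange fits in one request/response pair, which is why this module has no `saslContinue` handling.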
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/command_result.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/command_result.js
new file mode 100644
index 0000000..eb7b27a
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/command_result.js
@@ -0,0 +1,38 @@
+"use strict";
+
+var setProperty = require('../connection/utils').setProperty
+ , getProperty = require('../connection/utils').getProperty
+ , getSingleProperty = require('../connection/utils').getSingleProperty;
+
+/**
+ * Creates a new CommandResult instance
+ * @class
+ * @param {object} result CommandResult object
+ * @param {Connection} connection A connection instance associated with this result
+ * @return {CommandResult} A cursor instance
+ */
+var CommandResult = function(result, connection, message) {
+ this.result = result;
+ this.connection = connection;
+ this.message = message;
+}
+
+/**
+ * Convert CommandResult to JSON
+ * @method
+ * @return {object}
+ */
+CommandResult.prototype.toJSON = function() {
+ return this.result;
+}
+
+/**
+ * Convert CommandResult to String representation
+ * @method
+ * @return {string}
+ */
+CommandResult.prototype.toString = function() {
+ return JSON.stringify(this.toJSON());
+}
+
+module.exports = CommandResult;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/commands.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/commands.js
new file mode 100644
index 0000000..7999e59
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/commands.js
@@ -0,0 +1,555 @@
+"use strict";
+
+var f = require('util').format
+ , Long = require('bson').Long
+ , setProperty = require('./utils').setProperty
+ , getProperty = require('./utils').getProperty
+ , getSingleProperty = require('./utils').getSingleProperty;
+
+// Incrementing request id
+var _requestId = 0;
+
+// Wire command operation ids
+var OP_QUERY = 2004;
+var OP_GETMORE = 2005;
+var OP_KILL_CURSORS = 2007;
+
+// Query flags
+var OPTS_NONE = 0;
+var OPTS_TAILABLE_CURSOR = 2;
+var OPTS_SLAVE = 4;
+var OPTS_OPLOG_REPLAY = 8;
+var OPTS_NO_CURSOR_TIMEOUT = 16;
+var OPTS_AWAIT_DATA = 32;
+var OPTS_EXHAUST = 64;
+var OPTS_PARTIAL = 128;
+
+// Response flags
+var CURSOR_NOT_FOUND = 0;
+var QUERY_FAILURE = 2;
+var SHARD_CONFIG_STALE = 4;
+var AWAIT_CAPABLE = 8;
+
+/**************************************************************
+ * QUERY
+ **************************************************************/
+var Query = function(bson, ns, query, options) {
+ var self = this;
+ // Basic options needed to be passed in
+ if(ns == null) throw new Error("ns must be specified for query");
+ if(query == null) throw new Error("query must be specified for query");
+
+ // Validate that we are not passing 0x00 in the collection name
+ if(!!~ns.indexOf("\x00")) {
+ throw new Error("namespace cannot contain a null character");
+ }
+
+ // Basic options
+ this.bson = bson;
+ this.ns = ns;
+ this.query = query;
+
+ // Ensure empty options
+ this.options = options || {};
+
+ // Additional options
+ this.numberToSkip = options.numberToSkip || 0;
+ this.numberToReturn = options.numberToReturn || 0;
+ this.returnFieldSelector = options.returnFieldSelector || null;
+ this.requestId = Query.getRequestId();
+
+ // Serialization option
+ this.serializeFunctions = typeof options.serializeFunctions == 'boolean' ? options.serializeFunctions : false;
+ this.ignoreUndefined = typeof options.ignoreUndefined == 'boolean' ? options.ignoreUndefined : false;
+ this.maxBsonSize = options.maxBsonSize || 1024 * 1024 * 16;
+ this.checkKeys = typeof options.checkKeys == 'boolean' ? options.checkKeys : true;
+ this.batchSize = self.numberToReturn;
+
+ // Flags
+ this.tailable = false;
+ this.slaveOk = typeof options.slaveOk == 'boolean'? options.slaveOk : false;
+ this.oplogReplay = false;
+ this.noCursorTimeout = false;
+ this.awaitData = false;
+ this.exhaust = false;
+ this.partial = false;
+}
+
+//
+// Assign a new request Id
+Query.prototype.incRequestId = function() {
+ this.requestId = _requestId++;
+}
+
+//
+// Peek at the next request Id without incrementing
+Query.nextRequestId = function() {
+ return _requestId + 1;
+}
+
+//
+// Uses a single allocated buffer for the process, avoiding multiple memory allocations
+Query.prototype.toBin = function() {
+ var self = this;
+ var buffers = [];
+ var projection = null;
+
+ // Set up the flags
+ var flags = 0;
+ if(this.tailable) {
+ flags |= OPTS_TAILABLE_CURSOR;
+ }
+
+ if(this.slaveOk) {
+ flags |= OPTS_SLAVE;
+ }
+
+ if(this.oplogReplay) {
+ flags |= OPTS_OPLOG_REPLAY;
+ }
+
+ if(this.noCursorTimeout) {
+ flags |= OPTS_NO_CURSOR_TIMEOUT;
+ }
+
+ if(this.awaitData) {
+ flags |= OPTS_AWAIT_DATA;
+ }
+
+ if(this.exhaust) {
+ flags |= OPTS_EXHAUST;
+ }
+
+ if(this.partial) {
+ flags |= OPTS_PARTIAL;
+ }
+
+ // If batchSize is different to self.numberToReturn
+ if(self.batchSize != self.numberToReturn) self.numberToReturn = self.batchSize;
+
+ // Allocate write protocol header buffer
+ var header = new Buffer(
+ 4 * 4 // Header
+ + 4 // Flags
+ + Buffer.byteLength(self.ns) + 1 // namespace
+ + 4 // numberToSkip
+ + 4 // numberToReturn
+ );
+
+ // Add header to buffers
+ buffers.push(header);
+
+ // Serialize the query
+ var query = self.bson.serialize(this.query
+ , this.checkKeys
+ , true
+ , this.serializeFunctions
+ , 0, this.ignoreUndefined);
+
+ // Add query document
+ buffers.push(query);
+
+ if(self.returnFieldSelector && Object.keys(self.returnFieldSelector).length > 0) {
+ // Serialize the projection document
+ projection = self.bson.serialize(this.returnFieldSelector, this.checkKeys, true, this.serializeFunctions, 0, this.ignoreUndefined);
+ // Add projection document
+ buffers.push(projection);
+ }
+
+ // Total message size
+ var totalLength = header.length + query.length + (projection ? projection.length : 0);
+
+ // Set up the index
+ var index = 4;
+
+ // Write total document length
+ header[3] = (totalLength >> 24) & 0xff;
+ header[2] = (totalLength >> 16) & 0xff;
+ header[1] = (totalLength >> 8) & 0xff;
+ header[0] = (totalLength) & 0xff;
+
+ // Write header information requestId
+ header[index + 3] = (this.requestId >> 24) & 0xff;
+ header[index + 2] = (this.requestId >> 16) & 0xff;
+ header[index + 1] = (this.requestId >> 8) & 0xff;
+ header[index] = (this.requestId) & 0xff;
+ index = index + 4;
+
+ // Write header information responseTo
+ header[index + 3] = (0 >> 24) & 0xff;
+ header[index + 2] = (0 >> 16) & 0xff;
+ header[index + 1] = (0 >> 8) & 0xff;
+ header[index] = (0) & 0xff;
+ index = index + 4;
+
+ // Write header information OP_QUERY
+ header[index + 3] = (OP_QUERY >> 24) & 0xff;
+ header[index + 2] = (OP_QUERY >> 16) & 0xff;
+ header[index + 1] = (OP_QUERY >> 8) & 0xff;
+ header[index] = (OP_QUERY) & 0xff;
+ index = index + 4;
+
+ // Write header information flags
+ header[index + 3] = (flags >> 24) & 0xff;
+ header[index + 2] = (flags >> 16) & 0xff;
+ header[index + 1] = (flags >> 8) & 0xff;
+ header[index] = (flags) & 0xff;
+ index = index + 4;
+
+ // Write collection name
+ index = index + header.write(this.ns, index, 'utf8') + 1;
+ header[index - 1] = 0;
+
+ // Write header information numberToSkip
+ header[index + 3] = (this.numberToSkip >> 24) & 0xff;
+ header[index + 2] = (this.numberToSkip >> 16) & 0xff;
+ header[index + 1] = (this.numberToSkip >> 8) & 0xff;
+ header[index] = (this.numberToSkip) & 0xff;
+ index = index + 4;
+
+ // Write header information numberToReturn
+ header[index + 3] = (this.numberToReturn >> 24) & 0xff;
+ header[index + 2] = (this.numberToReturn >> 16) & 0xff;
+ header[index + 1] = (this.numberToReturn >> 8) & 0xff;
+ header[index] = (this.numberToReturn) & 0xff;
+ index = index + 4;
+
+ // Return the buffers
+ return buffers;
+}
+
+Query.getRequestId = function() {
+ return ++_requestId;
+}
+
+/**************************************************************
+ * GETMORE
+ **************************************************************/
+var GetMore = function(bson, ns, cursorId, opts) {
+ opts = opts || {};
+ this.numberToReturn = opts.numberToReturn || 0;
+ this.requestId = _requestId++;
+ this.bson = bson;
+ this.ns = ns;
+ this.cursorId = cursorId;
+}
+
+//
+// Uses a single allocated buffer for the process, avoiding multiple memory allocations
+GetMore.prototype.toBin = function() {
+ var length = 4 + Buffer.byteLength(this.ns) + 1 + 4 + 8 + (4 * 4);
+ // Create command buffer
+ var index = 0;
+ // Allocate buffer
+ var _buffer = new Buffer(length);
+
+ // Write header information
+ // index = write32bit(index, _buffer, length);
+ _buffer[index + 3] = (length >> 24) & 0xff;
+ _buffer[index + 2] = (length >> 16) & 0xff;
+ _buffer[index + 1] = (length >> 8) & 0xff;
+ _buffer[index] = (length) & 0xff;
+ index = index + 4;
+
+ // index = write32bit(index, _buffer, requestId);
+ _buffer[index + 3] = (this.requestId >> 24) & 0xff;
+ _buffer[index + 2] = (this.requestId >> 16) & 0xff;
+ _buffer[index + 1] = (this.requestId >> 8) & 0xff;
+ _buffer[index] = (this.requestId) & 0xff;
+ index = index + 4;
+
+ // index = write32bit(index, _buffer, 0);
+ _buffer[index + 3] = (0 >> 24) & 0xff;
+ _buffer[index + 2] = (0 >> 16) & 0xff;
+ _buffer[index + 1] = (0 >> 8) & 0xff;
+ _buffer[index] = (0) & 0xff;
+ index = index + 4;
+
+ // index = write32bit(index, _buffer, OP_GETMORE);
+ _buffer[index + 3] = (OP_GETMORE >> 24) & 0xff;
+ _buffer[index + 2] = (OP_GETMORE >> 16) & 0xff;
+ _buffer[index + 1] = (OP_GETMORE >> 8) & 0xff;
+ _buffer[index] = (OP_GETMORE) & 0xff;
+ index = index + 4;
+
+ // index = write32bit(index, _buffer, 0);
+ _buffer[index + 3] = (0 >> 24) & 0xff;
+ _buffer[index + 2] = (0 >> 16) & 0xff;
+ _buffer[index + 1] = (0 >> 8) & 0xff;
+ _buffer[index] = (0) & 0xff;
+ index = index + 4;
+
+ // Write collection name
+ index = index + _buffer.write(this.ns, index, 'utf8') + 1;
+ _buffer[index - 1] = 0;
+
+ // Write batch size
+ // index = write32bit(index, _buffer, numberToReturn);
+ _buffer[index + 3] = (this.numberToReturn >> 24) & 0xff;
+ _buffer[index + 2] = (this.numberToReturn >> 16) & 0xff;
+ _buffer[index + 1] = (this.numberToReturn >> 8) & 0xff;
+ _buffer[index] = (this.numberToReturn) & 0xff;
+ index = index + 4;
+
+ // Write cursor id
+ // index = write32bit(index, _buffer, cursorId.getLowBits());
+ _buffer[index + 3] = (this.cursorId.getLowBits() >> 24) & 0xff;
+ _buffer[index + 2] = (this.cursorId.getLowBits() >> 16) & 0xff;
+ _buffer[index + 1] = (this.cursorId.getLowBits() >> 8) & 0xff;
+ _buffer[index] = (this.cursorId.getLowBits()) & 0xff;
+ index = index + 4;
+
+ // index = write32bit(index, _buffer, cursorId.getHighBits());
+ _buffer[index + 3] = (this.cursorId.getHighBits() >> 24) & 0xff;
+ _buffer[index + 2] = (this.cursorId.getHighBits() >> 16) & 0xff;
+ _buffer[index + 1] = (this.cursorId.getHighBits() >> 8) & 0xff;
+ _buffer[index] = (this.cursorId.getHighBits()) & 0xff;
+ index = index + 4;
+
+ // Return buffer
+ return _buffer;
+}
+
+/**************************************************************
+ * KILLCURSOR
+ **************************************************************/
+var KillCursor = function(bson, cursorIds) {
+ this.requestId = _requestId++;
+ this.cursorIds = cursorIds;
+}
+
+//
+// Uses a single allocated buffer for the process, avoiding multiple memory allocations
+KillCursor.prototype.toBin = function() {
+ var length = 4 + 4 + (4 * 4) + (this.cursorIds.length * 8);
+
+ // Create command buffer
+ var index = 0;
+ var _buffer = new Buffer(length);
+
+ // Write header information
+ // index = write32bit(index, _buffer, length);
+ _buffer[index + 3] = (length >> 24) & 0xff;
+ _buffer[index + 2] = (length >> 16) & 0xff;
+ _buffer[index + 1] = (length >> 8) & 0xff;
+ _buffer[index] = (length) & 0xff;
+ index = index + 4;
+
+ // index = write32bit(index, _buffer, requestId);
+ _buffer[index + 3] = (this.requestId >> 24) & 0xff;
+ _buffer[index + 2] = (this.requestId >> 16) & 0xff;
+ _buffer[index + 1] = (this.requestId >> 8) & 0xff;
+ _buffer[index] = (this.requestId) & 0xff;
+ index = index + 4;
+
+ // index = write32bit(index, _buffer, 0);
+ _buffer[index + 3] = (0 >> 24) & 0xff;
+ _buffer[index + 2] = (0 >> 16) & 0xff;
+ _buffer[index + 1] = (0 >> 8) & 0xff;
+ _buffer[index] = (0) & 0xff;
+ index = index + 4;
+
+ // index = write32bit(index, _buffer, OP_KILL_CURSORS);
+ _buffer[index + 3] = (OP_KILL_CURSORS >> 24) & 0xff;
+ _buffer[index + 2] = (OP_KILL_CURSORS >> 16) & 0xff;
+ _buffer[index + 1] = (OP_KILL_CURSORS >> 8) & 0xff;
+ _buffer[index] = (OP_KILL_CURSORS) & 0xff;
+ index = index + 4;
+
+ // index = write32bit(index, _buffer, 0);
+ _buffer[index + 3] = (0 >> 24) & 0xff;
+ _buffer[index + 2] = (0 >> 16) & 0xff;
+ _buffer[index + 1] = (0 >> 8) & 0xff;
+ _buffer[index] = (0) & 0xff;
+ index = index + 4;
+
+ // Write number of cursor ids
+ // index = write32bit(index, _buffer, this.cursorIds.length);
+ _buffer[index + 3] = (this.cursorIds.length >> 24) & 0xff;
+ _buffer[index + 2] = (this.cursorIds.length >> 16) & 0xff;
+ _buffer[index + 1] = (this.cursorIds.length >> 8) & 0xff;
+ _buffer[index] = (this.cursorIds.length) & 0xff;
+ index = index + 4;
+
+ // Write all the cursor ids into the array
+ for(var i = 0; i < this.cursorIds.length; i++) {
+ // Write cursor id
+ // index = write32bit(index, _buffer, cursorIds[i].getLowBits());
+ _buffer[index + 3] = (this.cursorIds[i].getLowBits() >> 24) & 0xff;
+ _buffer[index + 2] = (this.cursorIds[i].getLowBits() >> 16) & 0xff;
+ _buffer[index + 1] = (this.cursorIds[i].getLowBits() >> 8) & 0xff;
+ _buffer[index] = (this.cursorIds[i].getLowBits()) & 0xff;
+ index = index + 4;
+
+ // index = write32bit(index, _buffer, cursorIds[i].getHighBits());
+ _buffer[index + 3] = (this.cursorIds[i].getHighBits() >> 24) & 0xff;
+ _buffer[index + 2] = (this.cursorIds[i].getHighBits() >> 16) & 0xff;
+ _buffer[index + 1] = (this.cursorIds[i].getHighBits() >> 8) & 0xff;
+ _buffer[index] = (this.cursorIds[i].getHighBits()) & 0xff;
+ index = index + 4;
+ }
+
+ // Return buffer
+ return _buffer;
+}
+
+var Response = function(bson, data, opts) {
+ opts = opts || {promoteLongs: true, promoteValues: true, promoteBuffers: false};
+ this.parsed = false;
+
+ //
+ // Parse Header
+ //
+ this.index = 0;
+ this.raw = data;
+ this.data = data;
+ this.bson = bson;
+ this.opts = opts;
+
+ // Read the message length
+ this.length = data[this.index] | data[this.index + 1] << 8 | data[this.index + 2] << 16 | data[this.index + 3] << 24;
+ this.index = this.index + 4;
+
+ // Fetch the request id for this reply
+ this.requestId = data[this.index] | data[this.index + 1] << 8 | data[this.index + 2] << 16 | data[this.index + 3] << 24;
+ this.index = this.index + 4;
+
+ // Fetch the id of the request that triggered the response
+ this.responseTo = data[this.index] | data[this.index + 1] << 8 | data[this.index + 2] << 16 | data[this.index + 3] << 24;
+ this.index = this.index + 4;
+
+ // Skip op-code field
+ this.index = this.index + 4;
+
+ // Unpack flags
+ this.responseFlags = data[this.index] | data[this.index + 1] << 8 | data[this.index + 2] << 16 | data[this.index + 3] << 24;
+ this.index = this.index + 4;
+
+ // Unpack the cursor
+ var lowBits = data[this.index] | data[this.index + 1] << 8 | data[this.index + 2] << 16 | data[this.index + 3] << 24;
+ this.index = this.index + 4;
+ var highBits = data[this.index] | data[this.index + 1] << 8 | data[this.index + 2] << 16 | data[this.index + 3] << 24;
+ this.index = this.index + 4;
+ // Create long object
+ this.cursorId = new Long(lowBits, highBits);
+
+ // Unpack the starting from
+ this.startingFrom = data[this.index] | data[this.index + 1] << 8 | data[this.index + 2] << 16 | data[this.index + 3] << 24;
+ this.index = this.index + 4;
+
+ // Unpack the number of objects returned
+ this.numberReturned = data[this.index] | data[this.index + 1] << 8 | data[this.index + 2] << 16 | data[this.index + 3] << 24;
+ this.index = this.index + 4;
+
+ // Preallocate document array
+ this.documents = new Array(this.numberReturned);
+
+ // Flag values
+ this.cursorNotFound = (this.responseFlags & CURSOR_NOT_FOUND) != 0;
+ this.queryFailure = (this.responseFlags & QUERY_FAILURE) != 0;
+ this.shardConfigStale = (this.responseFlags & SHARD_CONFIG_STALE) != 0;
+ this.awaitCapable = (this.responseFlags & AWAIT_CAPABLE) != 0;
+ this.promoteLongs = typeof opts.promoteLongs == 'boolean' ? opts.promoteLongs : true;
+ this.promoteValues = typeof opts.promoteValues == 'boolean' ? opts.promoteValues : true;
+ this.promoteBuffers = typeof opts.promoteBuffers == 'boolean' ? opts.promoteBuffers : false;
+}
+
+Response.prototype.isParsed = function() {
+ return this.parsed;
+}
+
+// Validation buffers
+var firstBatch = new Buffer('firstBatch', 'utf8');
+var nextBatch = new Buffer('nextBatch', 'utf8');
+var cursorId = new Buffer('id', 'utf8').toString('hex');
+
+var documentBuffers = {
+ firstBatch: firstBatch.toString('hex'),
+ nextBatch: nextBatch.toString('hex')
+};
+
+Response.prototype.parse = function(options) {
+ // Don't parse again if not needed
+ if(this.parsed) return;
+ options = options || {};
+
+ // Allow the return of raw documents instead of parsing
+ var raw = options.raw || false;
+ var documentsReturnedIn = options.documentsReturnedIn || null;
+ var promoteLongs = typeof options.promoteLongs == 'boolean'
+ ? options.promoteLongs
+ : this.opts.promoteLongs;
+ var promoteValues = typeof options.promoteValues == 'boolean'
+ ? options.promoteValues
+ : this.opts.promoteValues;
+ var promoteBuffers = typeof options.promoteBuffers == 'boolean'
+ ? options.promoteBuffers
+ : this.opts.promoteBuffers
+
+ //
+ // Single document and documentsReturnedIn set
+ //
+ if(this.numberReturned == 1 && documentsReturnedIn != null && raw) {
+ // Calculate the bson size
+ var bsonSize = this.data[this.index] | this.data[this.index + 1] << 8 | this.data[this.index + 2] << 16 | this.data[this.index + 3] << 24;
+ // Slice out the buffer containing the command result document
+ var document = this.data.slice(this.index, this.index + bsonSize);
+ // Set up field we wish to keep as raw
+ var fieldsAsRaw = {}
+ fieldsAsRaw[documentsReturnedIn] = true;
+ // Set up the options
+ var _options = {
+ promoteLongs: promoteLongs,
+ promoteValues: promoteValues,
+ promoteBuffers: promoteBuffers,
+ fieldsAsRaw: fieldsAsRaw
+ };
+
+ // Deserialize but keep the array of documents in non-parsed form
+ var doc = this.bson.deserialize(document, _options);
+
+ // Get the documents
+ this.documents = doc.cursor[documentsReturnedIn];
+ this.numberReturned = this.documents.length;
+ // Ensure we have a Long value cursor id
+ this.cursorId = typeof doc.cursor.id == 'number'
+ ? Long.fromNumber(doc.cursor.id)
+ : doc.cursor.id;
+
+ // Adjust the index
+ this.index = this.index + bsonSize;
+
+ // Set as parsed
+ this.parsed = true
+ return;
+ }
+
+ //
+ // Parse Body
+ //
+ for(var i = 0; i < this.numberReturned; i++) {
+ var bsonSize = this.data[this.index] | this.data[this.index + 1] << 8 | this.data[this.index + 2] << 16 | this.data[this.index + 3] << 24;
+ // Parse options
+ var _options = {promoteLongs: promoteLongs, promoteValues: promoteValues, promoteBuffers: promoteBuffers};
+
+ // If we have raw results specified slice the return document
+ if(raw) {
+ this.documents[i] = this.data.slice(this.index, this.index + bsonSize);
+ } else {
+ this.documents[i] = this.bson.deserialize(this.data.slice(this.index, this.index + bsonSize), _options);
+ }
+
+ // Adjust the index
+ this.index = this.index + bsonSize;
+ }
+
+ // Set parsed
+ this.parsed = true;
+}
+
+module.exports = {
+ Query: Query
+ , GetMore: GetMore
+ , Response: Response
+ , KillCursor: KillCursor
+}
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/connection.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/connection.js
new file mode 100644
index 0000000..261fd77
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/connection.js
@@ -0,0 +1,568 @@
+"use strict";
+
+var inherits = require('util').inherits
+ , EventEmitter = require('events').EventEmitter
+ , net = require('net')
+ , tls = require('tls')
+ , crypto = require('crypto')
+ , f = require('util').format
+ , debugOptions = require('./utils').debugOptions
+ , Response = require('./commands').Response
+ , MongoError = require('../error')
+ , Logger = require('./logger');
+
+var _id = 0;
+var debugFields = ['host', 'port', 'size', 'keepAlive', 'keepAliveInitialDelay', 'noDelay'
+ , 'connectionTimeout', 'socketTimeout', 'singleBufferSerializtion', 'ssl', 'ca', 'cert'
+ , 'rejectUnauthorized', 'promoteLongs', 'promoteValues', 'promoteBuffers', 'checkServerIdentity'];
+var connectionAccounting = false;
+var connections = {};
+
+/**
+ * Creates a new Connection instance
+ * @class
+ * @param {string} options.host The server host
+ * @param {number} options.port The server port
+ * @param {boolean} [options.keepAlive=true] TCP Connection keep alive enabled
+ * @param {number} [options.keepAliveInitialDelay=0] Initial delay before TCP keep alive enabled
+ * @param {boolean} [options.noDelay=true] TCP Connection no delay
+ * @param {number} [options.connectionTimeout=0] TCP Connection timeout setting
+ * @param {number} [options.socketTimeout=0] TCP Socket timeout setting
+ * @param {boolean} [options.singleBufferSerializtion=true] Serialize into a single buffer, trading off peak memory for serialization speed
+ * @param {boolean} [options.ssl=false] Use SSL for connection
+ * @param {boolean|function} [options.checkServerIdentity=true] Ensure we check server identity during SSL, set to false to disable checking. Only works for Node 0.12.x or higher. You can pass in a boolean or your own checkServerIdentity override function.
+ * @param {Buffer} [options.ca] SSL Certificate store binary buffer
+ * @param {Buffer} [options.cert] SSL Certificate binary buffer
+ * @param {Buffer} [options.key] SSL Key file binary buffer
+ * @param {string} [options.passphrase] SSL Certificate pass phrase
+ * @param {boolean} [options.rejectUnauthorized=true] Reject unauthorized server certificates
+ * @param {boolean} [options.promoteLongs=true] Convert Long values from the db into Numbers if they fit into 53 bits
+ * @param {boolean} [options.promoteValues=true] Promotes BSON values to native types where possible, set to false to only receive wrapper types.
+ * @param {boolean} [options.promoteBuffers=false] Promotes Binary BSON values to native Node Buffers.
+ * @fires Connection#connect
+ * @fires Connection#close
+ * @fires Connection#error
+ * @fires Connection#timeout
+ * @fires Connection#parseError
+ * @return {Connection} A connection instance
+ */
+var Connection = function(messageHandler, options) {
+ // Add event listener
+ EventEmitter.call(this);
+ // Set empty if no options passed
+ this.options = options || {};
+ // Identification information
+ this.id = _id++;
+ // Logger instance
+ this.logger = Logger('Connection', options);
+ // No bson parser passed in
+ if(!options.bson) throw new Error("must pass in valid bson parser");
+ // Get bson parser
+ this.bson = options.bson;
+ // Grouping tag used for debugging purposes
+ this.tag = options.tag;
+ // Message handler
+ this.messageHandler = messageHandler;
+
+ // Max BSON message size
+ this.maxBsonMessageSize = options.maxBsonMessageSize || (1024 * 1024 * 16 * 4);
+ // Debug information
+ if(this.logger.isDebug()) this.logger.debug(f('creating connection %s with options [%s]', this.id, JSON.stringify(debugOptions(debugFields, options))));
+
+ // Default options
+ this.port = options.port || 27017;
+ this.host = options.host || 'localhost';
+ this.keepAlive = typeof options.keepAlive == 'boolean' ? options.keepAlive : true;
+ this.keepAliveInitialDelay = options.keepAliveInitialDelay || 0;
+ this.noDelay = typeof options.noDelay == 'boolean' ? options.noDelay : true;
+ this.connectionTimeout = options.connectionTimeout || 0;
+ this.socketTimeout = options.socketTimeout || 0;
+
+ // If connection was destroyed
+ this.destroyed = false;
+
+ // Check if we have a domain socket
+ this.domainSocket = this.host.indexOf('\/') != -1;
+
+ // Serialize commands using function
+ this.singleBufferSerializtion = typeof options.singleBufferSerializtion == 'boolean' ? options.singleBufferSerializtion : true;
+ this.serializationFunction = this.singleBufferSerializtion ? 'toBinUnified' : 'toBin';
+
+ // SSL options
+ this.ca = options.ca || null;
+ this.cert = options.cert || null;
+ this.key = options.key || null;
+ this.passphrase = options.passphrase || null;
+ this.ssl = typeof options.ssl == 'boolean' ? options.ssl : false;
+ this.rejectUnauthorized = typeof options.rejectUnauthorized == 'boolean' ? options.rejectUnauthorized : true;
+ this.checkServerIdentity = typeof options.checkServerIdentity == 'boolean'
+ || typeof options.checkServerIdentity == 'function' ? options.checkServerIdentity : true;
+
+ // If ssl not enabled
+ if(!this.ssl) this.rejectUnauthorized = false;
+
+ // Response options
+ this.responseOptions = {
+ promoteLongs: typeof options.promoteLongs == 'boolean' ? options.promoteLongs : true,
+ promoteValues: typeof options.promoteValues == 'boolean' ? options.promoteValues : true,
+ promoteBuffers: typeof options.promoteBuffers == 'boolean' ? options.promoteBuffers: false
+ }
+
+ // Flushing
+ this.flushing = false;
+ this.queue = [];
+
+ // Internal state
+ this.connection = null;
+ this.writeStream = null;
+
+ // Create hash method
+ var hash = crypto.createHash('sha1');
+ hash.update(f('%s:%s', this.host, this.port));
+
+ // Create a hash name
+ this.hashedName = hash.digest('hex');
+
+ // All operations in flight on the connection
+ this.workItems = [];
+}
+
+inherits(Connection, EventEmitter);
+
+Connection.prototype.setSocketTimeout = function(value) {
+ if(this.connection) {
+ this.connection.setTimeout(value);
+ }
+}
+
+Connection.prototype.resetSocketTimeout = function(value) {
+ if(this.connection) {
+ this.connection.setTimeout(this.socketTimeout);
+ }
+}
+
+Connection.enableConnectionAccounting = function() {
+ connectionAccounting = true;
+ connections = {};
+}
+
+Connection.disableConnectionAccounting = function() {
+ connectionAccounting = false;
+}
+
+Connection.connections = function() {
+ return connections;
+}
+
+//
+// Connection handlers
+var errorHandler = function(self) {
+ return function(err) {
+ if(connectionAccounting) delete connections[self.id];
+ // Debug information
+ if(self.logger.isDebug()) self.logger.debug(f('connection %s for [%s:%s] errored out with [%s]', self.id, self.host, self.port, JSON.stringify(err)));
+ // Emit the error
+ if(self.listeners('error').length > 0) self.emit("error", MongoError.create(err), self);
+ }
+}
+
+var timeoutHandler = function(self) {
+ return function(err) {
+ if(connectionAccounting) delete connections[self.id];
+ // Debug information
+ if(self.logger.isDebug()) self.logger.debug(f('connection %s for [%s:%s] timed out', self.id, self.host, self.port));
+ // Emit timeout error
+ self.emit("timeout"
+ , MongoError.create(f("connection %s to %s:%s timed out", self.id, self.host, self.port))
+ , self);
+ }
+}
+
+var closeHandler = function(self) {
+ return function(hadError) {
+ if(connectionAccounting) delete connections[self.id];
+ // Debug information
+ if(self.logger.isDebug()) self.logger.debug(f('connection %s for [%s:%s] closed', self.id, self.host, self.port));
+
+ // Emit close event
+ if(!hadError) {
+ self.emit("close"
+ , MongoError.create(f("connection %s to %s:%s closed", self.id, self.host, self.port))
+ , self);
+ }
+ }
+}
+
+var dataHandler = function(self) {
+ return function(data) {
+ // Parse until we are done with the data
+ while(data.length > 0) {
+ // If we still have bytes to read on the current message
+ if(self.bytesRead > 0 && self.sizeOfMessage > 0) {
+ // Calculate the amount of remaining bytes
+ var remainingBytesToRead = self.sizeOfMessage - self.bytesRead;
+ // Check if the current chunk contains the rest of the message
+ if(remainingBytesToRead > data.length) {
+ // Copy the new data into the existing buffer (should have been allocated when we knew the message size)
+ data.copy(self.buffer, self.bytesRead);
+ // Adjust the number of bytes read so it points to the correct index in the buffer
+ self.bytesRead = self.bytesRead + data.length;
+
+ // Reset state of buffer
+ data = new Buffer(0);
+ } else {
+ // Copy the missing part of the data into our current buffer
+ data.copy(self.buffer, self.bytesRead, 0, remainingBytesToRead);
+ // Slice the overflow into a new buffer that we will then re-parse
+ data = data.slice(remainingBytesToRead);
+
+ // Emit current complete message
+ try {
+ var emitBuffer = self.buffer;
+ // Reset state of buffer
+ self.buffer = null;
+ self.sizeOfMessage = 0;
+ self.bytesRead = 0;
+ self.stubBuffer = null;
+ // Emit the buffer
+ self.messageHandler(new Response(self.bson, emitBuffer, self.responseOptions), self);
+ } catch(err) {
+ var errorObject = {err:"socketHandler", trace:err, bin:self.buffer, parseState:{
+ sizeOfMessage:self.sizeOfMessage,
+ bytesRead:self.bytesRead,
+ stubBuffer:self.stubBuffer}};
+ // We got a parse Error fire it off then keep going
+ self.emit("parseError", errorObject, self);
+ }
+ }
+ } else {
+ // Stub buffer is kept in case we don't get enough bytes to determine the
+ // size of the message (< 4 bytes)
+ if(self.stubBuffer != null && self.stubBuffer.length > 0) {
+ // If we have enough bytes to determine the message size let's do it
+ if(self.stubBuffer.length + data.length > 4) {
+ // Prepad the data
+ var newData = new Buffer(self.stubBuffer.length + data.length);
+ self.stubBuffer.copy(newData, 0);
+ data.copy(newData, self.stubBuffer.length);
+ // Reassign for parsing
+ data = newData;
+
+ // Reset state of buffer
+ self.buffer = null;
+ self.sizeOfMessage = 0;
+ self.bytesRead = 0;
+ self.stubBuffer = null;
+
+ } else {
+
+ // Add the bytes to the stub buffer
+ var newStubBuffer = new Buffer(self.stubBuffer.length + data.length);
+ // Copy existing stub buffer
+ self.stubBuffer.copy(newStubBuffer, 0);
+ // Copy missing part of the data
+ data.copy(newStubBuffer, self.stubBuffer.length);
+ // Exit parsing loop
+ data = new Buffer(0);
+ }
+ } else {
+ if(data.length > 4) {
+ // Retrieve the message size
+ // var sizeOfMessage = data.readUInt32LE(0);
+ var sizeOfMessage = data[0] | data[1] << 8 | data[2] << 16 | data[3] << 24;
+ // If we have a negative sizeOfMessage emit error and return
+ if(sizeOfMessage < 0 || sizeOfMessage > self.maxBsonMessageSize) {
+ var errorObject = {err:"socketHandler", trace:'', bin:self.buffer, parseState:{
+ sizeOfMessage: sizeOfMessage,
+ bytesRead: self.bytesRead,
+ stubBuffer: self.stubBuffer}};
+ // We got a parse Error fire it off then keep going
+ self.emit("parseError", errorObject, self);
+ return;
+ }
+
+ // Ensure that the size of message is larger than 0 and less than the max allowed
+ if(sizeOfMessage > 4 && sizeOfMessage < self.maxBsonMessageSize && sizeOfMessage > data.length) {
+ self.buffer = new Buffer(sizeOfMessage);
+ // Copy all the data into the buffer
+ data.copy(self.buffer, 0);
+ // Update bytes read
+ self.bytesRead = data.length;
+ // Update sizeOfMessage
+ self.sizeOfMessage = sizeOfMessage;
+ // Ensure stub buffer is null
+ self.stubBuffer = null;
+ // Exit parsing loop
+ data = new Buffer(0);
+
+ } else if(sizeOfMessage > 4 && sizeOfMessage < self.maxBsonMessageSize && sizeOfMessage == data.length) {
+ try {
+ var emitBuffer = data;
+ // Reset state of buffer
+ self.buffer = null;
+ self.sizeOfMessage = 0;
+ self.bytesRead = 0;
+ self.stubBuffer = null;
+ // Exit parsing loop
+ data = new Buffer(0);
+ // Emit the message
+ self.messageHandler(new Response(self.bson, emitBuffer, self.responseOptions), self);
+ } catch (err) {
+ self.emit("parseError", err, self);
+ }
+ } else if(sizeOfMessage <= 4 || sizeOfMessage > self.maxBsonMessageSize) {
+ var errorObject = {err:"socketHandler", trace:null, bin:data, parseState:{
+ sizeOfMessage:sizeOfMessage,
+ bytesRead:0,
+ buffer:null,
+ stubBuffer:null}};
+ // We got a parse Error fire it off then keep going
+ self.emit("parseError", errorObject, self);
+
+ // Clear out the state of the parser
+ self.buffer = null;
+ self.sizeOfMessage = 0;
+ self.bytesRead = 0;
+ self.stubBuffer = null;
+ // Exit parsing loop
+ data = new Buffer(0);
+ } else {
+ var emitBuffer = data.slice(0, sizeOfMessage);
+ // Reset state of buffer
+ self.buffer = null;
+ self.sizeOfMessage = 0;
+ self.bytesRead = 0;
+ self.stubBuffer = null;
+ // Copy rest of message
+ data = data.slice(sizeOfMessage);
+ // Emit the message
+ self.messageHandler(new Response(self.bson, emitBuffer, self.responseOptions), self);
+ }
+ } else {
+ // Create a buffer that contains the space for the non-complete message
+ self.stubBuffer = new Buffer(data.length)
+ // Copy the data to the stub buffer
+ data.copy(self.stubBuffer, 0);
+ // Exit parsing loop
+ data = new Buffer(0);
+ }
+ }
+ }
+ }
+ }
+}
+
+// List of socket level valid ssl options
+var legalSslSocketOptions = ['pfx', 'key', 'passphrase', 'cert', 'ca', 'ciphers'
+ , 'NPNProtocols', 'ALPNProtocols', 'servername'
+ , 'secureProtocol', 'secureContext', 'session'
+ , 'minDHSize'];
+
+function merge(options1, options2) {
+ // Merge in any allowed ssl options
+ for(var name in options2) {
+ if(options2[name] != null && legalSslSocketOptions.indexOf(name) != -1) {
+ options1[name] = options2[name];
+ }
+ }
+}
+
+/**
+ * Connect
+ * @method
+ */
+Connection.prototype.connect = function(_options) {
+ var self = this;
+ _options = _options || {};
+ // Set the connections
+ if(connectionAccounting) connections[this.id] = this;
+ // Check if we are overriding the promoteLongs
+ if(typeof _options.promoteLongs == 'boolean') {
+ self.responseOptions.promoteLongs = _options.promoteLongs;
+ self.responseOptions.promoteValues = _options.promoteValues;
+ self.responseOptions.promoteBuffers = _options.promoteBuffers;
+ }
+
+ // Create new connection instance
+ self.connection = self.domainSocket
+ ? net.createConnection(self.host)
+ : net.createConnection(self.port, self.host);
+
+ // Set the options for the connection
+ self.connection.setKeepAlive(self.keepAlive, self.keepAliveInitialDelay);
+ self.connection.setTimeout(self.connectionTimeout);
+ self.connection.setNoDelay(self.noDelay);
+
+ // If we have ssl enabled
+ if(self.ssl) {
+ var sslOptions = {
+ socket: self.connection
+ , rejectUnauthorized: self.rejectUnauthorized
+ }
+
+ // Merge in options
+ merge(sslOptions, this.options);
+ merge(sslOptions, _options);
+
+ // Set options for ssl
+ if(self.ca) sslOptions.ca = self.ca;
+ if(self.cert) sslOptions.cert = self.cert;
+ if(self.key) sslOptions.key = self.key;
+ if(self.passphrase) sslOptions.passphrase = self.passphrase;
+
+ // Override checkServerIdentity behavior
+ if(self.checkServerIdentity == false) {
+ // Skip the identity check by returning undefined as per the Node documentation
+ // https://nodejs.org/api/tls.html#tls_tls_connect_options_callback
+ sslOptions.checkServerIdentity = function(servername, cert) {
+ return undefined;
+ }
+ } else if(typeof self.checkServerIdentity == 'function') {
+ sslOptions.checkServerIdentity = self.checkServerIdentity;
+ }
+
+ // Attempt SSL connection
+ self.connection = tls.connect(self.port, self.host, sslOptions, function() {
+ // Error on auth or skip
+ if(self.connection.authorizationError && self.rejectUnauthorized) {
+ return self.emit("error", self.connection.authorizationError, self, {ssl:true});
+ }
+
+ // Set socket timeout instead of connection timeout
+ self.connection.setTimeout(self.socketTimeout);
+ // We are done emit connect
+ self.emit('connect', self);
+ });
+ self.connection.setTimeout(self.connectionTimeout);
+ } else {
+ self.connection.on('connect', function() {
+ // Set socket timeout instead of connection timeout
+ self.connection.setTimeout(self.socketTimeout);
+ // Emit connect event
+ self.emit('connect', self);
+ });
+ }
+
+ // Add handlers for events
+ self.connection.once('error', errorHandler(self));
+ self.connection.once('timeout', timeoutHandler(self));
+ self.connection.once('close', closeHandler(self));
+ self.connection.on('data', dataHandler(self));
+}
+
+/**
+ * Unref this connection
+ * @method
+ * @return {boolean}
+ */
+Connection.prototype.unref = function() {
+ if (this.connection) this.connection.unref();
+ else {
+ var self = this;
+ this.once('connect', function() {
+ self.connection.unref();
+ });
+ }
+}
+
+/**
+ * Destroy connection
+ * @method
+ */
+Connection.prototype.destroy = function() {
+ // Set the connections
+ if(connectionAccounting) delete connections[this.id];
+ if(this.connection) {
+ this.connection.end();
+ this.connection.destroy();
+ }
+
+ this.destroyed = true;
+}
+
+/**
+ * Write to connection
+ * @method
+ * @param {Command} command Command to write out need to implement toBin and toBinUnified
+ */
+Connection.prototype.write = function(buffer) {
+ // Debug Log
+ if(this.logger.isDebug()) {
+ if(!Array.isArray(buffer)) {
+ this.logger.debug(f('writing buffer [%s] to %s:%s', buffer.toString('hex'), this.host, this.port));
+ } else {
+ for(var i = 0; i < buffer.length; i++)
+ this.logger.debug(f('writing buffer [%s] to %s:%s', buffer[i].toString('hex'), this.host, this.port));
+ }
+ }
+
+ // Write out the command
+ if(!Array.isArray(buffer)) return this.connection.write(buffer, 'binary');
+ // Iterate over all buffers and write them in order to the socket
+ for(var i = 0; i < buffer.length; i++) this.connection.write(buffer[i], 'binary');
+}
+
+/**
+ * Return id of connection as a string
+ * @method
+ * @return {string}
+ */
+Connection.prototype.toString = function() {
+ return "" + this.id;
+}
+
+/**
+ * Return json object of connection
+ * @method
+ * @return {object}
+ */
+Connection.prototype.toJSON = function() {
+ return {id: this.id, host: this.host, port: this.port};
+}
+
+/**
+ * Is the connection connected
+ * @method
+ * @return {boolean}
+ */
+Connection.prototype.isConnected = function() {
+ if(this.destroyed) return false;
+ return !this.connection.destroyed && this.connection.writable;
+}
+
+/**
+ * A server connect event, used to verify that the connection is up and running
+ *
+ * @event Connection#connect
+ * @type {Connection}
+ */
+
+/**
+ * The server connection closed, all pool connections closed
+ *
+ * @event Connection#close
+ * @type {Connection}
+ */
+
+/**
+ * The server connection caused an error, all pool connections closed
+ *
+ * @event Connection#error
+ * @type {Connection}
+ */
+
+/**
+ * The server connection timed out, all pool connections closed
+ *
+ * @event Connection#timeout
+ * @type {Connection}
+ */
+
+/**
+ * The driver experienced an invalid message, all pool connections closed
+ *
+ * @event Connection#parseError
+ * @type {Connection}
+ */
+
+module.exports = Connection;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/logger.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/logger.js
new file mode 100644
index 0000000..cba8954
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/logger.js
@@ -0,0 +1,229 @@
+"use strict";
+
+var f = require('util').format
+ , MongoError = require('../error');
+
+// Filters for classes
+var classFilters = {};
+var filteredClasses = {};
+var level = null;
+// Save the process id
+var pid = process.pid;
+// current logger
+var currentLogger = null;
+
+/**
+ * Creates a new Logger instance
+ * @class
+ * @param {string} className The Class name associated with the logging instance
+ * @param {object} [options=null] Optional settings.
+ * @param {Function} [options.logger=null] Custom logger function;
+ * @param {string} [options.loggerLevel=error] Override default global log level.
+ * @return {Logger} a Logger instance.
+ */
+var Logger = function(className, options) {
+ if(!(this instanceof Logger)) return new Logger(className, options);
+ options = options || {};
+
+ // Current reference
+ var self = this;
+ this.className = className;
+
+ // Current logger
+ if(options.logger) {
+ currentLogger = options.logger;
+ } else if(currentLogger == null) {
+ currentLogger = console.log;
+ }
+
+ // Set level of logging, default is error
+ if(options.loggerLevel) {
+ level = options.loggerLevel || 'error';
+ }
+
+ // Add all class names
+ if(filteredClasses[this.className] == null) classFilters[this.className] = true;
+}
+
+/**
+ * Log a message at the debug level
+ * @method
+ * @param {string} message The message to log
+ * @param {object} object additional meta data to log
+ * @return {null}
+ */
+Logger.prototype.debug = function(message, object) {
+ if(this.isDebug()
+ && ((Object.keys(filteredClasses).length > 0 && filteredClasses[this.className])
+ || (Object.keys(filteredClasses).length == 0 && classFilters[this.className]))) {
+ var dateTime = new Date().getTime();
+ var msg = f("[%s-%s:%s] %s %s", 'DEBUG', this.className, pid, dateTime, message);
+ var state = {
+ type: 'debug', message: message, className: this.className, pid: pid, date: dateTime
+ };
+ if(object) state.meta = object;
+ currentLogger(msg, state);
+ }
+}
+
+/**
+ * Log a message at the warn level
+ * @method
+ * @param {string} message The message to log
+ * @param {object} object additional meta data to log
+ * @return {null}
+ */
+Logger.prototype.warn = function(message, object) {
+ if(this.isWarn()
+ && ((Object.keys(filteredClasses).length > 0 && filteredClasses[this.className])
+ || (Object.keys(filteredClasses).length == 0 && classFilters[this.className]))) {
+ var dateTime = new Date().getTime();
+ var msg = f("[%s-%s:%s] %s %s", 'WARN', this.className, pid, dateTime, message);
+ var state = {
+ type: 'warn', message: message, className: this.className, pid: pid, date: dateTime
+ };
+ if(object) state.meta = object;
+ currentLogger(msg, state);
+ }
+}
+
+/**
+ * Log a message at the info level
+ * @method
+ * @param {string} message The message to log
+ * @param {object} object additional meta data to log
+ * @return {null}
+ */
+Logger.prototype.info = function(message, object) {
+ if(this.isInfo()
+ && ((Object.keys(filteredClasses).length > 0 && filteredClasses[this.className])
+ || (Object.keys(filteredClasses).length == 0 && classFilters[this.className]))) {
+ var dateTime = new Date().getTime();
+ var msg = f("[%s-%s:%s] %s %s", 'INFO', this.className, pid, dateTime, message);
+ var state = {
+ type: 'info', message: message, className: this.className, pid: pid, date: dateTime
+ };
+ if(object) state.meta = object;
+ currentLogger(msg, state);
+ }
+}
+
+/**
+ * Log a message at the error level
+ * @method
+ * @param {string} message The message to log
+ * @param {object} object additional meta data to log
+ * @return {null}
+ */
+Logger.prototype.error = function(message, object) {
+ if(this.isError()
+ && ((Object.keys(filteredClasses).length > 0 && filteredClasses[this.className])
+ || (Object.keys(filteredClasses).length == 0 && classFilters[this.className]))) {
+ var dateTime = new Date().getTime();
+ var msg = f("[%s-%s:%s] %s %s", 'ERROR', this.className, pid, dateTime, message);
+ var state = {
+ type: 'error', message: message, className: this.className, pid: pid, date: dateTime
+ };
+ if(object) state.meta = object;
+ currentLogger(msg, state);
+ }
+}
+
+/**
+ * Is the logger set at info level
+ * @method
+ * @return {boolean}
+ */
+Logger.prototype.isInfo = function() {
+ return level == 'info' || level == 'debug';
+}
+
+/**
+ * Is the logger set at error level
+ * @method
+ * @return {boolean}
+ */
+Logger.prototype.isError = function() {
+ return level == 'error' || level == 'info' || level == 'debug';
+}
+
+/**
+ * Is the logger set at warn level
+ * @method
+ * @return {boolean}
+ */
+Logger.prototype.isWarn = function() {
+ return level == 'error' || level == 'warn' || level == 'info' || level == 'debug';
+}
+
+/**
+ * Is the logger set at debug level
+ * @method
+ * @return {boolean}
+ */
+Logger.prototype.isDebug = function() {
+ return level == 'debug';
+}
+
+/**
+ * Resets the logger to default settings, error and no filtered classes
+ * @method
+ * @return {null}
+ */
+Logger.reset = function() {
+ level = 'error';
+ filteredClasses = {};
+}
+
+/**
+ * Get the current logger function
+ * @method
+ * @return {function}
+ */
+Logger.currentLogger = function() {
+ return currentLogger;
+}
+
+/**
+ * Set the current logger function
+ * @method
+ * @param {function} logger Logger function.
+ * @return {null}
+ */
+Logger.setCurrentLogger = function(logger) {
+ if(typeof logger != 'function') throw new MongoError("current logger must be a function");
+ currentLogger = logger;
+}
+
+/**
+ * Set what classes to log.
+ * @method
+ * @param {string} type The type of filter (currently only class)
+ * @param {string[]} values The filters to apply
+ * @return {null}
+ */
+Logger.filter = function(type, values) {
+ if(type == 'class' && Array.isArray(values)) {
+ filteredClasses = {};
+
+ values.forEach(function(x) {
+ filteredClasses[x] = true;
+ });
+ }
+}
+
+/**
+ * Set the current log level
+ * @method
+ * @param {string} level Set current log level (debug, info, warn, error)
+ * @return {null}
+ */
+Logger.setLevel = function(_level) {
+ if(_level != 'info' && _level != 'error' && _level != 'debug' && _level != 'warn') {
+ throw new Error(f("%s is an illegal logging level", _level));
+ }
+
+ level = _level;
+}
+
+module.exports = Logger;
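For reference, the four level predicates above form a loose hierarchy: `debug` enables every predicate, while a level of `warn` satisfies only `isWarn` (note that `isError` is false at the `warn` level). A self-contained sketch, with the predicate logic transcribed from the methods above:

```javascript
"use strict";

// The four level predicates, transcribed from the Logger methods above
function isDebug(level) { return level == 'debug'; }
function isInfo(level)  { return level == 'info' || level == 'debug'; }
function isError(level) { return level == 'error' || level == 'info' || level == 'debug'; }
function isWarn(level)  { return level == 'error' || level == 'warn' || level == 'info' || level == 'debug'; }

// Tabulate which predicates fire at each level
var results = ['debug', 'info', 'warn', 'error'].map(function(level) {
  return [level, isDebug(level), isInfo(level), isWarn(level), isError(level)].join(':');
});

console.log(results.join('|'));
```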
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/pool.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/pool.js
new file mode 100644
index 0000000..4e3ad83
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/pool.js
@@ -0,0 +1,1220 @@
+"use strict";
+
+var inherits = require('util').inherits,
+ EventEmitter = require('events').EventEmitter,
+ Connection = require('./connection'),
+ MongoError = require('../error'),
+ Logger = require('./logger'),
+ f = require('util').format,
+ Query = require('./commands').Query,
+ CommandResult = require('./command_result'),
+ assign = require('../topologies/shared').assign;
+
+var MongoCR = require('../auth/mongocr')
+ , X509 = require('../auth/x509')
+ , Plain = require('../auth/plain')
+ , GSSAPI = require('../auth/gssapi')
+ , SSPI = require('../auth/sspi')
+ , ScramSHA1 = require('../auth/scram');
+
+var DISCONNECTED = 'disconnected';
+var CONNECTING = 'connecting';
+var CONNECTED = 'connected';
+var DESTROYING = 'destroying';
+var DESTROYED = 'destroyed';
+
+var _id = 0;
+
+/**
+ * Creates a new Pool instance
+ * @class
+ * @param {string} options.host The server host
+ * @param {number} options.port The server port
+ * @param {number} [options.size=5] Max server connection pool size
+ * @param {boolean} [options.reconnect=true] Server will attempt to reconnect on loss of connection
+ * @param {number} [options.reconnectTries=30] Server attempt to reconnect #times
+ * @param {number} [options.reconnectInterval=1000] Server will wait # milliseconds between retries
+ * @param {boolean} [options.keepAlive=true] TCP Connection keep alive enabled
+ * @param {number} [options.keepAliveInitialDelay=0] Initial delay before TCP keep alive enabled
+ * @param {boolean} [options.noDelay=true] TCP Connection no delay
+ * @param {number} [options.connectionTimeout=30000] TCP Connection timeout setting
+ * @param {number} [options.socketTimeout=30000] TCP Socket timeout setting
+ * @param {number} [options.monitoringSocketTimeout=30000] TCP Socket timeout setting for replicaset monitoring socket
+ * @param {boolean} [options.ssl=false] Use SSL for connection
+ * @param {boolean|function} [options.checkServerIdentity=true] Ensure we check server identify during SSL, set to false to disable checking. Only works for Node 0.12.x or higher. You can pass in a boolean or your own checkServerIdentity override function.
+ * @param {Buffer} [options.ca] SSL Certificate store binary buffer
+ * @param {Buffer} [options.cert] SSL Certificate binary buffer
+ * @param {Buffer} [options.key] SSL Key file binary buffer
+ * @param {string} [options.passPhrase] SSL Certificate pass phrase
+ * @param {boolean} [options.rejectUnauthorized=false] Reject unauthorized server certificates
+ * @param {boolean} [options.promoteLongs=true] Convert Long values from the db into Numbers if they fit into 53 bits
+ * @param {boolean} [options.promoteValues=true] Promotes BSON values to native types where possible, set to false to only receive wrapper types.
+ * @param {boolean} [options.promoteBuffers=false] Promotes Binary BSON values to native Node Buffers.
+ * @param {boolean} [options.domainsEnabled=false] Enable the wrapping of the callback in the current domain, disabled by default to avoid perf hit.
+ * @fires Pool#connect
+ * @fires Pool#close
+ * @fires Pool#error
+ * @fires Pool#timeout
+ * @fires Pool#parseError
+ * @return {Pool} A Pool instance
+ */
+var Pool = function(options) {
+ var self = this;
+ // Add event listener
+ EventEmitter.call(this);
+ // Add the options
+ this.options = assign({
+ // Host and port settings
+ host: 'localhost',
+ port: 27017,
+ // Pool default max size
+ size: 5,
+ // socket settings
+ connectionTimeout: 30000,
+ socketTimeout: 30000,
+ keepAlive: true,
+ keepAliveInitialDelay: 0,
+ noDelay: true,
+ // SSL Settings
+ ssl: false, checkServerIdentity: true,
+ ca: null, cert: null, key: null, passPhrase: null,
+ rejectUnauthorized: false,
+ promoteLongs: true,
+ promoteValues: true,
+ promoteBuffers: false,
+ // Reconnection options
+ reconnect: true,
+ reconnectInterval: 1000,
+ reconnectTries: 30,
+ // Enable domains
+ domainsEnabled: false
+ }, options);
+
+ // Identification information
+ this.id = _id++;
+ // Current reconnect retries
+ this.retriesLeft = this.options.reconnectTries;
+ this.reconnectId = null;
+ // No bson parser passed in
+ if(!options.bson || (options.bson
+ && (typeof options.bson.serialize != 'function'
+ || typeof options.bson.deserialize != 'function'))) {
+ throw new Error("must pass in valid bson parser");
+ }
+
+ // Logger instance
+ this.logger = Logger('Pool', options);
+ // Pool state
+ this.state = DISCONNECTED;
+ // Connections
+ this.availableConnections = [];
+ this.inUseConnections = [];
+ this.connectingConnections = [];
+ // Currently executing
+ this.executing = false;
+ // Operation work queue
+ this.queue = [];
+
+ // All the authProviders
+ this.authProviders = options.authProviders || {
+ 'mongocr': new MongoCR(options.bson), 'x509': new X509(options.bson)
+ , 'plain': new Plain(options.bson), 'gssapi': new GSSAPI(options.bson)
+ , 'sspi': new SSPI(options.bson), 'scram-sha-1': new ScramSHA1(options.bson)
+ }
+
+ // Are we currently authenticating
+ this.authenticating = false;
+ this.loggingout = false;
+ this.nonAuthenticatedConnections = [];
+ this.authenticatingTimestamp = null;
+ // Number of consecutive timeouts caught
+ this.numberOfConsecutiveTimeouts = 0;
+ // Current pool Index
+ this.connectionIndex = 0;
+}
+
+inherits(Pool, EventEmitter);
+
+Object.defineProperty(Pool.prototype, 'size', {
+ enumerable:true,
+ get: function() { return this.options.size; }
+});
+
+Object.defineProperty(Pool.prototype, 'connectionTimeout', {
+ enumerable:true,
+ get: function() { return this.options.connectionTimeout; }
+});
+
+Object.defineProperty(Pool.prototype, 'socketTimeout', {
+ enumerable:true,
+ get: function() { return this.options.socketTimeout; }
+});
+
+function stateTransition(self, newState) {
+ var legalTransitions = {
+ 'disconnected': [CONNECTING, DESTROYING, DISCONNECTED],
+ 'connecting': [CONNECTING, DESTROYING, CONNECTED, DISCONNECTED],
+ 'connected': [CONNECTED, DISCONNECTED, DESTROYING],
+ 'destroying': [DESTROYING, DESTROYED],
+ 'destroyed': [DESTROYED]
+ }
+
+ // Get current state
+ var legalStates = legalTransitions[self.state];
+ if(legalStates && legalStates.indexOf(newState) != -1) {
+ self.state = newState;
+ } else {
+ self.logger.error(f('Pool with id [%s] attempted an illegal state transition from [%s] to [%s]; only the following states are allowed: [%s]'
+ , self.id, self.state, newState, legalStates));
+ }
+}
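The legal-transition table above can be exercised in isolation. A minimal sketch, with the table transcribed from `stateTransition` and plain strings standing in for the state constants:

```javascript
"use strict";

// Transition table, transcribed from stateTransition above
var legalTransitions = {
  'disconnected': ['connecting', 'destroying', 'disconnected'],
  'connecting':   ['connecting', 'destroying', 'connected', 'disconnected'],
  'connected':    ['connected', 'disconnected', 'destroying'],
  'destroying':   ['destroying', 'destroyed'],
  'destroyed':    ['destroyed']
};

// Check whether a transition between two states is permitted
function isLegal(from, to) {
  var allowed = legalTransitions[from];
  return !!allowed && allowed.indexOf(to) != -1;
}

// A pool may start connecting from disconnected, but destroyed is terminal
console.log(isLegal('disconnected', 'connecting')); // true
console.log(isLegal('destroyed', 'connected'));     // false
```

When a transition is not in the table, `stateTransition` logs an error and leaves the state unchanged rather than throwing.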
+
+function authenticate(pool, auth, connection, cb) {
+ if(auth[0] === undefined) return cb(null);
+ // We need to authenticate the server
+ var mechanism = auth[0];
+ var db = auth[1];
+ // Validate if the mechanism exists
+ if(!pool.authProviders[mechanism]) {
+ throw new MongoError(f('authMechanism %s not supported', mechanism));
+ }
+
+ // Get the provider
+ var provider = pool.authProviders[mechanism];
+
+ // Authenticate using the provided mechanism
+ provider.auth.apply(provider, [write(pool), [connection], db].concat(auth.slice(2)).concat([cb]));
+}
+
+// The write function used by the authentication mechanism (bypasses external)
+function write(self) {
+ return function(connection, command, callback) {
+ // Get the raw buffer
+ // Ensure we stop auth if pool was destroyed
+ if(self.state == DESTROYED || self.state == DESTROYING) {
+ return callback(new MongoError('pool destroyed'));
+ }
+
+ // Set the connection workItem callback
+ connection.workItems.push({
+ cb: callback, command: true, requestId: command.requestId
+ });
+
+ // Write the buffer out to the connection
+ connection.write(command.toBin());
+ };
+}
+
+
+function reauthenticate(pool, connection, cb) {
+ // Authenticate
+ function authenticateAgainstProvider(pool, connection, providers, cb) {
+ // Finished re-authenticating against providers
+ if(providers.length == 0) return cb();
+ // Get the provider name
+ var provider = pool.authProviders[providers.pop()];
+
+ // Auth provider
+ provider.reauthenticate(write(pool), [connection], function(err, r) {
+ // We got an error return immediately
+ if(err) return cb(err);
+ // Continue authenticating the connection
+ authenticateAgainstProvider(pool, connection, providers, cb);
+ });
+ }
+
+ // Start re-authenticating process
+ authenticateAgainstProvider(pool, connection, Object.keys(pool.authProviders), cb);
+}
+
+function connectionFailureHandler(self, event) {
+ return function(err) {
+ if (this._connectionFailHandled) return;
+ this._connectionFailHandled = true;
+ // Destroy the connection
+ this.destroy();
+
+ // Remove the connection
+ removeConnection(self, this);
+
+ // Flush all work Items on this connection
+ while(this.workItems.length > 0) {
+ var workItem = this.workItems.shift();
+ if(workItem.cb) workItem.cb(err);
+ }
+
+ // Did we catch a timeout, increment the numberOfConsecutiveTimeouts
+ if(event == 'timeout') {
+ self.numberOfConsecutiveTimeouts = self.numberOfConsecutiveTimeouts + 1;
+
+ // Have we timed out more than reconnectTries in a row ?
+ // Force close the pool as we are trying to connect to tcp sink hole
+ if(self.numberOfConsecutiveTimeouts > self.options.reconnectTries) {
+ self.numberOfConsecutiveTimeouts = 0;
+ // Destroy all connections and pool
+ self.destroy(true);
+ // Emit close event
+ return self.emit('close', self);
+ }
+ }
+
+ // No more sockets available, propagate the event
+ if(self.socketCount() == 0) {
+ if(self.state != DESTROYED && self.state != DESTROYING) {
+ stateTransition(self, DISCONNECTED);
+ }
+
+ // Do not emit error events, they are always close events
+ // do not trigger the low level error handler in node
+ event = event == 'error' ? 'close' : event;
+ self.emit(event, err);
+ }
+
+ // Start reconnection attempts
+ if(!self.reconnectId && self.options.reconnect) {
+ self.reconnectId = setTimeout(attemptReconnect(self), self.options.reconnectInterval);
+ }
+ };
+}
+
+function attemptReconnect(self) {
+ return function() {
+ self.emit('attemptReconnect', self);
+ if(self.state == DESTROYED || self.state == DESTROYING) return;
+
+ // We are connected do not try again
+ if(self.isConnected()) {
+ self.reconnectId = null;
+ return;
+ }
+
+ // If we have failure schedule a retry
+ function _connectionFailureHandler(self, event) {
+ return function() {
+ if (this._connectionFailHandled) return;
+ this._connectionFailHandled = true;
+ // Destroy the connection
+ this.destroy();
+ // Count down the number of reconnects
+ self.retriesLeft = self.retriesLeft - 1;
+ // How many retries are left
+ if(self.retriesLeft == 0) {
+ // Destroy the instance
+ self.destroy();
+ // Emit close event
+ self.emit('reconnectFailed'
+ , new MongoError(f('failed to reconnect after %s attempts with interval %s ms', self.options.reconnectTries, self.options.reconnectInterval)));
+ } else {
+ self.reconnectId = setTimeout(attemptReconnect(self), self.options.reconnectInterval);
+ }
+ }
+ }
+
+ // Got a connect handler
+ function _connectHandler(self) {
+ return function() {
+ // Assign
+ var connection = this;
+
+ // Pool destroyed stop the connection
+ if(self.state == DESTROYED || self.state == DESTROYING) {
+ return connection.destroy();
+ }
+
+ // Clear out all handlers
+ handlers.forEach(function(event) {
+ connection.removeAllListeners(event);
+ });
+
+ // Reset reconnect id
+ self.reconnectId = null;
+
+ // Apply pool connection handlers
+ connection.on('error', connectionFailureHandler(self, 'error'));
+ connection.on('close', connectionFailureHandler(self, 'close'));
+ connection.on('timeout', connectionFailureHandler(self, 'timeout'));
+ connection.on('parseError', connectionFailureHandler(self, 'parseError'));
+
+ // Apply any auth to the connection
+ reauthenticate(self, this, function(err) {
+ // Reset retries
+ self.retriesLeft = self.options.reconnectTries;
+ // Push to available connections
+ self.availableConnections.push(connection);
+ // Emit reconnect event
+ self.emit('reconnect', self);
+ // Trigger execute to start everything up again
+ _execute(self)();
+ });
+ }
+ }
+
+ // Create a connection
+ var connection = new Connection(messageHandler(self), self.options);
+ // Add handlers
+ connection.on('close', _connectionFailureHandler(self, 'close'));
+ connection.on('error', _connectionFailureHandler(self, 'error'));
+ connection.on('timeout', _connectionFailureHandler(self, 'timeout'));
+ connection.on('parseError', _connectionFailureHandler(self, 'parseError'));
+ // On connection
+ connection.on('connect', _connectHandler(self));
+ // Attempt connection
+ connection.connect();
+ }
+}
+
+function moveConnectionBetween(connection, from, to) {
+ var index = from.indexOf(connection);
+ // Move the connection from connecting to available
+ if(index != -1) {
+ from.splice(index, 1);
+ to.push(connection);
+ }
+}
+
+function messageHandler(self) {
+ return function(message, connection) {
+ // workItem to execute
+ var workItem = null;
+
+ // Locate the workItem
+ for(var i = 0; i < connection.workItems.length; i++) {
+ if(connection.workItems[i].requestId == message.responseTo) {
+ // Get the callback
+ workItem = connection.workItems[i];
+ // Remove from list of workItems
+ connection.workItems.splice(i, 1);
+ }
+ }
+
+ // Reset timeout counter
+ self.numberOfConsecutiveTimeouts = 0;
+
+ // No workItem matched this response; nothing to dispatch
+ if(workItem == null) return;
+
+ // Reset the connection timeout if we modified it for
+ // this operation
+ if(workItem.socketTimeout) {
+ connection.resetSocketTimeout();
+ }
+
+ // Log if debug enabled
+ if(self.logger.isDebug()) {
+ self.logger.debug(f('message [%s] received from %s:%s'
+ , message.raw.toString('hex'), self.options.host, self.options.port));
+ }
+
+ // Authenticate any straggler connections
+ function authenticateStragglers(self, connection, callback) {
+ // Get any non authenticated connections
+ var connections = self.nonAuthenticatedConnections.slice(0);
+ var nonAuthenticatedConnections = self.nonAuthenticatedConnections;
+ self.nonAuthenticatedConnections = [];
+
+ // Establish if the connection need to be authenticated
+ // Add to authentication list if
+ // 1. we were in an authentication process when the operation was executed
+ // 2. our current authentication timestamp differs from the workItem one, meaning an auth has happened
+ if(connection.workItems.length == 1 && (connection.workItems[0].authenticating == true
+ || (typeof connection.workItems[0].authenticatingTimestamp == 'number'
+ && connection.workItems[0].authenticatingTimestamp != self.authenticatingTimestamp))) {
+ // Add connection to the list
+ connections.push(connection);
+ }
+
+ // No connections need to be re-authenticated
+ if(connections.length == 0) {
+ // Release the connection back to the pool
+ moveConnectionBetween(connection, self.inUseConnections, self.availableConnections);
+ // Finish
+ return callback();
+ }
+
+ // Apply re-authentication to all connections before releasing back to pool
+ var connectionCount = connections.length;
+ // Authenticate all connections
+ for(var i = 0; i < connectionCount; i++) {
+ reauthenticate(self, connections[i], function(err) {
+ connectionCount = connectionCount - 1;
+
+ if(connectionCount == 0) {
+ // Put non authenticated connections in available connections
+ self.availableConnections = self.availableConnections.concat(nonAuthenticatedConnections);
+ // Release the connection back to the pool
+ moveConnectionBetween(connection, self.inUseConnections, self.availableConnections);
+ // Return
+ callback();
+ }
+ });
+ }
+ }
+
+ function handleOperationCallback(self, cb, err, result) {
+ // No domain enabled
+ if(!self.options.domainsEnabled) {
+ return process.nextTick(function() {
+ return cb(err, result);
+ });
+ }
+
+ // Domain enabled just call the callback
+ cb(err, result);
+ }
+
+ authenticateStragglers(self, connection, function(err) {
+ // Keep executing, ensure current message handler does not stop execution
+ if(!self.executing) {
+ process.nextTick(function() {
+ _execute(self)();
+ });
+ }
+
+ // Time to dispatch the message if we have a callback
+ if(!workItem.immediateRelease) {
+ try {
+ // Parse the message according to the provided options
+ message.parse(workItem);
+ } catch(err) {
+ return handleOperationCallback(self, workItem.cb, MongoError.create(err));
+ }
+
+ // Establish if we have an error
+ if(workItem.command && message.documents[0] && (message.documents[0].ok == 0 || message.documents[0]['$err']
+ || message.documents[0]['errmsg'] || message.documents[0]['code'])) {
+ return handleOperationCallback(self, workItem.cb, MongoError.create(message.documents[0]));
+ }
+
+ // Add the connection details
+ message.hashedName = connection.hashedName;
+
+ // Return the documents
+ handleOperationCallback(self, workItem.cb, null, new CommandResult(workItem.fullResult ? message : message.documents[0], connection, message));
+ }
+ });
+ }
+}
+
+/**
+ * Return the total socket count in the pool.
+ * @method
+ * @return {Number} The number of sockets available.
+ */
+Pool.prototype.socketCount = function() {
+ return this.availableConnections.length
+ + this.inUseConnections.length
+ + this.connectingConnections.length;
+}
+
+/**
+ * Return all pool connections
+ * @method
+ * @return {Connection[]} The pool connections
+ */
+Pool.prototype.allConnections = function() {
+ return this.availableConnections
+ .concat(this.inUseConnections)
+ .concat(this.connectingConnections);
+}
+
+/**
+ * Get a pool connection (currently returns the first connection in the pool)
+ * @method
+ * @return {Connection}
+ */
+Pool.prototype.get = function() {
+ return this.allConnections()[0];
+}
+
+/**
+ * Is the pool connected
+ * @method
+ * @return {boolean}
+ */
+Pool.prototype.isConnected = function() {
+ // We are in a destroyed state
+ if(this.state == DESTROYED || this.state == DESTROYING) {
+ return false;
+ }
+
+ // Get connections
+ var connections = this.availableConnections
+ .concat(this.inUseConnections);
+
+ for(var i = 0; i < connections.length; i++) {
+ if(connections[i].isConnected()) return true;
+ }
+
+ // Might be authenticating, but we are still connected
+ if(connections.length == 0 && this.authenticating) {
+ return true;
+ }
+
+ // Not connected
+ return false;
+}
+
+/**
+ * Was the pool destroyed
+ * @method
+ * @return {boolean}
+ */
+Pool.prototype.isDestroyed = function() {
+ return this.state == DESTROYED || this.state == DESTROYING;
+}
+
+/**
+ * Is the pool in a disconnected state
+ * @method
+ * @return {boolean}
+ */
+Pool.prototype.isDisconnected = function() {
+ return this.state == DISCONNECTED;
+}
+
+/**
+ * Connect pool
+ * @method
+ */
+Pool.prototype.connect = function(auth) {
+ if(this.state != DISCONNECTED) {
+ throw new MongoError('connection in unlawful state ' + this.state);
+ }
+
+ var self = this;
+ // Transition to connecting state
+ stateTransition(this, CONNECTING);
+ // Create an array of the arguments
+ var args = Array.prototype.slice.call(arguments, 0);
+ // Create a connection
+ var connection = new Connection(messageHandler(self), this.options);
+ // Add to list of connections
+ this.connectingConnections.push(connection);
+ // Add listeners to the connection
+ connection.once('connect', function(connection) {
+ if(self.state == DESTROYED || self.state == DESTROYING) return self.destroy();
+
+ // Apply any store credentials
+ reauthenticate(self, connection, function(err) {
+ if(self.state == DESTROYED || self.state == DESTROYING) return self.destroy();
+
+ // We have an error emit it
+ if(err) {
+ // Destroy the pool
+ self.destroy();
+ // Emit the error
+ return self.emit('error', err);
+ }
+
+ // Authenticate
+ authenticate(self, args, connection, function(err) {
+ if(self.state == DESTROYED || self.state == DESTROYING) return self.destroy();
+
+ // We have an error emit it
+ if(err) {
+ // Destroy the pool
+ self.destroy();
+ // Emit the error
+ return self.emit('error', err);
+ }
+ // Set connected mode
+ stateTransition(self, CONNECTED);
+
+ // Move the active connection
+ moveConnectionBetween(connection, self.connectingConnections, self.availableConnections);
+
+ // Emit the connect event
+ self.emit('connect', self);
+ });
+ });
+ });
+
+ // Add error handlers
+ connection.once('error', connectionFailureHandler(this, 'error'));
+ connection.once('close', connectionFailureHandler(this, 'close'));
+ connection.once('timeout', connectionFailureHandler(this, 'timeout'));
+ connection.once('parseError', connectionFailureHandler(this, 'parseError'));
+
+ try {
+ connection.connect();
+ } catch(err) {
+ // SSL or something threw on connect
+ self.emit('error', err);
+ }
+}
+
+/**
+ * Authenticate using a specified mechanism
+ * @method
+ * @param {string} mechanism The Auth mechanism we are invoking
+ * @param {string} db The db we are invoking the mechanism against
+ * @param {...object} param Parameters for the specific mechanism
+ * @param {authResultCallback} callback A callback function
+ */
+Pool.prototype.auth = function(mechanism, db) {
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 0);
+ var callback = args.pop();
+
+ // If we don't have the mechanism fail
+ if(self.authProviders[mechanism] == null && mechanism != 'default') {
+ throw new MongoError(f("auth provider %s does not exist", mechanism));
+ }
+
+ // Signal that we are authenticating a new set of credentials
+ this.authenticating = true;
+ this.authenticatingTimestamp = new Date().getTime();
+
+ // Authenticate all live connections
+ function authenticateLiveConnections(self, args, cb) {
+ // Get the current viable connections
+ var connections = self.availableConnections;
+ // Allow nothing else to use the connections while we authenticate them
+ self.availableConnections = [];
+
+ var connectionsCount = connections.length;
+ var error = null;
+ // No connections available, return
+ if(connectionsCount == 0) return cb(null);
+ // Authenticate the connections
+ for(var i = 0; i < connections.length; i++) {
+ authenticate(self, args, connections[i], function(err) {
+ connectionsCount = connectionsCount - 1;
+
+ // Store the error
+ if(err) error = err;
+
+ // Processed all connections
+ if(connectionsCount == 0) {
+ // Auth finished
+ self.authenticating = false;
+ // Add the connections back to available connections
+ self.availableConnections = self.availableConnections.concat(connections);
+ // We had an error, return it
+ if(error) {
+ // Log the error
+ if(self.logger.isError()) {
+ self.logger.error(f('[%s] failed to authenticate against server %s:%s'
+ , self.id, self.options.host, self.options.port));
+ }
+
+ return cb(error);
+ }
+ cb(null);
+ }
+ });
+ }
+ }
+
+ // Wait for a logout in process to happen
+ function waitForLogout(self, cb) {
+ if(!self.loggingout) return cb();
+ setTimeout(function() {
+ waitForLogout(self, cb);
+ }, 1)
+ }
+
+ // Wait for logout to finish
+ waitForLogout(self, function() {
+ // Authenticate all live connections
+ authenticateLiveConnections(self, args, function(err) {
+ // Credentials correctly stored in auth provider if successful
+ // Any new connections will now reauthenticate correctly
+ self.authenticating = false;
+ // Return after authentication connections
+ callback(err);
+ });
+ });
+}
+
+/**
+ * Logout all users against a database
+ * @method
+ * @param {string} dbName The database name
+ * @param {authResultCallback} callback A callback function
+ */
+Pool.prototype.logout = function(dbName, callback) {
+ var self = this;
+ if(typeof dbName != 'string') {
+ throw new MongoError('logout method requires a db name as first argument');
+ }
+
+ if(typeof callback != 'function') {
+ throw new MongoError('logout method requires a callback');
+ }
+
+ // Indicate logout in process
+ this.loggingout = true;
+
+ // Get all relevant connections
+ var connections = self.availableConnections.concat(self.inUseConnections);
+ var count = connections.length;
+ // Store any error
+ var error = null;
+
+ // Send logout command over all the connections
+ for(var i = 0; i < connections.length; i++) {
+ write(self)(connections[i], new Query(this.options.bson
+ , f('%s.$cmd', dbName)
+ , {logout:1}, {numberToSkip: 0, numberToReturn: 1}), function(err, r) {
+ count = count - 1;
+ if(err) error = err;
+
+ if(count == 0) {
+ self.loggingout = false;
+ callback(error);
+ };
+ });
+ }
+}
+
+/**
+ * Unref the pool
+ * @method
+ */
+Pool.prototype.unref = function() {
+ // Get all the known connections
+ var connections = this.availableConnections
+ .concat(this.inUseConnections)
+ .concat(this.connectingConnections);
+ connections.forEach(function(c) {
+ c.unref();
+ });
+}
+
+// Events
+var events = ['error', 'close', 'timeout', 'parseError', 'connect'];
+
+// Destroy the connections
+function destroy(self, connections) {
+ // Destroy all connections
+ connections.forEach(function(c) {
+ // Remove all listeners
+ for(var i = 0; i < events.length; i++) {
+ c.removeAllListeners(events[i]);
+ }
+ // Destroy connection
+ c.destroy();
+ });
+
+ // Zero out all connections
+ self.inUseConnections = [];
+ self.availableConnections = [];
+ self.nonAuthenticatedConnections = [];
+ self.connectingConnections = [];
+
+ // Set state to destroyed
+ stateTransition(self, DESTROYED);
+}
+
+/**
+ * Destroy pool
+ * @method
+ */
+Pool.prototype.destroy = function(force) {
+ var self = this;
+ // Do not try again if the pool is already dead
+ if(this.state == DESTROYED || self.state == DESTROYING) return;
+ // Set state to destroyed
+ stateTransition(this, DESTROYING);
+
+ // Are we force closing
+ if(force) {
+ // Get all the known connections
+ var connections = self.availableConnections
+ .concat(self.inUseConnections)
+ .concat(self.nonAuthenticatedConnections)
+ .concat(self.connectingConnections);
+ return destroy(self, connections);
+ }
+
+ // Wait for the operations to drain before we close the pool
+ function checkStatus() {
+ if(self.queue.length == 0) {
+ // Get all the known connections
+ var connections = self.availableConnections
+ .concat(self.inUseConnections)
+ .concat(self.nonAuthenticatedConnections)
+ .concat(self.connectingConnections);
+
+ // Check if we have any in flight operations
+ for(var i = 0; i < connections.length; i++) {
+ // There is an operation still in flight, reschedule a
+ // check waiting for it to drain
+ if(connections[i].workItems.length > 0) {
+ return setTimeout(checkStatus, 1);
+ }
+ }
+
+ destroy(self, connections);
+ } else {
+ setTimeout(checkStatus, 1);
+ }
+ }
+
+ // Initiate drain of operations
+ checkStatus();
+}
+
+/**
+ * Write a message to MongoDB
+ * @method
+ * @return {Connection}
+ */
+Pool.prototype.write = function(commands, options, cb) {
+ var self = this;
+ // Ensure we have a callback
+ if(typeof options == 'function') {
+ cb = options;
+ }
+
+ // Always have options
+ options = options || {};
+
+ // Pool was destroyed error out
+ if(this.state == DESTROYED || this.state == DESTROYING) {
+ // Callback with an error
+ if(cb) {
+ try {
+ cb(new MongoError('pool destroyed'));
+ } catch(err) {
+ process.nextTick(function() {
+ throw err;
+ });
+ }
+ }
+
+ return;
+ }
+
+ if(this.options.domainsEnabled
+ && process.domain && typeof cb === "function") {
+ // if we have a domain bind to it
+ var oldCb = cb;
+ cb = process.domain.bind(function() {
+ // v8 - argumentsToArray one-liner
+ var args = new Array(arguments.length); for(var i = 0; i < arguments.length; i++) { args[i] = arguments[i]; }
+ // bounce off event loop so domain switch takes place
+ process.nextTick(function() {
+ oldCb.apply(null, args);
+ });
+ });
+ }
+
+ // Do we have an operation
+ var operation = {
+ cb: cb, raw: false, promoteLongs: true, promoteValues: true, promoteBuffers: false, fullResult: false
+ };
+
+ var buffer = null
+
+ if(Array.isArray(commands)) {
+ buffer = [];
+
+ for(var i = 0; i < commands.length; i++) {
+ buffer.push(commands[i].toBin());
+ }
+
+ // Get the requestId
+ operation.requestId = commands[commands.length - 1].requestId;
+ } else {
+ operation.requestId = commands.requestId;
+ buffer = commands.toBin();
+ }
+
+ // Set the buffers
+ operation.buffer = buffer;
+
+ // Set the options for the parsing
+ operation.promoteLongs = typeof options.promoteLongs == 'boolean' ? options.promoteLongs : true;
+ operation.promoteValues = typeof options.promoteValues == 'boolean' ? options.promoteValues : true;
+ operation.promoteBuffers = typeof options.promoteBuffers == 'boolean' ? options.promoteBuffers : false;
+ operation.raw = typeof options.raw == 'boolean' ? options.raw : false;
+ operation.immediateRelease = typeof options.immediateRelease == 'boolean' ? options.immediateRelease : false;
+ operation.documentsReturnedIn = options.documentsReturnedIn;
+ operation.command = typeof options.command == 'boolean' ? options.command : false;
+ operation.fullResult = typeof options.fullResult == 'boolean' ? options.fullResult : false;
+ // operation.requestId = options.requestId;
+
+ // Optional per operation socketTimeout
+ operation.socketTimeout = options.socketTimeout;
+ operation.monitoring = options.monitoring;
+
+ // We need to have a callback function unless the message returns no response
+ if(!(typeof cb == 'function') && !options.noResponse) {
+ throw new MongoError('write method must provide a callback');
+ }
+
+ // If we have a monitoring operation schedule as the very first operation
+ // Otherwise add to back of queue
+ if(options.monitoring) {
+ this.queue.unshift(operation);
+ } else {
+ this.queue.push(operation);
+ }
+
+ // Attempt to execute the operation
+ if(!self.executing) {
+ process.nextTick(function() {
+ _execute(self)();
+ });
+ }
+}
+
+// Remove connection method
+function remove(connection, connections) {
+ for(var i = 0; i < connections.length; i++) {
+ if(connections[i] === connection) {
+ connections.splice(i, 1);
+ return true;
+ }
+ }
+}
+
+function removeConnection(self, connection) {
+ if(remove(connection, self.availableConnections)) return;
+ if(remove(connection, self.inUseConnections)) return;
+ if(remove(connection, self.connectingConnections)) return;
+ if(remove(connection, self.nonAuthenticatedConnections)) return;
+}
+
+// All event handlers
+var handlers = ["close", "message", "error", "timeout", "parseError", "connect"];
+
+function _createConnection(self) {
+ var connection = new Connection(messageHandler(self), self.options);
+
+ // Push the connection
+ self.connectingConnections.push(connection);
+
+ // Handle any errors
+ var tempErrorHandler = function(_connection) {
+ return function(err) {
+ // Destroy the connection
+ _connection.destroy();
+ // Remove the connection from the connectingConnections list
+ removeConnection(self, _connection);
+ // Start reconnection attempts
+ if(!self.reconnectId && self.options.reconnect) {
+ self.reconnectId = setTimeout(attemptReconnect(self), self.options.reconnectInterval);
+ }
+ }
+ }
+
+ // Handle successful connection
+ var tempConnectHandler = function(_connection) {
+ return function() {
+ // Destroyed state return
+ if(self.state == DESTROYED || self.state == DESTROYING) {
+ // Remove the connection from the list
+ removeConnection(self, _connection);
+ return _connection.destroy();
+ }
+
+ // Destroy all event emitters
+ handlers.forEach(function(e) {
+ _connection.removeAllListeners(e);
+ });
+
+ // Add the final handlers
+ _connection.once('close', connectionFailureHandler(self, 'close'));
+ _connection.once('error', connectionFailureHandler(self, 'error'));
+ _connection.once('timeout', connectionFailureHandler(self, 'timeout'));
+ _connection.once('parseError', connectionFailureHandler(self, 'parseError'));
+
+ // Signal
+ reauthenticate(self, _connection, function(err) {
+ if(self.state == DESTROYED || self.state == DESTROYING) {
+ return _connection.destroy();
+ }
+ // Remove the connection from the connectingConnections list
+ removeConnection(self, _connection);
+
+ // Handle error
+ if(err) {
+ return _connection.destroy();
+ }
+
+      // If we are authenticating at the moment
+      // do not automatically put the connection in available connections,
+      // as we need to apply the credentials first
+ if(self.authenticating) {
+ self.nonAuthenticatedConnections.push(_connection);
+ } else {
+ // Push to available
+ self.availableConnections.push(_connection);
+ // Execute any work waiting
+ _execute(self)();
+ }
+ });
+ }
+ }
+
+ // Add all handlers
+ connection.once('close', tempErrorHandler(connection));
+ connection.once('error', tempErrorHandler(connection));
+ connection.once('timeout', tempErrorHandler(connection));
+ connection.once('parseError', tempErrorHandler(connection));
+ connection.once('connect', tempConnectHandler(connection));
+
+ // Start connection
+ connection.connect();
+}
+
+function flushMonitoringOperations(queue) {
+ for(var i = 0; i < queue.length; i++) {
+ if(queue[i].monitoring) {
+ var workItem = queue[i];
+ queue.splice(i, 1);
+ workItem.cb(new MongoError({ message: 'no connection available for monitoring', driver:true }));
+ }
+ }
+}
+
+function _execute(self) {
+ return function() {
+ if(self.state == DESTROYED) return;
+ // Already executing, skip
+ if(self.executing) return;
+ // Set pool as executing
+ self.executing = true;
+
+ // Wait for auth to clear before continuing
+ function waitForAuth(cb) {
+ if(!self.authenticating) return cb();
+      // Wait for a millisecond and try again
+ setTimeout(function() {
+ waitForAuth(cb);
+ }, 1);
+ }
+
+ // Block on any auth in process
+ waitForAuth(function() {
+ // As long as we have available connections
+ while(true) {
+        // Total available connections
+ var totalConnections = self.availableConnections.length
+ + self.connectingConnections.length
+ + self.inUseConnections.length;
+
+        // No connections available, flush any monitoring ops
+ if(self.availableConnections.length == 0) {
+ // Flush any monitoring operations
+ flushMonitoringOperations(self.queue);
+ break;
+ }
+
+        // Queue is empty, break
+ if(self.queue.length == 0) {
+ break;
+ }
+
+ // Get a connection
+ // var connection = self.availableConnections.pop();
+ var connection = self.availableConnections[self.connectionIndex++ % self.availableConnections.length];
+ // Is the connection connected
+ if(connection.isConnected()) {
+ // Get the next work item
+ var workItem = self.queue.shift();
+
+ // Get actual binary commands
+ var buffer = workItem.buffer;
+
+ // Set current status of authentication process
+ workItem.authenticating = self.authenticating;
+ workItem.authenticatingTimestamp = self.authenticatingTimestamp;
+
+ // Add current associated callback to the connection
+ // connection.workItem = workItem
+ connection.workItems.push(workItem);
+
+ // We have a custom socketTimeout
+ if(!workItem.immediateRelease && typeof workItem.socketTimeout == 'number') {
+ connection.setSocketTimeout(workItem.socketTimeout);
+ }
+
+ // Put operation on the wire
+ if(Array.isArray(buffer)) {
+ for(var i = 0; i < buffer.length; i++) {
+ connection.write(buffer[i])
+ }
+ } else {
+ connection.write(buffer);
+ }
+
+ if(workItem.immediateRelease && self.authenticating) {
+ self.nonAuthenticatedConnections.push(connection);
+ }
+
+ // Have we not reached the max connection size yet
+ if(totalConnections < self.options.size
+ && self.queue.length > 0) {
+ // Create a new connection
+ _createConnection(self);
+ }
+ } else {
+ flushMonitoringOperations(self.queue);
+ }
+ }
+ });
+
+ self.executing = false;
+ }
+}
+
+var connectionId = 0
+/**
+ * A server connect event, used to verify that the connection is up and running
+ *
+ * @event Pool#connect
+ * @type {Pool}
+ */
+
+/**
+ * A server reconnect event, used to verify that pool reconnected.
+ *
+ * @event Pool#reconnect
+ * @type {Pool}
+ */
+
+/**
+ * The server connection closed, all pool connections closed
+ *
+ * @event Pool#close
+ * @type {Pool}
+ */
+
+/**
+ * The server connection caused an error, all pool connections closed
+ *
+ * @event Pool#error
+ * @type {Pool}
+ */
+
+/**
+ * The server connection timed out, all pool connections closed
+ *
+ * @event Pool#timeout
+ * @type {Pool}
+ */
+
+/**
+ * The driver experienced an invalid message, all pool connections closed
+ *
+ * @event Pool#parseError
+ * @type {Pool}
+ */
+
+/**
+ * The driver attempted to reconnect
+ *
+ * @event Pool#attemptReconnect
+ * @type {Pool}
+ */
+
+/**
+ * The driver exhausted all reconnect attempts
+ *
+ * @event Pool#reconnectFailed
+ * @type {Pool}
+ */
+
+module.exports = Pool;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/utils.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/utils.js
new file mode 100644
index 0000000..019ef19
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/connection/utils.js
@@ -0,0 +1,67 @@
+"use strict";
+
+// Set property function
+var setProperty = function(obj, prop, flag, values) {
+ Object.defineProperty(obj, prop.name, {
+ enumerable:true,
+ set: function(value) {
+ if(typeof value != 'boolean') throw new Error(f("%s required a boolean", prop.name));
+ // Flip the bit to 1
+ if(value == true) values.flags |= flag;
+ // Flip the bit to 0 if it's set, otherwise ignore
+ if(value == false && (values.flags & flag) == flag) values.flags ^= flag;
+ prop.value = value;
+ }
+ , get: function() { return prop.value; }
+ });
+}
+
+// Set property function
+var getProperty = function(obj, propName, fieldName, values, func) {
+ Object.defineProperty(obj, propName, {
+ enumerable:true,
+ get: function() {
+ // Not parsed yet, parse it
+ if(values[fieldName] == null && obj.isParsed && !obj.isParsed()) {
+ obj.parse();
+ }
+
+ // Do we have a post processing function
+ if(typeof func == 'function') return func(values[fieldName]);
+ // Return raw value
+ return values[fieldName];
+ }
+ });
+}
+
+// Set simple property
+var getSingleProperty = function(obj, name, value) {
+ Object.defineProperty(obj, name, {
+ enumerable:true,
+ get: function() {
+ return value
+ }
+ });
+}
+
+// Shallow copy
+var copy = function(fObj, tObj) {
+ tObj = tObj || {};
+ for(var name in fObj) tObj[name] = fObj[name];
+ return tObj;
+}
+
+var debugOptions = function(debugFields, options) {
+ var finaloptions = {};
+ debugFields.forEach(function(n) {
+ finaloptions[n] = options[n];
+ });
+
+ return finaloptions;
+}
+
+exports.setProperty = setProperty;
+exports.getProperty = getProperty;
+exports.getSingleProperty = getSingleProperty;
+exports.copy = copy;
+exports.debugOptions = debugOptions;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/cursor.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/cursor.js
new file mode 100644
index 0000000..12c9c27
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/cursor.js
@@ -0,0 +1,699 @@
+"use strict";
+
+var Long = require('bson').Long
+ , Logger = require('./connection/logger')
+ , MongoError = require('./error')
+ , f = require('util').format;
+
+/**
+ * This is a cursor results callback
+ *
+ * @callback resultCallback
+ * @param {error} error An error object. Set to null if no error present
+ * @param {object} document
+ */
+
+/**
+ * @fileOverview The **Cursor** class is an internal class that embodies a cursor on MongoDB
+ * allowing for iteration over the results returned from the underlying query.
+ *
+ * **Cursors cannot be instantiated directly**
+ * @example
+ * var Server = require('mongodb-core').Server
+ * , ReadPreference = require('mongodb-core').ReadPreference
+ * , assert = require('assert');
+ *
+ * var server = new Server({host: 'localhost', port: 27017});
+ * // Wait for the connection event
+ * server.on('connect', function(server) {
+ * assert.equal(null, err);
+ *
+ * // Execute the write
+ * var cursor = _server.cursor('integration_tests.inserts_example4', {
+ * find: 'integration_tests.example4'
+ * , query: {a:1}
+ * }, {
+ *      readPreference: new ReadPreference('secondary')
+ * });
+ *
+ * // Get the first document
+ * cursor.next(function(err, doc) {
+ * assert.equal(null, err);
+ * server.destroy();
+ * });
+ * });
+ *
+ * // Start connecting
+ * server.connect();
+ */
+
+/**
+ * Creates a new Cursor, not to be used directly
+ * @class
+ * @param {object} bson An instance of the BSON parser
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {object|Long} cmd The selector (can be a command or a cursorId)
+ * @param {object} [options=null] Optional settings.
+ * @param {object} [options.batchSize=1000] Batchsize for the operation
+ * @param {array} [options.documents=[]] Initial documents list for cursor
+ * @param {object} [options.transforms=null] Transform methods for the cursor results
+ * @param {function} [options.transforms.query] Transform the value returned from the initial query
+ * @param {function} [options.transforms.doc] Transform each document returned from Cursor.prototype.next
+ * @param {object} topology The server topology instance.
+ * @param {object} topologyOptions The server topology options.
+ * @return {Cursor} A cursor instance
+ * @property {number} cursorBatchSize The current cursorBatchSize for the cursor
+ * @property {number} cursorLimit The current cursorLimit for the cursor
+ * @property {number} cursorSkip The current cursorSkip for the cursor
+ */
+var Cursor = function(bson, ns, cmd, options, topology, topologyOptions) {
+ options = options || {};
+ // Cursor reference
+ var self = this;
+ // Initial query
+ var query = null;
+
+ // Cursor pool
+ this.pool = null;
+ // Cursor server
+ this.server = null;
+
+ // Do we have a not connected handler
+ this.disconnectHandler = options.disconnectHandler;
+
+ // Set local values
+ this.bson = bson;
+ this.ns = ns;
+ this.cmd = cmd;
+ this.options = options;
+ this.topology = topology;
+
+ // All internal state
+ this.cursorState = {
+ cursorId: null
+ , cmd: cmd
+ , documents: options.documents || []
+ , cursorIndex: 0
+ , dead: false
+ , killed: false
+ , init: false
+ , notified: false
+ , limit: options.limit || cmd.limit || 0
+ , skip: options.skip || cmd.skip || 0
+ , batchSize: options.batchSize || cmd.batchSize || 1000
+ , currentLimit: 0
+ // Result field name if not a cursor (contains the array of results)
+ , transforms: options.transforms
+ }
+
+ // Add promoteLong to cursor state
+ if(typeof topologyOptions.promoteLongs == 'boolean') {
+ this.cursorState.promoteLongs = topologyOptions.promoteLongs;
+ } else if(typeof options.promoteLongs == 'boolean') {
+ this.cursorState.promoteLongs = options.promoteLongs;
+ }
+
+ // Add promoteValues to cursor state
+ if(typeof topologyOptions.promoteValues == 'boolean') {
+ this.cursorState.promoteValues = topologyOptions.promoteValues;
+ } else if(typeof options.promoteValues == 'boolean') {
+ this.cursorState.promoteValues = options.promoteValues;
+ }
+
+ // Add promoteBuffers to cursor state
+ if(typeof topologyOptions.promoteBuffers == 'boolean') {
+ this.cursorState.promoteBuffers = topologyOptions.promoteBuffers;
+ } else if(typeof options.promoteBuffers == 'boolean') {
+ this.cursorState.promoteBuffers = options.promoteBuffers;
+ }
+
+ // Logger
+ this.logger = Logger('Cursor', topologyOptions);
+
+ //
+ // Did we pass in a cursor id
+ if(typeof cmd == 'number') {
+ this.cursorState.cursorId = Long.fromNumber(cmd);
+ this.cursorState.lastCursorId = this.cursorState.cursorId;
+ } else if(cmd instanceof Long) {
+ this.cursorState.cursorId = cmd;
+ this.cursorState.lastCursorId = cmd;
+ }
+}
+
+Cursor.prototype.setCursorBatchSize = function(value) {
+ this.cursorState.batchSize = value;
+}
+
+Cursor.prototype.cursorBatchSize = function() {
+ return this.cursorState.batchSize;
+}
+
+Cursor.prototype.setCursorLimit = function(value) {
+ this.cursorState.limit = value;
+}
+
+Cursor.prototype.cursorLimit = function() {
+ return this.cursorState.limit;
+}
+
+Cursor.prototype.setCursorSkip = function(value) {
+ this.cursorState.skip = value;
+}
+
+Cursor.prototype.cursorSkip = function() {
+ return this.cursorState.skip;
+}
+
+//
+// Handle callback (including any exceptions thrown)
+var handleCallback = function(callback, err, result) {
+ try {
+ callback(err, result);
+ } catch(err) {
+ process.nextTick(function() {
+ throw err;
+ });
+ }
+}
+
+// Internal methods
+Cursor.prototype._find = function(callback) {
+ var self = this;
+
+ if(self.logger.isDebug()) {
+ self.logger.debug(f("issue initial query [%s] with flags [%s]"
+ , JSON.stringify(self.cmd)
+ , JSON.stringify(self.query)));
+ }
+
+ var queryCallback = function(err, r) {
+ if(err) return callback(err);
+
+ // Get the raw message
+ var result = r.message;
+
+ // Query failure bit set
+ if(result.queryFailure) {
+ return callback(MongoError.create(result.documents[0]), null);
+ }
+
+ // Check if we have a command cursor
+ if(Array.isArray(result.documents) && result.documents.length == 1
+ && (!self.cmd.find || (self.cmd.find && self.cmd.virtual == false))
+ && (result.documents[0].cursor != 'string'
+ || result.documents[0]['$err']
+ || result.documents[0]['errmsg']
+ || Array.isArray(result.documents[0].result))
+ ) {
+
+      // We have an error document, return the error
+ if(result.documents[0]['$err']
+ || result.documents[0]['errmsg']) {
+ return callback(MongoError.create(result.documents[0]), null);
+ }
+
+ // We have a cursor document
+ if(result.documents[0].cursor != null
+ && typeof result.documents[0].cursor != 'string') {
+ var id = result.documents[0].cursor.id;
+ // If we have a namespace change set the new namespace for getmores
+ if(result.documents[0].cursor.ns) {
+ self.ns = result.documents[0].cursor.ns;
+ }
+ // Promote id to long if needed
+ self.cursorState.cursorId = typeof id == 'number' ? Long.fromNumber(id) : id;
+ self.cursorState.lastCursorId = self.cursorState.cursorId;
+ // If we have a firstBatch set it
+ if(Array.isArray(result.documents[0].cursor.firstBatch)) {
+ self.cursorState.documents = result.documents[0].cursor.firstBatch;//.reverse();
+ }
+
+ // Return after processing command cursor
+ return callback(null, null);
+ }
+
+ if(Array.isArray(result.documents[0].result)) {
+ self.cursorState.documents = result.documents[0].result;
+ self.cursorState.cursorId = Long.ZERO;
+ return callback(null, null);
+ }
+ }
+
+ // Otherwise fall back to regular find path
+ self.cursorState.cursorId = result.cursorId;
+ self.cursorState.documents = result.documents;
+ self.cursorState.lastCursorId = result.cursorId;
+
+ // Transform the results with passed in transformation method if provided
+ if(self.cursorState.transforms && typeof self.cursorState.transforms.query == 'function') {
+ self.cursorState.documents = self.cursorState.transforms.query(result);
+ }
+
+ // Return callback
+ callback(null, null);
+ }
+
+ // Options passed to the pool
+ var queryOptions = {};
+
+ // If we have a raw query decorate the function
+ if(self.options.raw || self.cmd.raw) {
+ // queryCallback.raw = self.options.raw || self.cmd.raw;
+ queryOptions.raw = self.options.raw || self.cmd.raw;
+ }
+
+ // Do we have documentsReturnedIn set on the query
+ if(typeof self.query.documentsReturnedIn == 'string') {
+ // queryCallback.documentsReturnedIn = self.query.documentsReturnedIn;
+ queryOptions.documentsReturnedIn = self.query.documentsReturnedIn;
+ }
+
+ // Add promote Long value if defined
+ if(typeof self.cursorState.promoteLongs == 'boolean') {
+ queryOptions.promoteLongs = self.cursorState.promoteLongs;
+ }
+
+ // Add promote values if defined
+ if(typeof self.cursorState.promoteValues == 'boolean') {
+ queryOptions.promoteValues = self.cursorState.promoteValues;
+ }
+
+ // Add promote values if defined
+ if(typeof self.cursorState.promoteBuffers == 'boolean') {
+ queryOptions.promoteBuffers = self.cursorState.promoteBuffers;
+ }
+
+ // Write the initial command out
+ self.server.s.pool.write(self.query, queryOptions, queryCallback);
+}
+
+Cursor.prototype._getmore = function(callback) {
+ if(this.logger.isDebug()) this.logger.debug(f("schedule getMore call for query [%s]", JSON.stringify(this.query)))
+ // Determine if it's a raw query
+ var raw = this.options.raw || this.cmd.raw;
+
+ // Set the current batchSize
+ var batchSize = this.cursorState.batchSize;
+ if(this.cursorState.limit > 0
+ && ((this.cursorState.currentLimit + batchSize) > this.cursorState.limit)) {
+ batchSize = this.cursorState.limit - this.cursorState.currentLimit;
+ }
+
+ // Default pool
+ var pool = this.server.s.pool;
+
+ // We have a wire protocol handler
+ this.server.wireProtocolHandler.getMore(this.bson, this.ns, this.cursorState, batchSize, raw, pool, this.options, callback);
+}
+
+Cursor.prototype._killcursor = function(callback) {
+ // Set cursor to dead
+ this.cursorState.dead = true;
+ this.cursorState.killed = true;
+ // Remove documents
+ this.cursorState.documents = [];
+
+ // If no cursor id just return
+ if(this.cursorState.cursorId == null || this.cursorState.cursorId.isZero() || this.cursorState.init == false) {
+ if(callback) callback(null, null);
+ return;
+ }
+
+ // Default pool
+ var pool = this.server.s.pool;
+ // Execute command
+ this.server.wireProtocolHandler.killCursor(this.bson, this.ns, this.cursorState.cursorId, pool, callback);
+}
+
+/**
+ * Clone the cursor
+ * @method
+ * @return {Cursor}
+ */
+Cursor.prototype.clone = function() {
+ return this.topology.cursor(this.ns, this.cmd, this.options);
+}
+
+/**
+ * Checks if the cursor is dead
+ * @method
+ * @return {boolean} A boolean signifying if the cursor is dead or not
+ */
+Cursor.prototype.isDead = function() {
+ return this.cursorState.dead == true;
+}
+
+/**
+ * Checks if the cursor was killed by the application
+ * @method
+ * @return {boolean} A boolean signifying if the cursor was killed by the application
+ */
+Cursor.prototype.isKilled = function() {
+ return this.cursorState.killed == true;
+}
+
+/**
+ * Checks if the cursor notified its caller about its death
+ * @method
+ * @return {boolean} A boolean signifying if the cursor notified the callback
+ */
+Cursor.prototype.isNotified = function() {
+ return this.cursorState.notified == true;
+}
+
+/**
+ * Returns current buffered documents length
+ * @method
+ * @return {number} The number of items in the buffered documents
+ */
+Cursor.prototype.bufferedCount = function() {
+ return this.cursorState.documents.length - this.cursorState.cursorIndex;
+}
+
+/**
+ * Returns current buffered documents
+ * @method
+ * @return {Array} An array of buffered documents
+ */
+Cursor.prototype.readBufferedDocuments = function(number) {
+ var unreadDocumentsLength = this.cursorState.documents.length - this.cursorState.cursorIndex;
+ var length = number < unreadDocumentsLength ? number : unreadDocumentsLength;
+ var elements = this.cursorState.documents.slice(this.cursorState.cursorIndex, this.cursorState.cursorIndex + length);
+
+ // Transform the doc with passed in transformation method if provided
+ if(this.cursorState.transforms && typeof this.cursorState.transforms.doc == 'function') {
+ // Transform all the elements
+ for(var i = 0; i < elements.length; i++) {
+ elements[i] = this.cursorState.transforms.doc(elements[i]);
+ }
+ }
+
+ // Ensure we do not return any more documents than the limit imposed
+ // Just return the number of elements up to the limit
+ if(this.cursorState.limit > 0 && (this.cursorState.currentLimit + elements.length) > this.cursorState.limit) {
+ elements = elements.slice(0, (this.cursorState.limit - this.cursorState.currentLimit));
+ this.kill();
+ }
+
+ // Adjust current limit
+ this.cursorState.currentLimit = this.cursorState.currentLimit + elements.length;
+ this.cursorState.cursorIndex = this.cursorState.cursorIndex + elements.length;
+
+ // Return elements
+ return elements;
+}
+
+/**
+ * Kill the cursor
+ * @method
+ * @param {resultCallback} callback A callback function
+ */
+Cursor.prototype.kill = function(callback) {
+ this._killcursor(callback);
+}
+
+/**
+ * Resets the cursor
+ * @method
+ * @return {null}
+ */
+Cursor.prototype.rewind = function() {
+ if(this.cursorState.init) {
+ if(!this.cursorState.dead) {
+ this.kill();
+ }
+
+ this.cursorState.currentLimit = 0;
+ this.cursorState.init = false;
+ this.cursorState.dead = false;
+ this.cursorState.killed = false;
+ this.cursorState.notified = false;
+ this.cursorState.documents = [];
+ this.cursorState.cursorId = null;
+ this.cursorState.cursorIndex = 0;
+ }
+}
+
+/**
+ * Validate if the pool is dead and return error
+ */
+var isConnectionDead = function(self, callback) {
+ if(self.pool
+ && self.pool.isDestroyed()) {
+ self.cursorState.notified = true;
+ self.cursorState.killed = true;
+ self.cursorState.documents = [];
+ self.cursorState.cursorIndex = 0;
+ callback(MongoError.create(f('connection to host %s:%s was destroyed', self.pool.host, self.pool.port)))
+ return true;
+ }
+
+ return false;
+}
+
+/**
+ * Validate if the cursor is dead but was not explicitly killed by user
+ */
+var isCursorDeadButNotkilled = function(self, callback) {
+ // Cursor is dead but not marked killed, return null
+ if(self.cursorState.dead && !self.cursorState.killed) {
+ self.cursorState.notified = true;
+ self.cursorState.killed = true;
+ self.cursorState.documents = [];
+ self.cursorState.cursorIndex = 0;
+ handleCallback(callback, null, null);
+ return true;
+ }
+
+ return false;
+}
+
+/**
+ * Validate if the cursor is dead and was killed by user
+ */
+var isCursorDeadAndKilled = function(self, callback) {
+ if(self.cursorState.dead && self.cursorState.killed) {
+ handleCallback(callback, MongoError.create("cursor is dead"));
+ return true;
+ }
+
+ return false;
+}
+
+/**
+ * Validate if the cursor was killed by the user
+ */
+var isCursorKilled = function(self, callback) {
+ if(self.cursorState.killed) {
+ self.cursorState.notified = true;
+ self.cursorState.documents = [];
+ self.cursorState.cursorIndex = 0;
+ handleCallback(callback, null, null);
+ return true;
+ }
+
+ return false;
+}
+
+/**
+ * Mark cursor as being dead and notified
+ */
+var setCursorDeadAndNotified = function(self, callback) {
+ self.cursorState.dead = true;
+ self.cursorState.notified = true;
+ self.cursorState.documents = [];
+ self.cursorState.cursorIndex = 0;
+ handleCallback(callback, null, null);
+}
+
+/**
+ * Mark cursor as being notified
+ */
+var setCursorNotified = function(self, callback) {
+ self.cursorState.notified = true;
+ self.cursorState.documents = [];
+ self.cursorState.cursorIndex = 0;
+ handleCallback(callback, null, null);
+}
+
+var push = Array.prototype.push;
+
+var nextFunction = function(self, callback) {
+ // We have notified about it
+ if(self.cursorState.notified) {
+ return callback(new Error('cursor is exhausted'));
+ }
+
+ // Cursor is killed return null
+ if(isCursorKilled(self, callback)) return;
+
+ // Cursor is dead but not marked killed, return null
+ if(isCursorDeadButNotkilled(self, callback)) return;
+
+ // We have a dead and killed cursor, attempting to call next should error
+ if(isCursorDeadAndKilled(self, callback)) return;
+
+ // We have just started the cursor
+ if(!self.cursorState.init) {
+    // Topology is not connected, save the call in the provided store to be
+    // executed at some point when the handler deems it's reconnected
+ if(!self.topology.isConnected(self.options) && self.disconnectHandler != null) {
+ if (self.topology.isDestroyed()) {
+ // Topology was destroyed, so don't try to wait for it to reconnect
+ return callback(new MongoError('Topology was destroyed'));
+ }
+ return self.disconnectHandler.addObjectAndMethod('cursor', self, 'next', [callback], callback);
+ }
+
+ try {
+ self.server = self.topology.getServer(self.options);
+ } catch(err) {
+ // Handle the error and add object to next method call
+ if(self.disconnectHandler != null) {
+ return self.disconnectHandler.addObjectAndMethod('cursor', self, 'next', [callback], callback);
+ }
+
+ // Otherwise return the error
+ return callback(err);
+ }
+
+ // Set as init
+ self.cursorState.init = true;
+
+    // Server does not support collation
+ if(self.cmd
+ && self.cmd.collation
+ && self.server.ismaster.maxWireVersion < 5) {
+ return callback(new MongoError(f('server %s does not support collation', self.server.name)));
+ }
+
+ try {
+ self.query = self.server.wireProtocolHandler.command(self.bson, self.ns, self.cmd, self.cursorState, self.topology, self.options);
+ } catch(err) {
+ return callback(err);
+ }
+ }
+
+ // If we don't have a cursorId execute the first query
+ if(self.cursorState.cursorId == null) {
+ // Check if pool is dead and return if not possible to
+ // execute the query against the db
+ if(isConnectionDead(self, callback)) return;
+
+ // Check if topology is destroyed
+ if(self.topology.isDestroyed()) return callback(new MongoError(f('connection destroyed, not possible to instantiate cursor')));
+
+ // Execute the initial query
+ self._find(function(err, r) {
+ if(err) return handleCallback(callback, err, null);
+
+ if(self.cursorState.documents.length == 0
+ && self.cursorState.cursorId && self.cursorState.cursorId.isZero()
+ && !self.cmd.tailable && !self.cmd.awaitData) {
+ return setCursorNotified(self, callback);
+ }
+
+ nextFunction(self, callback);
+ });
+ } else if(self.cursorState.limit > 0 && self.cursorState.currentLimit >= self.cursorState.limit) {
+ // Ensure we kill the cursor on the server
+ self.kill();
+ // Set cursor in dead and notified state
+ return setCursorDeadAndNotified(self, callback);
+ } else if(self.cursorState.cursorIndex == self.cursorState.documents.length
+ && !Long.ZERO.equals(self.cursorState.cursorId)) {
+ // Ensure an empty cursor state
+ self.cursorState.documents = [];
+ self.cursorState.cursorIndex = 0;
+
+ // Check if topology is destroyed
+ if(self.topology.isDestroyed()) return callback(new MongoError(f('connection destroyed, not possible to instantiate cursor')));
+
+ // Check if connection is dead and return if not possible to
+ // execute a getmore on this connection
+ if(isConnectionDead(self, callback)) return;
+
+ // Execute the next get more
+ self._getmore(function(err, doc, connection) {
+ if(err) return handleCallback(callback, err);
+
+ // Save the returned connection to ensure all getMore operations fire over the same connection
+ self.connection = connection;
+
+ // Tailable cursor getMore result, notify owner about it
+ // No attempt is made here to retry, this is left to the user of the
+ // core module to handle to keep core simple
+ if(self.cursorState.documents.length == 0
+ && self.cmd.tailable && Long.ZERO.equals(self.cursorState.cursorId)) {
+ // No more documents in the tailed cursor
+ return handleCallback(callback, MongoError.create({
+ message: "No more documents in tailed cursor"
+ , tailable: self.cmd.tailable
+ , awaitData: self.cmd.awaitData
+ }));
+ } else if(self.cursorState.documents.length == 0
+ && self.cmd.tailable && !Long.ZERO.equals(self.cursorState.cursorId)) {
+ return nextFunction(self, callback);
+ }
+
+ if(self.cursorState.limit > 0 && self.cursorState.currentLimit >= self.cursorState.limit) {
+ return setCursorDeadAndNotified(self, callback);
+ }
+
+ nextFunction(self, callback);
+ });
+ } else if(self.cursorState.documents.length == self.cursorState.cursorIndex
+ && self.cmd.tailable && Long.ZERO.equals(self.cursorState.cursorId)) {
+ return handleCallback(callback, MongoError.create({
+ message: "No more documents in tailed cursor"
+ , tailable: self.cmd.tailable
+ , awaitData: self.cmd.awaitData
+ }));
+ } else if(self.cursorState.documents.length == self.cursorState.cursorIndex
+ && Long.ZERO.equals(self.cursorState.cursorId)) {
+ setCursorDeadAndNotified(self, callback);
+ } else {
+ if(self.cursorState.limit > 0 && self.cursorState.currentLimit >= self.cursorState.limit) {
+ // Ensure we kill the cursor on the server
+ self.kill();
+ // Set cursor in dead and notified state
+ return setCursorDeadAndNotified(self, callback);
+ }
+
+ // Increment the current cursor limit
+ self.cursorState.currentLimit += 1;
+
+ // Get the document
+ var doc = self.cursorState.documents[self.cursorState.cursorIndex++];
+
+ // The document is an error document ($err), kill the cursor and error out
+ if(doc.$err) {
+ // Ensure we kill the cursor on the server
+ self.kill();
+ // Set cursor in dead and notified state
+ return setCursorDeadAndNotified(self, function() {
+ handleCallback(callback, new MongoError(doc.$err));
+ });
+ }
+
+ // Transform the doc with passed in transformation method if provided
+ if(self.cursorState.transforms && typeof self.cursorState.transforms.doc == 'function') {
+ doc = self.cursorState.transforms.doc(doc);
+ }
+
+ // Return the document
+ handleCallback(callback, null, doc);
+ }
+}
+
+/**
+ * Retrieve the next document from the cursor
+ * @method
+ * @param {resultCallback} callback A callback function
+ */
+Cursor.prototype.next = function(callback) {
+ nextFunction(this, callback);
+}
+
+module.exports = Cursor;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/error.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/error.js
new file mode 100644
index 0000000..31ede94
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/error.js
@@ -0,0 +1,44 @@
+"use strict";
+
+/**
+ * Creates a new MongoError
+ * @class
+ * @augments Error
+ * @param {string} message The error message
+ * @return {MongoError} A MongoError instance
+ */
+function MongoError(message) {
+ this.name = 'MongoError';
+ this.message = message;
+ Error.captureStackTrace(this, MongoError);
+}
+
+/**
+ * Creates a new MongoError object
+ * @method
+ * @param {object} options The error options
+ * @return {MongoError} A MongoError instance
+ */
+MongoError.create = function(options) {
+ var err = null;
+
+ if(options instanceof Error) {
+ err = new MongoError(options.message);
+ err.stack = options.stack;
+ } else if(typeof options == 'string') {
+ err = new MongoError(options);
+ } else {
+ err = new MongoError(options.message || options.errmsg || options.$err || "n/a");
+ // Other options
+ for(var name in options) {
+ err[name] = options[name];
+ }
+ }
+
+ return err;
+}
+
+// Extend JavaScript error
+MongoError.prototype = new Error;
+
+module.exports = MongoError;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/tools/smoke_plugin.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/tools/smoke_plugin.js
new file mode 100644
index 0000000..dcceda4
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/tools/smoke_plugin.js
@@ -0,0 +1,59 @@
+var fs = require('fs');
+
+/* Note: because this plugin uses process.on('uncaughtException'), only one
+ * of these can exist at any given time. This plugin and anything else that
+ * uses process.on('uncaughtException') will conflict. */
+exports.attachToRunner = function(runner, outputFile) {
+ var smokeOutput = { results : [] };
+ var runningTests = {};
+
+ var integraPlugin = {
+ beforeTest: function(test, callback) {
+ test.startTime = Date.now();
+ runningTests[test.name] = test;
+ callback();
+ },
+ afterTest: function(test, callback) {
+ smokeOutput.results.push({
+ status: test.status,
+ start: test.startTime,
+ end: Date.now(),
+ test_file: test.name,
+ exit_code: 0,
+ url: ""
+ });
+ delete runningTests[test.name];
+ callback();
+ },
+ beforeExit: function(obj, callback) {
+ fs.writeFile(outputFile, JSON.stringify(smokeOutput), function() {
+ callback();
+ });
+ }
+ };
+
+ // In case of exception, make sure we write file
+ process.on('uncaughtException', function(err) {
+ // Mark all currently running tests as failed
+ for (var testName in runningTests) {
+ smokeOutput.results.push({
+ status: "fail",
+ start: runningTests[testName].startTime,
+ end: Date.now(),
+ test_file: testName,
+ exit_code: 0,
+ url: ""
+ });
+ }
+
+ // write file
+ fs.writeFileSync(outputFile, JSON.stringify(smokeOutput));
+
+ // Standard NodeJS uncaught exception handler
+ console.error(err.stack);
+ process.exit(1);
+ });
+
+ runner.plugin(integraPlugin);
+ return integraPlugin;
+};
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/mongos.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/mongos.js
new file mode 100644
index 0000000..7ba6220
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/mongos.js
@@ -0,0 +1,1112 @@
+"use strict"
+
+var inherits = require('util').inherits,
+ f = require('util').format,
+ EventEmitter = require('events').EventEmitter,
+ BSON = require('bson').native().BSON,
+ ReadPreference = require('./read_preference'),
+ BasicCursor = require('../cursor'),
+ Logger = require('../connection/logger'),
+ debugOptions = require('../connection/utils').debugOptions,
+ MongoError = require('../error'),
+ Server = require('./server'),
+ ReplSetState = require('./replset_state'),
+ assign = require('./shared').assign,
+ clone = require('./shared').clone,
+ createClientInfo = require('./shared').createClientInfo;
+
+/**
+ * @fileOverview The **Mongos** class represents a Mongos proxy topology and is
+ * used to construct connections.
+ *
+ * @example
+ * var Mongos = require('mongodb-core').Mongos
+ * , ReadPreference = require('mongodb-core').ReadPreference
+ * , assert = require('assert');
+ *
+ * var server = new Mongos([{host: 'localhost', port: 30000}]);
+ * // Wait for the connection event
+ * server.on('connect', function(server) {
+ * server.destroy();
+ * });
+ *
+ * // Start connecting
+ * server.connect();
+ */
+
+var MongoCR = require('../auth/mongocr')
+ , X509 = require('../auth/x509')
+ , Plain = require('../auth/plain')
+ , GSSAPI = require('../auth/gssapi')
+ , SSPI = require('../auth/sspi')
+ , ScramSHA1 = require('../auth/scram');
+
+//
+// States
+var DISCONNECTED = 'disconnected';
+var CONNECTING = 'connecting';
+var CONNECTED = 'connected';
+var DESTROYED = 'destroyed';
+
+function stateTransition(self, newState) {
+ var legalTransitions = {
+ 'disconnected': [CONNECTING, DESTROYED, DISCONNECTED],
+ 'connecting': [CONNECTING, DESTROYED, CONNECTED, DISCONNECTED],
+ 'connected': [CONNECTED, DISCONNECTED, DESTROYED],
+ 'destroyed': [DESTROYED]
+ }
+
+ // Get current state
+ var legalStates = legalTransitions[self.state];
+ if(legalStates && legalStates.indexOf(newState) != -1) {
+ self.state = newState;
+ } else {
+ self.logger.error(f('Mongos with id [%s] attempted an illegal state transition from [%s] to [%s]; only the following states are allowed [%s]'
+ , self.id, self.state, newState, legalStates));
+ }
+}
+
+//
+ // Mongos instance id
+var id = 1;
+var handlers = ['connect', 'close', 'error', 'timeout', 'parseError'];
+
+/**
+ * Creates a new Mongos instance
+ * @class
+ * @param {array} seedlist A list of mongos proxy seeds
+ * @param {number} [options.haInterval=10000] The High availability period for topology monitoring
+ * @param {Cursor} [options.cursorFactory=Cursor] The cursor factory class used for all query cursors
+ * @param {number} [options.size=5] Server connection pool size
+ * @param {boolean} [options.keepAlive=true] TCP Connection keep alive enabled
+ * @param {number} [options.keepAliveInitialDelay=0] Initial delay before TCP keep alive enabled
+ * @param {number} [options.localThresholdMS=15] Cutoff latency point in MS for MongoS proxy selection
+ * @param {boolean} [options.noDelay=true] TCP Connection no delay
+ * @param {number} [options.connectionTimeout=1000] TCP Connection timeout setting
+ * @param {number} [options.socketTimeout=0] TCP Socket timeout setting
+ * @param {boolean} [options.singleBufferSerializtion=true] Serialize into a single buffer, trading off peak memory for serialization speed
+ * @param {boolean} [options.ssl=false] Use SSL for connection
+ * @param {boolean|function} [options.checkServerIdentity=true] Ensure we check server identity during SSL, set to false to disable checking. Only works for Node 0.12.x or higher. You can pass in a boolean or your own checkServerIdentity override function.
+ * @param {Buffer} [options.ca] SSL Certificate store binary buffer
+ * @param {Buffer} [options.cert] SSL Certificate binary buffer
+ * @param {Buffer} [options.key] SSL Key file binary buffer
+ * @param {string} [options.passphrase] SSL Certificate pass phrase
+ * @param {string} [options.servername=null] String containing the server name requested via TLS SNI.
+ * @param {boolean} [options.rejectUnauthorized=true] Reject unauthorized server certificates
+ * @param {boolean} [options.promoteLongs=true] Convert Long values from the db into Numbers if they fit into 53 bits
+ * @param {boolean} [options.promoteValues=true] Promotes BSON values to native types where possible, set to false to only receive wrapper types.
+ * @param {boolean} [options.promoteBuffers=false] Promotes Binary BSON values to native Node Buffers.
+ * @param {boolean} [options.domainsEnabled=false] Enable the wrapping of the callback in the current domain, disabled by default to avoid perf hit.
+ * @return {Mongos} A Mongos instance
+ * @fires Mongos#connect
+ * @fires Mongos#reconnect
+ * @fires Mongos#joined
+ * @fires Mongos#left
+ * @fires Mongos#failed
+ * @fires Mongos#fullsetup
+ * @fires Mongos#all
+ * @fires Mongos#serverHeartbeatStarted
+ * @fires Mongos#serverHeartbeatSucceeded
+ * @fires Mongos#serverHeartbeatFailed
+ * @fires Mongos#topologyOpening
+ * @fires Mongos#topologyClosed
+ * @fires Mongos#topologyDescriptionChanged
+ */
+var Mongos = function(seedlist, options) {
+ var self = this;
+ options = options || {};
+
+ // Get Mongos instance id
+ this.id = id++;
+
+ // Internal state
+ this.s = {
+ options: assign({}, options),
+ // BSON instance
+ bson: options.bson || new BSON(),
+ // Factory overrides
+ Cursor: options.cursorFactory || BasicCursor,
+ // Logger instance
+ logger: Logger('Mongos', options),
+ // Seedlist
+ seedlist: seedlist,
+ // Ha interval
+ haInterval: options.haInterval ? options.haInterval : 10000,
+ // Disconnect handler
+ disconnectHandler: options.disconnectHandler,
+ // Server selection index
+ index: 0,
+ // Connect function options passed in
+ connectOptions: {},
+ // Are we running in debug mode
+ debug: typeof options.debug == 'boolean' ? options.debug : false,
+ // localThresholdMS
+ localThresholdMS: options.localThresholdMS || 15,
+ // Client info
+ clientInfo: createClientInfo(options)
+ }
+
+ // Set the client info
+ this.s.options.clientInfo = createClientInfo(options);
+
+ // Log a warning if the socketTimeout < haInterval, as it will cause
+ // a lot of connections to be recycled.
+ if(this.s.logger.isWarn()
+ && this.s.options.socketTimeout != 0
+ && this.s.options.socketTimeout < this.s.haInterval) {
+ this.s.logger.warn(f('warning socketTimeout %s is less than haInterval %s. This might cause unnecessary server reconnections due to socket timeouts'
+ , this.s.options.socketTimeout, this.s.haInterval));
+ }
+
+ // All the authProviders
+ this.authProviders = options.authProviders || {
+ 'mongocr': new MongoCR(this.s.bson), 'x509': new X509(this.s.bson)
+ , 'plain': new Plain(this.s.bson), 'gssapi': new GSSAPI(this.s.bson)
+ , 'sspi': new SSPI(this.s.bson), 'scram-sha-1': new ScramSHA1(this.s.bson)
+ }
+
+ // Disconnected state
+ this.state = DISCONNECTED;
+
+ // Current proxies we are connecting to
+ this.connectingProxies = [];
+ // Currently connected proxies
+ this.connectedProxies = [];
+ // Disconnected proxies
+ this.disconnectedProxies = [];
+ // Are we authenticating
+ this.authenticating = false;
+ // Index of proxy to run operations against
+ this.index = 0;
+ // High availability timeout id
+ this.haTimeoutId = null;
+ // Last ismaster
+ this.ismaster = null;
+
+ // Add event listener
+ EventEmitter.call(this);
+}
+
+inherits(Mongos, EventEmitter);
+
+Object.defineProperty(Mongos.prototype, 'type', {
+ enumerable:true, get: function() { return 'mongos'; }
+});
+
+/**
+ * Emit event if it exists
+ * @method
+ */
+function emitSDAMEvent(self, event, description) {
+ if(self.listeners(event).length > 0) {
+ self.emit(event, description);
+ }
+}
+
+/**
+ * Initiate server connect
+ * @method
+ * @param {array} [options.auth=null] Array of auth options to apply on connect
+ */
+Mongos.prototype.connect = function(options) {
+ var self = this;
+ // Add any connect level options to the internal state
+ this.s.connectOptions = options || {};
+ // Set connecting state
+ stateTransition(this, CONNECTING);
+ // Create server instances
+ var servers = this.s.seedlist.map(function(x) {
+ return new Server(assign({}, self.s.options, x, {
+ authProviders: self.authProviders, reconnect:false, monitoring:false, inTopology: true
+ }, {
+ clientInfo: clone(self.s.clientInfo)
+ }));
+ });
+
+ // Emit the topology opening event
+ emitSDAMEvent(this, 'topologyOpening', { topologyId: this.id });
+
+ // Start all server connections
+ connectProxies(self, servers);
+}
+
+function handleEvent(self, event) {
+ return function(err) {
+ if(self.state == DESTROYED) return;
+ // Move to list of disconnectedProxies
+ moveServerFrom(self.connectedProxies, self.disconnectedProxies, this);
+ // Emit the left signal
+ self.emit('left', 'mongos', this);
+ }
+}
+
+function handleInitialConnectEvent(self, event) {
+ return function(err) {
+ // Destroy the instance
+ if(self.state == DESTROYED) {
+ // Move from connectingProxies
+ moveServerFrom(self.connectingProxies, self.disconnectedProxies, this);
+ return this.destroy();
+ }
+
+ // Check the type of server
+ if(event == 'connect') {
+ // Get last known ismaster
+ self.ismaster = this.lastIsMaster();
+
+ // Is this a mongos proxy? If so, move it to the connected list
+ if(self.ismaster.msg == 'isdbgrid') {
+ // If the proxy is already in the connected list, treat it as a duplicate and fail it
+ for(var i = 0; i < self.connectedProxies.length; i++) {
+ if(self.connectedProxies[i].name == this.name) {
+ // Move from connectingProxies
+ moveServerFrom(self.connectingProxies, self.disconnectedProxies, this);
+ this.destroy();
+ return self.emit('failed', this);
+ }
+ }
+
+ // Remove the handlers
+ for(var i = 0; i < handlers.length; i++) {
+ this.removeAllListeners(handlers[i]);
+ }
+
+ // Add stable state handlers
+ this.on('error', handleEvent(self, 'error'));
+ this.on('close', handleEvent(self, 'close'));
+ this.on('timeout', handleEvent(self, 'timeout'));
+ this.on('parseError', handleEvent(self, 'parseError'));
+
+ // Move from connecting to connected proxies
+ moveServerFrom(self.connectingProxies, self.connectedProxies, this);
+ // Emit the joined event
+ self.emit('joined', 'mongos', this);
+ } else {
+
+ // Print warning if we did not find a mongos proxy
+ if(self.s.logger.isWarn()) {
+ var message = 'expected mongos proxy, but found replicaset member mongod for server %s';
+ // We have a standalone server
+ if(!self.ismaster.hosts) {
+ message = 'expected mongos proxy, but found standalone mongod for server %s';
+ }
+
+ self.s.logger.warn(f(message, this.name));
+ }
+
+ // This is not a mongos proxy, remove it completely
+ removeProxyFrom(self.connectingProxies, this);
+ // Emit the left event
+ self.emit('left', 'server', this);
+ // Emit failed event
+ self.emit('failed', this);
+ }
+ } else {
+ moveServerFrom(self.connectingProxies, self.disconnectedProxies, this);
+ // Emit the left event
+ self.emit('left', 'mongos', this);
+ // Emit failed event
+ self.emit('failed', this);
+ }
+
+ // Trigger topologyMonitor
+ if(self.connectingProxies.length == 0) {
+ // Emit connected if we are connected
+ if(self.connectedProxies.length > 0) {
+ // Set the state to connected
+ stateTransition(self, CONNECTED);
+ // Emit the connect event
+ self.emit('connect', self);
+ self.emit('fullsetup', self);
+ self.emit('all', self);
+ } else if(self.disconnectedProxies.length == 0) {
+ // Print warning if we did not find a mongos proxy
+ if(self.s.logger.isWarn()) {
+ self.s.logger.warn(f('no mongos proxies found in seed list, did you mean to connect to a replicaset'));
+ }
+
+ // Emit the error that no proxies were found
+ return self.emit('error', new MongoError('no mongos proxies found in seed list'));
+ }
+
+ // Topology monitor
+ topologyMonitor(self, {firstConnect:true});
+ }
+ };
+}
+
+function connectProxies(self, servers) {
+ // Update connectingProxies
+ self.connectingProxies = self.connectingProxies.concat(servers);
+
+ // Index used to interleave the server connects, avoiding
+ // runtime issues on IO constrained VMs
+ var timeoutInterval = 0;
+
+ function connect(server, timeoutInterval) {
+ setTimeout(function() {
+ // Add event handlers
+ server.once('close', handleInitialConnectEvent(self, 'close'));
+ server.once('timeout', handleInitialConnectEvent(self, 'timeout'));
+ server.once('parseError', handleInitialConnectEvent(self, 'parseError'));
+ server.once('error', handleInitialConnectEvent(self, 'error'));
+ server.once('connect', handleInitialConnectEvent(self, 'connect'));
+ // SDAM Monitoring events
+ server.on('serverOpening', function(e) { self.emit('serverOpening', e); });
+ server.on('serverDescriptionChanged', function(e) { self.emit('serverDescriptionChanged', e); });
+ server.on('serverClosed', function(e) { self.emit('serverClosed', e); });
+ // Start connection
+ server.connect(self.s.connectOptions);
+ }, timeoutInterval);
+ }
+ // Start all the servers
+ while(servers.length > 0) {
+ connect(servers.shift(), timeoutInterval++);
+ }
+}
+
+function pickProxy(self) {
+ // Get the currently connected Proxies
+ var connectedProxies = self.connectedProxies.slice(0);
+
+ // Set lower bound
+ var lowerBoundLatency = Number.MAX_VALUE;
+
+ // Determine the lower bound for the Proxies
+ for(var i = 0; i < connectedProxies.length; i++) {
+ if(connectedProxies[i].lastIsMasterMS < lowerBoundLatency) {
+ lowerBoundLatency = connectedProxies[i].lastIsMasterMS;
+ }
+ }
+
+ // Filter out the possible servers
+ connectedProxies = connectedProxies.filter(function(server) {
+ if((server.lastIsMasterMS <= (lowerBoundLatency + self.s.localThresholdMS))
+ && server.isConnected()) {
+ return true;
+ }
+ });
+
+ // If the latency filter removed all proxies, fall back to the first connected one
+ if(connectedProxies.length == 0) {
+ return self.connectedProxies[0];
+ }
+
+ // Get proxy
+ var proxy = connectedProxies[self.index % connectedProxies.length];
+ // Update the index
+ self.index = (self.index + 1) % connectedProxies.length;
+ // Return the proxy
+ return proxy;
+}
+
+function moveServerFrom(from, to, proxy) {
+ for(var i = 0; i < from.length; i++) {
+ if(from[i].name == proxy.name) {
+ from.splice(i, 1);
+ }
+ }
+
+ for(var i = 0; i < to.length; i++) {
+ if(to[i].name == proxy.name) {
+ to.splice(i, 1);
+ }
+ }
+
+ to.push(proxy);
+}
+
+function removeProxyFrom(from, proxy) {
+ for(var i = 0; i < from.length; i++) {
+ if(from[i].name == proxy.name) {
+ from.splice(i, 1);
+ }
+ }
+}
+
+function reconnectProxies(self, proxies, callback) {
+ // Number of proxies left to process
+ var count = proxies.length;
+
+ // Handle events
+ var _handleEvent = function(self, event) {
+ return function(err, r) {
+ var _self = this;
+ count = count - 1;
+
+ // Destroyed
+ if(self.state == DESTROYED) {
+ moveServerFrom(self.connectingProxies, self.disconnectedProxies, _self);
+ return this.destroy();
+ }
+
+ if(event == 'connect' && !self.authenticating) {
+ // Destroyed
+ if(self.state == DESTROYED) {
+ moveServerFrom(self.connectingProxies, self.disconnectedProxies, _self);
+ return _self.destroy();
+ }
+
+ // Remove the handlers
+ for(var i = 0; i < handlers.length; i++) {
+ _self.removeAllListeners(handlers[i]);
+ }
+
+ // Add stable state handlers
+ _self.on('error', handleEvent(self, 'error'));
+ _self.on('close', handleEvent(self, 'close'));
+ _self.on('timeout', handleEvent(self, 'timeout'));
+ _self.on('parseError', handleEvent(self, 'parseError'));
+
+ // Move to the connected servers
+ moveServerFrom(self.disconnectedProxies, self.connectedProxies, _self);
+ // Emit joined event
+ self.emit('joined', 'mongos', _self);
+ } else if(event == 'connect' && self.authenticating) {
+ // Move from connectingProxies
+ moveServerFrom(self.connectingProxies, self.disconnectedProxies, _self);
+ this.destroy();
+ }
+
+ // Are we done? Finish up with the callback
+ if(count == 0) {
+ callback();
+ }
+ }
+ }
+
+ // No new servers
+ if(count == 0) {
+ return callback();
+ }
+
+ // Execute method
+ function execute(_server, i) {
+ setTimeout(function() {
+ // Destroyed
+ if(self.state == DESTROYED) {
+ return;
+ }
+
+ // Create a new server instance
+ var server = new Server(assign({}, self.s.options, {
+ host: _server.name.split(':')[0],
+ port: parseInt(_server.name.split(':')[1], 10)
+ }, {
+ authProviders: self.authProviders, reconnect:false, monitoring: false, inTopology: true
+ }, {
+ clientInfo: clone(self.s.clientInfo)
+ }));
+
+ // Add temp handlers
+ server.once('connect', _handleEvent(self, 'connect'));
+ server.once('close', _handleEvent(self, 'close'));
+ server.once('timeout', _handleEvent(self, 'timeout'));
+ server.once('error', _handleEvent(self, 'error'));
+ server.once('parseError', _handleEvent(self, 'parseError'));
+
+ // SDAM Monitoring events
+ server.on('serverOpening', function(e) { self.emit('serverOpening', e); });
+ server.on('serverDescriptionChanged', function(e) { self.emit('serverDescriptionChanged', e); });
+ server.on('serverClosed', function(e) { self.emit('serverClosed', e); });
+ server.connect(self.s.connectOptions);
+ }, i);
+ }
+
+ // Create new instances
+ for(var i = 0; i < proxies.length; i++) {
+ execute(proxies[i], i);
+ }
+}
+
+function topologyMonitor(self, options) {
+ options = options || {};
+
+ // Set monitoring timeout
+ self.haTimeoutId = setTimeout(function() {
+ if(self.state == DESTROYED) return;
+ // If we have a primary and a disconnect handler, execute
+ // buffered operations
+ if(self.isConnected() && self.s.disconnectHandler) {
+ self.s.disconnectHandler.execute();
+ }
+
+ // Get the connectingServers
+ var proxies = self.connectedProxies.slice(0);
+ // Get the count
+ var count = proxies.length;
+
+ // Ping an individual proxy, emitting SDAM heartbeat events
+ function pingServer(_self, _server, cb) {
+ // Measure running time
+ var start = new Date().getTime();
+
+ // Emit the server heartbeat start
+ emitSDAMEvent(self, 'serverHeartbeatStarted', { connectionId: _server.name });
+
+ // Execute ismaster
+ _server.command('admin.$cmd', {ismaster:true}, {monitoring: true}, function(err, r) {
+ if(self.state == DESTROYED) {
+ // Move from connectingProxies
+ moveServerFrom(self.connectedProxies, self.disconnectedProxies, _server);
+ _server.destroy();
+ return cb(err, r);
+ }
+
+ // Calculate latency
+ var latencyMS = new Date().getTime() - start;
+
+ // We had an error, remove it from the state
+ if(err) {
+ // Emit the server heartbeat failure
+ emitSDAMEvent(self, 'serverHeartbeatFailed', { durationMS: latencyMS, failure: err, connectionId: _server.name });
+ } else {
+ // Update the server ismaster
+ _server.ismaster = r.result;
+ _server.lastIsMasterMS = latencyMS;
+
+ // Server heart beat event
+ emitSDAMEvent(self, 'serverHeartbeatSucceeded', { durationMS: latencyMS, reply: r.result, connectionId: _server.name });
+ }
+
+ cb(err, r);
+ });
+ }
+
+ // No proxies, initiate monitor again
+ if(proxies.length == 0) {
+ // Emit close event if any listeners registered
+ if(self.listeners("close").length > 0) {
+ self.emit('close', self);
+ }
+
+ // Attempt to connect to any unknown servers
+ return reconnectProxies(self, self.disconnectedProxies, function(err, cb) {
+ if(self.state == DESTROYED) return;
+
+ // Are we connected ? emit connect event
+ if(self.state == CONNECTING && options.firstConnect) {
+ self.emit('connect', self);
+ self.emit('fullsetup', self);
+ self.emit('all', self);
+ } else if(self.isConnected()) {
+ self.emit('reconnect', self);
+ } else if(!self.isConnected() && self.listeners("close").length > 0) {
+ self.emit('close', self);
+ }
+
+ // Perform topology monitor
+ topologyMonitor(self);
+ });
+ }
+
+ // Ping all servers
+ for(var i = 0; i < proxies.length; i++) {
+ pingServer(self, proxies[i], function(err, r) {
+ count = count - 1;
+
+ if(count == 0) {
+ if(self.state == DESTROYED) return;
+
+ // Attempt to connect to any unknown servers
+ reconnectProxies(self, self.disconnectedProxies, function(err, cb) {
+ if(self.state == DESTROYED) return;
+ // Perform topology monitor
+ topologyMonitor(self);
+ });
+ }
+ });
+ }
+ }, self.s.haInterval);
+}
+
+/**
+ * Returns the last known ismaster document for this server
+ * @method
+ * @return {object}
+ */
+Mongos.prototype.lastIsMaster = function() {
+ return this.ismaster;
+}
+
+/**
+ * Unref all connections belonging to this server
+ * @method
+ */
+Mongos.prototype.unref = function(emitClose) {
+ // Transition state
+ stateTransition(this, DISCONNECTED);
+ // Get all proxies
+ var proxies = this.connectedProxies.concat(this.connectingProxies);
+ proxies.forEach(function(x) {
+ x.unref();
+ });
+
+ clearTimeout(this.haTimeoutId);
+}
+
+/**
+ * Destroy the server connection
+ * @method
+ */
+Mongos.prototype.destroy = function(emitClose) {
+ // Transition state
+ stateTransition(this, DESTROYED);
+ // Get all proxies
+ var proxies = this.connectedProxies.concat(this.connectingProxies);
+ // Clear out any monitoring process
+ if(this.haTimeoutId) clearTimeout(this.haTimeoutId);
+
+ // Destroy all connecting servers
+ proxies.forEach(function(x) {
+ x.destroy();
+ });
+
+ // Emit topology closing event
+ emitSDAMEvent(this, 'topologyClosed', { topologyId: this.id });
+}
+
+/**
+ * Figure out if the server is connected
+ * @method
+ * @return {boolean}
+ */
+Mongos.prototype.isConnected = function(options) {
+ return this.connectedProxies.length > 0;
+}
+
+/**
+ * Figure out if the server instance was destroyed by calling destroy
+ * @method
+ * @return {boolean}
+ */
+Mongos.prototype.isDestroyed = function() {
+ return this.state == DESTROYED;
+}
+
+//
+// Operations
+//
+
+// Execute write operation
+var executeWriteOperation = function(self, op, ns, ops, options, callback) {
+ if(typeof options == 'function') callback = options, options = {}, options = options || {};
+ // Ensure we have no options
+ options = options || {};
+ // Pick a server
+ var server = pickProxy(self);
+ // No server found error out
+ if(!server) return callback(new MongoError('no mongos proxy available'));
+ // Execute the command
+ server[op](ns, ops, options, callback);
+}
+
+/**
+ * Insert one or more documents
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {array} ops An array of documents to insert
+ * @param {boolean} [options.ordered=true] Execute in order or out of order
+ * @param {object} [options.writeConcern={}] Write concern for the operation
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+Mongos.prototype.insert = function(ns, ops, options, callback) {
+ if(typeof options == 'function') callback = options, options = {}, options = options || {};
+ if(this.state == DESTROYED) return callback(new MongoError(f('topology was destroyed')));
+
+ // Not connected but we have a disconnect handler
+ if(!this.isConnected() && this.s.disconnectHandler != null) {
+ return this.s.disconnectHandler.add('insert', ns, ops, options, callback);
+ }
+
+ // No mongos proxy available
+ if(!this.isConnected()) {
+ return callback(new MongoError('no mongos proxy available'));
+ }
+
+ // Execute write operation
+ executeWriteOperation(this, 'insert', ns, ops, options, callback);
+}
+
+/**
+ * Perform one or more update operations
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {array} ops An array of updates
+ * @param {boolean} [options.ordered=true] Execute in order or out of order
+ * @param {object} [options.writeConcern={}] Write concern for the operation
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+Mongos.prototype.update = function(ns, ops, options, callback) {
+ if(typeof options == 'function') callback = options, options = {};
+ if(this.state == DESTROYED) return callback(new MongoError(f('topology was destroyed')));
+
+ // Not connected but we have a disconnecthandler
+ if(!this.isConnected() && this.s.disconnectHandler != null) {
+ return this.s.disconnectHandler.add('update', ns, ops, options, callback);
+ }
+
+ // No mongos proxy available
+ if(!this.isConnected()) {
+ return callback(new MongoError('no mongos proxy available'));
+ }
+
+ // Execute write operation
+ executeWriteOperation(this, 'update', ns, ops, options, callback);
+}
+
+/**
+ * Perform one or more remove operations
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {array} ops An array of removes
+ * @param {boolean} [options.ordered=true] Execute in order or out of order
+ * @param {object} [options.writeConcern={}] Write concern for the operation
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+Mongos.prototype.remove = function(ns, ops, options, callback) {
+ if(typeof options == 'function') callback = options, options = {};
+ if(this.state == DESTROYED) return callback(new MongoError(f('topology was destroyed')));
+
+ // Not connected but we have a disconnecthandler
+ if(!this.isConnected() && this.s.disconnectHandler != null) {
+ return this.s.disconnectHandler.add('remove', ns, ops, options, callback);
+ }
+
+ // No mongos proxy available
+ if(!this.isConnected()) {
+ return callback(new MongoError('no mongos proxy available'));
+ }
+
+ // Execute write operation
+ executeWriteOperation(this, 'remove', ns, ops, options, callback);
+}
+
+/**
+ * Execute a command
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {object} cmd The command hash
+ * @param {ReadPreference} [options.readPreference] Specify read preference if command supports it
+ * @param {Connection} [options.connection] Specify connection object to execute command against
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+Mongos.prototype.command = function(ns, cmd, options, callback) {
+ if(typeof options == 'function') callback = options, options = {};
+ if(this.state == DESTROYED) return callback(new MongoError(f('topology was destroyed')));
+ var self = this;
+
+ // Establish readPreference
+ var readPreference = options.readPreference ? options.readPreference : ReadPreference.primary;
+
+ // Pick a proxy
+ var server = pickProxy(self);
+
+ // Topology is not connected, save the call in the provided store to be
+ // Executed at some point when the handler deems it's reconnected
+ if((server == null || !server.isConnected()) && this.s.disconnectHandler != null) {
+ return this.s.disconnectHandler.add('command', ns, cmd, options, callback);
+ }
+
+ // No server returned, error out
+ if(server == null) {
+ return callback(new MongoError('no mongos proxy available'));
+ }
+
+ // Execute the command
+ server.command(ns, cmd, options, callback);
+}
+
+/**
+ * Get a new cursor
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {object|Long} cmd Can be either a command returning a cursor or a cursorId
+ * @param {object} [options.batchSize=0] Batchsize for the operation
+ * @param {array} [options.documents=[]] Initial documents list for cursor
+ * @param {ReadPreference} [options.readPreference] Specify read preference if command supports it
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+Mongos.prototype.cursor = function(ns, cmd, cursorOptions) {
+ cursorOptions = cursorOptions || {};
+ var FinalCursor = cursorOptions.cursorFactory || this.s.Cursor;
+ return new FinalCursor(this.s.bson, ns, cmd, cursorOptions, this, this.s.options);
+}
+
+/**
+ * Authenticate using a specified mechanism
+ * @method
+ * @param {string} mechanism The Auth mechanism we are invoking
+ * @param {string} db The db we are invoking the mechanism against
+ * @param {...object} param Parameters for the specific mechanism
+ * @param {authResultCallback} callback A callback function
+ */
+Mongos.prototype.auth = function(mechanism, db) {
+ var allArgs = Array.prototype.slice.call(arguments, 0);
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 2);
+ var callback = args.pop();
+
+ // If we don't have the mechanism, fail
+ if(this.authProviders[mechanism] == null && mechanism != 'default') {
+ return callback(new MongoError(f("auth provider %s does not exist", mechanism)));
+ }
+
+ // Are we already authenticating, throw
+ if(this.authenticating) {
+ return callback(new MongoError('authentication or logout already in progress'));
+ }
+
+ // Topology is not connected, save the call in the provided store to be
+ // Executed at some point when the handler deems it's reconnected
+ if(!self.isConnected() && self.s.disconnectHandler != null) {
+ return self.s.disconnectHandler.add('auth', db, allArgs, {}, callback);
+ }
+
+ // Set to authenticating
+ this.authenticating = true;
+ // All errors
+ var errors = [];
+
+ // Get all the servers
+ var servers = this.connectedProxies.slice(0);
+ // No servers, return
+ if(servers.length == 0) {
+ this.authenticating = false;
+ return callback(null, true);
+ }
+
+ // Authenticate
+ function auth(server) {
+ // Arguments without a callback
+ var argsWithoutCallback = [mechanism, db].concat(args.slice(0));
+ // Create arguments
+ var finalArguments = argsWithoutCallback.concat([function(err, r) {
+ count = count - 1;
+ // Save all the errors
+ if(err) errors.push({name: server.name, err: err});
+ // We are done
+ if(count == 0) {
+ // Auth is done
+ self.authenticating = false;
+
+ // Return the auth error
+ if(errors.length) return callback(MongoError.create({
+ message: 'authentication failed', errors: errors
+ }), false);
+
+ // Successfully authenticated session
+ callback(null, self);
+ }
+ }]);
+
+ // Execute the auth only against non arbiter servers
+ if(!server.lastIsMaster().arbiterOnly) {
+ server.auth.apply(server, finalArguments);
+ }
+ }
+
+ // Get total count
+ var count = servers.length;
+ // Authenticate against all servers
+ while(servers.length > 0) {
+ auth(servers.shift());
+ }
+}
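The auth method above fans a single logical operation out to every connected proxy and joins the results with a shared countdown. As a hedged, standalone sketch of that countdown pattern (the `fanOut` helper name is illustrative, not part of this module's API):

```javascript
// Countdown join: run N async tasks in parallel, collect per-task errors,
// and invoke a single callback once every task has reported back.
function fanOut(tasks, callback) {
  var count = tasks.length;
  var errors = [];
  // Guard the empty case up front so the callback always fires exactly once
  if (count === 0) return callback(null);
  tasks.forEach(function (task) {
    task(function (err) {
      if (err) errors.push(err);
      count = count - 1;
      if (count === 0) callback(errors.length ? errors : null);
    });
  });
}

// Example: two tasks, one of which fails
fanOut([
  function (done) { setImmediate(done); },
  function (done) { setImmediate(function () { done(new Error('boom')); }); }
], function (errs) {
  console.log(errs ? errs.length : 0); // 1
});
```

The same shape appears in `logout` below; the early return for an empty task list is what keeps the callback from being dropped when no proxies are connected.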
+
+/**
+ * Logout from a database
+ * @method
+ * @param {string} db The db we are logging out from
+ * @param {authResultCallback} callback A callback function
+ */
+Mongos.prototype.logout = function(dbName, callback) {
+ var self = this;
+ // Are we authenticating or logging out, throw
+ if(this.authenticating) {
+ throw new MongoError('authentication or logout already in progress');
+ }
+
+ // Ensure no new members are processed while logging out
+ this.authenticating = true;
+
+ // Remove from all auth providers (avoid any reapplication of the auth details)
+ var providers = Object.keys(this.authProviders);
+ for(var i = 0; i < providers.length; i++) {
+ this.authProviders[providers[i]].logout(dbName);
+ }
+
+ // Now logout all the servers
+ var servers = this.connectedProxies.slice(0);
+ var count = servers.length;
+ if(count == 0) return callback();
+ var errors = [];
+
+ // Execute logout on all server instances; use forEach so each
+ // callback closes over its own server reference
+ servers.forEach(function(server) {
+ server.logout(dbName, function(err) {
+ count = count - 1;
+ if(err) errors.push({name: server.name, err: err});
+
+ if(count == 0) {
+ // Do not block new operations
+ self.authenticating = false;
+ // If we have one or more errors
+ if(errors.length) return callback(MongoError.create({
+ message: f('logout failed against db %s', dbName), errors: errors
+ }), false);
+
+ // No errors
+ callback();
+ }
+ });
+ });
+}
+
+/**
+ * Get server
+ * @method
+ * @param {ReadPreference} [options.readPreference] Specify read preference if command supports it
+ * @return {Server}
+ */
+Mongos.prototype.getServer = function() {
+ var server = pickProxy(this);
+ if(this.s.debug) this.emit('pickedServer', null, server);
+ return server;
+}
+
+/**
+ * All raw connections
+ * @method
+ * @return {Connection[]}
+ */
+Mongos.prototype.connections = function() {
+ var connections = [];
+
+ for(var i = 0; i < this.connectedProxies.length; i++) {
+ connections = connections.concat(this.connectedProxies[i].connections());
+ }
+
+ return connections;
+}
+
+/**
+ * A mongos connect event, used to verify that the connection is up and running
+ *
+ * @event Mongos#connect
+ * @type {Mongos}
+ */
+
+/**
+ * A mongos reconnect event, used to verify that the mongos topology has reconnected
+ *
+ * @event Mongos#reconnect
+ * @type {Mongos}
+ */
+
+/**
+ * A mongos fullsetup event, used to signal that all topology members have been contacted.
+ *
+ * @event Mongos#fullsetup
+ * @type {Mongos}
+ */
+
+/**
+ * A mongos all event, used to signal that all topology members have been contacted.
+ *
+ * @event Mongos#all
+ * @type {Mongos}
+ */
+
+/**
+ * A server member left the mongos list
+ *
+ * @event Mongos#left
+ * @type {Mongos}
+ * @param {string} type The type of member that left (mongos)
+ * @param {Server} server The server object that left
+ */
+
+/**
+ * A server member joined the mongos list
+ *
+ * @event Mongos#joined
+ * @type {Mongos}
+ * @param {string} type The type of member that joined (mongos)
+ * @param {Server} server The server object that joined
+ */
+
+/**
+ * A server opening SDAM monitoring event
+ *
+ * @event Mongos#serverOpening
+ * @type {object}
+ */
+
+/**
+ * A server closed SDAM monitoring event
+ *
+ * @event Mongos#serverClosed
+ * @type {object}
+ */
+
+/**
+ * A server description SDAM change monitoring event
+ *
+ * @event Mongos#serverDescriptionChanged
+ * @type {object}
+ */
+
+/**
+ * A topology open SDAM event
+ *
+ * @event Mongos#topologyOpening
+ * @type {object}
+ */
+
+/**
+ * A topology closed SDAM event
+ *
+ * @event Mongos#topologyClosed
+ * @type {object}
+ */
+
+/**
+ * A topology structure SDAM change event
+ *
+ * @event Mongos#topologyDescriptionChanged
+ * @type {object}
+ */
+
+/**
+ * A topology serverHeartbeatStarted SDAM event
+ *
+ * @event Mongos#serverHeartbeatStarted
+ * @type {object}
+ */
+
+/**
+ * A topology serverHeartbeatFailed SDAM event
+ *
+ * @event Mongos#serverHeartbeatFailed
+ * @type {object}
+ */
+
+/**
+ * A topology serverHeartbeatSucceeded SDAM change event
+ *
+ * @event Mongos#serverHeartbeatSucceeded
+ * @type {object}
+ */
+
+module.exports = Mongos;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/read_preference.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/read_preference.js
new file mode 100644
index 0000000..a801fbe
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/read_preference.js
@@ -0,0 +1,119 @@
+"use strict";
+
+var needSlaveOk = ['primaryPreferred', 'secondary', 'secondaryPreferred', 'nearest'];
+
+/**
+ * @fileOverview The **ReadPreference** class represents a MongoDB ReadPreference and is
+ * used to construct connections.
+ *
+ * @example
+ * var ReplSet = require('mongodb-core').ReplSet
+ * , ReadPreference = require('mongodb-core').ReadPreference
+ * , assert = require('assert');
+ *
+ * var server = new ReplSet([{host: 'localhost', port: 30000}], {setName: 'rs'});
+ * // Wait for the connection event
+ * server.on('connect', function(server) {
+ * var cursor = server.cursor('db.test'
+ * , {find: 'db.test', query: {}}
+ * , {readPreference: new ReadPreference('secondary')});
+ * cursor.next(function(err, doc) {
+ * server.destroy();
+ * });
+ * });
+ *
+ * // Start connecting
+ * server.connect();
+ */
+
+/**
+ * Creates a new ReadPreference instance
+ * @class
+ * @param {string} preference A string describing the preference (primary|primaryPreferred|secondary|secondaryPreferred|nearest)
+ * @param {array} tags The tags object
+ * @param {object} [options] Additional read preference options
+ * @param {number} [options.maxStalenessMS] Max secondary read staleness in milliseconds
+ * @property {string} preference The preference string (primary|primaryPreferred|secondary|secondaryPreferred|nearest)
+ * @property {array} tags The tags object
+ * @property {object} options Additional read preference options
+ * @property {number} maxStalenessMS MaxStalenessMS value for the read preference
+ * @return {ReadPreference}
+ */
+var ReadPreference = function(preference, tags, options) {
+ this.preference = preference;
+ this.tags = tags;
+ this.options = options;
+
+ // If a plain options object was passed in the tags position, shift it
+ if(tags && typeof tags == 'object' && !Array.isArray(tags)) {
+ this.options = tags, tags = null;
+ }
+
+ // Add the maxStalenessMS value to the read Preference
+ if(this.options && this.options.maxStalenessMS) {
+ this.maxStalenessMS = this.options.maxStalenessMS;
+ }
+}
+
+/**
+ * Check if the read preference requires the slaveOk bit to be set
+ * @method
+ * @return {boolean}
+ */
+ReadPreference.prototype.slaveOk = function() {
+ return needSlaveOk.indexOf(this.preference) != -1;
+}
+
+/**
+ * Check if two read preferences are equal
+ * @method
+ * @return {boolean}
+ */
+ReadPreference.prototype.equals = function(readPreference) {
+ return readPreference.preference == this.preference;
+}
+
+/**
+ * Return JSON representation
+ * @method
+ * @return {Object}
+ */
+ReadPreference.prototype.toJSON = function() {
+ var readPreference = {mode: this.preference};
+ if(Array.isArray(this.tags)) readPreference.tags = this.tags;
+ if(this.maxStalenessMS) readPreference.maxStalenessMS = this.maxStalenessMS;
+ return readPreference;
+}
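Taken together, the constructor and `toJSON` above shape the subdocument the driver attaches to commands. A hedged, self-contained sketch (a simplified stand-in class mirroring the definitions in this file, not imported from mongodb-core):

```javascript
// Simplified stand-in for the ReadPreference class defined above
// (assumption: behavior as written in this file; the tags/options
// shifting logic is omitted for brevity).
function ReadPreference(preference, tags, options) {
  this.preference = preference;
  this.tags = tags;
  this.options = options || {};
  if (this.options.maxStalenessMS) this.maxStalenessMS = this.options.maxStalenessMS;
}

// Serialize into the {mode, tags, maxStalenessMS} document sent with commands
ReadPreference.prototype.toJSON = function() {
  var readPreference = { mode: this.preference };
  if (Array.isArray(this.tags)) readPreference.tags = this.tags;
  if (this.maxStalenessMS) readPreference.maxStalenessMS = this.maxStalenessMS;
  return readPreference;
};

var rp = new ReadPreference('secondary', [{ dc: 'east' }], { maxStalenessMS: 90000 });
console.log(JSON.stringify(rp.toJSON()));
// {"mode":"secondary","tags":[{"dc":"east"}],"maxStalenessMS":90000}
```

Note that only a tags *array* is serialized; a plain `primary` preference produces just `{mode: 'primary'}`.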
+
+/**
+ * Primary read preference
+ * @method
+ * @return {ReadPreference}
+ */
+ReadPreference.primary = new ReadPreference('primary');
+/**
+ * Primary Preferred read preference
+ * @method
+ * @return {ReadPreference}
+ */
+ReadPreference.primaryPreferred = new ReadPreference('primaryPreferred');
+/**
+ * Secondary read preference
+ * @method
+ * @return {ReadPreference}
+ */
+ReadPreference.secondary = new ReadPreference('secondary');
+/**
+ * Secondary Preferred read preference
+ * @method
+ * @return {ReadPreference}
+ */
+ReadPreference.secondaryPreferred = new ReadPreference('secondaryPreferred');
+/**
+ * Nearest read preference
+ * @method
+ * @return {ReadPreference}
+ */
+ReadPreference.nearest = new ReadPreference('nearest');
+
+module.exports = ReadPreference;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/replset.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/replset.js
new file mode 100644
index 0000000..90a6862
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/replset.js
@@ -0,0 +1,1365 @@
+"use strict"
+
+var inherits = require('util').inherits,
+ f = require('util').format,
+ EventEmitter = require('events').EventEmitter,
+ BSON = require('bson').native().BSON,
+ ReadPreference = require('./read_preference'),
+ BasicCursor = require('../cursor'),
+ Logger = require('../connection/logger'),
+ debugOptions = require('../connection/utils').debugOptions,
+ MongoError = require('../error'),
+ Server = require('./server'),
+ ReplSetState = require('./replset_state'),
+ assign = require('./shared').assign,
+ clone = require('./shared').clone,
+ createClientInfo = require('./shared').createClientInfo;
+
+var MongoCR = require('../auth/mongocr')
+ , X509 = require('../auth/x509')
+ , Plain = require('../auth/plain')
+ , GSSAPI = require('../auth/gssapi')
+ , SSPI = require('../auth/sspi')
+ , ScramSHA1 = require('../auth/scram');
+
+//
+// States
+var DISCONNECTED = 'disconnected';
+var CONNECTING = 'connecting';
+var CONNECTED = 'connected';
+var DESTROYED = 'destroyed';
+
+function stateTransition(self, newState) {
+ var legalTransitions = {
+ 'disconnected': [CONNECTING, DESTROYED, DISCONNECTED],
+ 'connecting': [CONNECTING, DESTROYED, CONNECTED, DISCONNECTED],
+ 'connected': [CONNECTED, DISCONNECTED, DESTROYED],
+ 'destroyed': [DESTROYED]
+ }
+
+ // Get current state
+ var legalStates = legalTransitions[self.state];
+ if(legalStates && legalStates.indexOf(newState) != -1) {
+ self.state = newState;
+ } else {
+ self.s.logger.error(f('ReplSet with id [%s] attempted an illegal state transition from [%s] to [%s]; only the following states are allowed: [%s]'
+ , self.id, self.state, newState, legalStates));
+ }
+}
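stateTransition above is a small table-driven state machine: a transition is legal only if the target state appears in the current state's allow-list. A hedged standalone sketch of the same check (names are illustrative):

```javascript
// Topology lifecycle states, mirroring the constants defined in this file
var DISCONNECTED = 'disconnected', CONNECTING = 'connecting',
    CONNECTED = 'connected', DESTROYED = 'destroyed';

// Allow-list per current state; 'destroyed' is terminal
var legalTransitions = {
  disconnected: [CONNECTING, DESTROYED, DISCONNECTED],
  connecting:   [CONNECTING, DESTROYED, CONNECTED, DISCONNECTED],
  connected:    [CONNECTED, DISCONNECTED, DESTROYED],
  destroyed:    [DESTROYED]
};

// Hypothetical helper: returns whether the transition would be accepted
function canTransition(from, to) {
  var allowed = legalTransitions[from];
  return !!allowed && allowed.indexOf(to) !== -1;
}

console.log(canTransition(DISCONNECTED, CONNECTING)); // true
console.log(canTransition(DESTROYED, CONNECTED));     // false
```

Encoding the legal moves as data keeps the guard in one place; an illegal request simply leaves the state unchanged and logs, rather than throwing.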
+
+//
+// ReplSet instance id
+var id = 1;
+var handlers = ['connect', 'close', 'error', 'timeout', 'parseError'];
+
+/**
+ * Creates a new Replset instance
+ * @class
+ * @param {array} seedlist A list of seeds for the replicaset
+ * @param {string} options.setName The Replicaset set name
+ * @param {boolean} [options.secondaryOnlyConnectionAllowed=false] Allow connection to a secondary only replicaset
+ * @param {number} [options.haInterval=10000] The High availability period for replicaset inquiry
+ * @param {boolean} [options.emitError=false] Server will emit errors events
+ * @param {Cursor} [options.cursorFactory=Cursor] The cursor factory class used for all query cursors
+ * @param {number} [options.size=5] Server connection pool size
+ * @param {boolean} [options.keepAlive=true] TCP Connection keep alive enabled
+ * @param {number} [options.keepAliveInitialDelay=0] Initial delay before TCP keep alive enabled
+ * @param {boolean} [options.noDelay=true] TCP Connection no delay
+ * @param {number} [options.connectionTimeout=10000] TCP Connection timeout setting
+ * @param {number} [options.socketTimeout=0] TCP Socket timeout setting
+ * @param {boolean} [options.singleBufferSerializtion=true] Serialize into a single buffer, trading off peak memory for serialization speed
+ * @param {boolean} [options.ssl=false] Use SSL for connection
+ * @param {boolean|function} [options.checkServerIdentity=true] Ensure we check server identity during SSL, set to false to disable checking. Only works for Node 0.12.x or higher. You can pass in a boolean or your own checkServerIdentity override function.
+ * @param {Buffer} [options.ca] SSL Certificate store binary buffer
+ * @param {Buffer} [options.cert] SSL Certificate binary buffer
+ * @param {Buffer} [options.key] SSL Key file binary buffer
+ * @param {string} [options.passphrase] SSL Certificate pass phrase
+ * @param {string} [options.servername=null] String containing the server name requested via TLS SNI.
+ * @param {boolean} [options.rejectUnauthorized=true] Reject unauthorized server certificates
+ * @param {boolean} [options.promoteLongs=true] Convert Long values from the db into Numbers if they fit into 53 bits
+ * @param {boolean} [options.promoteValues=true] Promotes BSON values to native types where possible, set to false to only receive wrapper types.
+ * @param {boolean} [options.promoteBuffers=false] Promotes Binary BSON values to native Node Buffers.
+ * @param {number} [options.pingInterval=5000] Ping interval to check the response time to the different servers
+ * @param {number} [options.localThresholdMS=15] Cutoff latency point in MS for MongoS proxy selection
+ * @param {boolean} [options.domainsEnabled=false] Enable the wrapping of the callback in the current domain, disabled by default to avoid perf hit.
+ * @return {ReplSet} A ReplSet instance
+ * @fires ReplSet#connect
+ * @fires ReplSet#ha
+ * @fires ReplSet#joined
+ * @fires ReplSet#left
+ * @fires ReplSet#failed
+ * @fires ReplSet#fullsetup
+ * @fires ReplSet#all
+ * @fires ReplSet#error
+ * @fires ReplSet#serverHeartbeatStarted
+ * @fires ReplSet#serverHeartbeatSucceeded
+ * @fires ReplSet#serverHeartbeatFailed
+ * @fires ReplSet#topologyOpening
+ * @fires ReplSet#topologyClosed
+ * @fires ReplSet#topologyDescriptionChanged
+ */
+var ReplSet = function(seedlist, options) {
+ var self = this;
+ options = options || {};
+
+ // Validate seedlist
+ if(!Array.isArray(seedlist)) throw new MongoError("seedlist must be an array");
+ // Validate list
+ if(seedlist.length == 0) throw new MongoError("seedlist must contain at least one entry");
+ // Validate entries
+ seedlist.forEach(function(e) {
+ if(typeof e.host != 'string' || typeof e.port != 'number')
+ throw new MongoError("seedlist entry must contain a host and port");
+ });
+
+ // Add event listener
+ EventEmitter.call(this);
+
+ // Get replSet Id
+ this.id = id++;
+
+ // Get the localThresholdMS
+ var localThresholdMS = options.localThresholdMS || 15;
+ // Backward compatibility
+ if(options.acceptableLatency) localThresholdMS = options.acceptableLatency;
+
+ // Create a logger
+ var logger = Logger('ReplSet', options);
+
+ // Internal state
+ this.s = {
+ options: assign({}, options),
+ // BSON instance
+ bson: options.bson || new BSON(),
+ // Factory overrides
+ Cursor: options.cursorFactory || BasicCursor,
+ // Logger instance
+ logger: logger,
+ // Seedlist
+ seedlist: seedlist,
+ // Replicaset state
+ replicaSetState: new ReplSetState({
+ id: this.id, setName: options.setName,
+ acceptableLatency: localThresholdMS,
+ heartbeatFrequencyMS: options.haInterval ? options.haInterval : 10000,
+ logger: logger
+ }),
+ // Current servers we are connecting to
+ connectingServers: [],
+ // Ha interval
+ haInterval: options.haInterval ? options.haInterval : 10000,
+ // Minimum heartbeat frequency used if we detect a server close
+ minHeartbeatFrequencyMS: 500,
+ // Disconnect handler
+ disconnectHandler: options.disconnectHandler,
+ // Server selection index
+ index: 0,
+ // Connect function options passed in
+ connectOptions: {},
+ // Are we running in debug mode
+ debug: typeof options.debug == 'boolean' ? options.debug : false,
+ // Client info
+ clientInfo: createClientInfo(options)
+ }
+
+ // Add handler for topology change
+ this.s.replicaSetState.on('topologyDescriptionChanged', function(r) { self.emit('topologyDescriptionChanged', r); });
+
+ // Log info warning if the socketTimeout < haInterval as it will cause
+ // a lot of recycled connections to happen.
+ if(this.s.logger.isWarn()
+ && this.s.options.socketTimeout != 0
+ && this.s.options.socketTimeout < this.s.haInterval) {
+ this.s.logger.warn(f('warning socketTimeout %s is less than haInterval %s. This might cause unnecessary server reconnections due to socket timeouts'
+ , this.s.options.socketTimeout, this.s.haInterval));
+ }
+
+ // All the authProviders
+ this.authProviders = options.authProviders || {
+ 'mongocr': new MongoCR(this.s.bson), 'x509': new X509(this.s.bson)
+ , 'plain': new Plain(this.s.bson), 'gssapi': new GSSAPI(this.s.bson)
+ , 'sspi': new SSPI(this.s.bson), 'scram-sha-1': new ScramSHA1(this.s.bson)
+ }
+
+ // Add forwarding of events from state handler
+ var types = ['joined', 'left'];
+ types.forEach(function(x) {
+ self.s.replicaSetState.on(x, function(t, s) {
+ self.emit(x, t, s);
+ });
+ });
+
+ // Connect state
+ this.initialConnectState = {
+ connect: false, fullsetup: false, all: false
+ }
+
+ // Disconnected state
+ this.state = DISCONNECTED;
+ this.haTimeoutId = null;
+ // Are we authenticating
+ this.authenticating = false;
+ // Last ismaster
+ this.ismaster = null;
+}
+
+inherits(ReplSet, EventEmitter);
+
+Object.defineProperty(ReplSet.prototype, 'type', {
+ enumerable:true, get: function() { return 'replset'; }
+});
+
+function attemptReconnect(self) {
+ if(self.runningAttempReconnect) return;
+ // Set as running
+ self.runningAttempReconnect = true;
+ // Wait before execute
+ self.haTimeoutId = setTimeout(function() {
+ if(self.state == DESTROYED) return;
+
+ // Debug log
+ if(self.s.logger.isDebug()) {
+ self.s.logger.debug(f('attemptReconnect for replset with id %s', self.id));
+ }
+
+ // Get all known hosts
+ var keys = Object.keys(self.s.replicaSetState.set);
+ var servers = keys.map(function(x) {
+ return new Server(assign({}, self.s.options, {
+ host: x.split(':')[0], port: parseInt(x.split(':')[1], 10)
+ }, {
+ authProviders: self.authProviders, reconnect:false, monitoring: false, inTopology: true
+ }, {
+ clientInfo: clone(self.s.clientInfo)
+ }));
+ });
+
+ // Create the list of servers
+ self.s.connectingServers = servers.slice(0);
+
+ // Handle all events coming from servers
+ function _handleEvent(self, event) {
+ return function(err) {
+ // Destroy the instance
+ if(self.state == DESTROYED) {
+ return this.destroy();
+ }
+
+ // Debug log
+ if(self.s.logger.isDebug()) {
+ self.s.logger.debug(f('attemptReconnect for replset with id %s using server %s ended with event %s', self.id, this.name, event));
+ }
+
+ // Check if we are done
+ function done() {
+ // Done with the reconnection attempt
+ if(self.s.connectingServers.length == 0) {
+ if(self.state == DESTROYED) return;
+
+ // If we have a primary and a disconnect handler, execute
+ // buffered operations
+ if(self.s.replicaSetState.hasPrimaryAndSecondary() && self.s.disconnectHandler) {
+ self.s.disconnectHandler.execute();
+ } else if(self.s.replicaSetState.hasPrimary() && self.s.disconnectHandler) {
+ self.s.disconnectHandler.execute({ executePrimary:true });
+ } else if(self.s.replicaSetState.hasSecondary() && self.s.disconnectHandler) {
+ self.s.disconnectHandler.execute({ executeSecondary:true });
+ }
+
+ // Do we have a primary
+ if(self.s.replicaSetState.hasPrimary()) {
+ // Connect any missing servers
+ connectNewServers(self, self.s.replicaSetState.unknownServers, function(err, cb) {
+ // Debug log
+ if(self.s.logger.isDebug()) {
+ self.s.logger.debug(f('attemptReconnect for replset with id %s successful, resuming topologyMonitor', self.id));
+ }
+
+ // Reset the running
+ self.runningAttempReconnect = false;
+ // Go back to normal topology monitoring
+ topologyMonitor(self);
+ });
+ } else {
+ if(self.listeners("close").length > 0) {
+ self.emit('close', self);
+ }
+
+ // Reset the running
+ self.runningAttempReconnect = false;
+ // Attempt a new reconnect
+ attemptReconnect(self);
+ }
+ }
+ }
+
+ // Remove the server from our list
+ for(var i = 0; i < self.s.connectingServers.length; i++) {
+ if(self.s.connectingServers[i].equals(this)) {
+ self.s.connectingServers.splice(i, 1);
+ }
+ }
+
+ // Keep reference to server
+ var _self = this;
+
+ // Debug log
+ if(self.s.logger.isDebug()) {
+ self.s.logger.debug(f('attemptReconnect in replset with id %s', self.id));
+ }
+
+ // Connect and not authenticating
+ if(event == 'connect' && !self.authenticating) {
+ if(self.state == DESTROYED) {
+ return _self.destroy();
+ }
+
+ // Update the replicaset state
+ if(self.s.replicaSetState.update(_self)) {
+ // Primary lastIsMaster store it
+ if(_self.lastIsMaster() && _self.lastIsMaster().ismaster) {
+ self.ismaster = _self.lastIsMaster();
+ }
+
+ // Remove the handlers
+ for(var i = 0; i < handlers.length; i++) {
+ _self.removeAllListeners(handlers[i]);
+ }
+
+ // Add stable state handlers
+ _self.on('error', handleEvent(self, 'error'));
+ _self.on('close', handleEvent(self, 'close'));
+ _self.on('timeout', handleEvent(self, 'timeout'));
+ _self.on('parseError', handleEvent(self, 'parseError'));
+ } else {
+ _self.destroy();
+ }
+ } else if(event == 'connect' && self.authenticating) {
+ this.destroy();
+ }
+
+ done();
+ }
+ }
+
+ // Index used to interleave the server connects, avoiding
+ // runtime issues on io-constrained VMs
+ var timeoutInterval = 0;
+
+ function connect(server, timeoutInterval) {
+ setTimeout(function() {
+ server.once('connect', _handleEvent(self, 'connect'));
+ server.once('close', _handleEvent(self, 'close'));
+ server.once('timeout', _handleEvent(self, 'timeout'));
+ server.once('error', _handleEvent(self, 'error'));
+ server.once('parseError', _handleEvent(self, 'parseError'));
+
+ // SDAM Monitoring events
+ server.on('serverOpening', function(e) { self.emit('serverOpening', e); });
+ server.on('serverDescriptionChanged', function(e) { self.emit('serverDescriptionChanged', e); });
+ server.on('serverClosed', function(e) { self.emit('serverClosed', e); });
+
+ server.connect(self.s.connectOptions);
+ }, timeoutInterval);
+ }
+
+ // Connect all servers
+ while(servers.length > 0) {
+ connect(servers.shift(), timeoutInterval++);
+ }
+ }, self.s.minHeartbeatFrequencyMS);
+}
+
+function connectNewServers(self, servers, callback) {
+ // Number of servers left to connect
+ var count = servers.length;
+
+ // Handle events
+ var _handleEvent = function(self, event) {
+ return function(err, r) {
+ var _self = this;
+ count = count - 1;
+
+ // Destroyed
+ if(self.state == DESTROYED) {
+ return this.destroy();
+ }
+
+ if(event == 'connect' && !self.authenticating) {
+ // Destroyed
+ if(self.state == DESTROYED) {
+ return _self.destroy();
+ }
+
+ var result = self.s.replicaSetState.update(_self);
+ // Update the state with the new server
+ if(result) {
+ // Primary lastIsMaster store it
+ if(_self.lastIsMaster() && _self.lastIsMaster().ismaster) {
+ self.ismaster = _self.lastIsMaster();
+ }
+
+ // Remove the handlers
+ for(var i = 0; i < handlers.length; i++) {
+ _self.removeAllListeners(handlers[i]);
+ }
+
+ // Add stable state handlers
+ _self.on('error', handleEvent(self, 'error'));
+ _self.on('close', handleEvent(self, 'close'));
+ _self.on('timeout', handleEvent(self, 'timeout'));
+ _self.on('parseError', handleEvent(self, 'parseError'));
+ } else {
+ _self.destroy();
+ }
+ } else if(event == 'connect' && self.authenticating) {
+ this.destroy();
+ }
+
+ // Are we done finish up callback
+ if(count == 0) { callback(); }
+ }
+ }
+
+ // No new servers
+ if(count == 0) return callback();
+
+ // Execute method
+ function execute(_server, i) {
+ setTimeout(function() {
+ // Destroyed
+ if(self.state == DESTROYED) {
+ return;
+ }
+
+ // Create a new server instance
+ var server = new Server(assign({}, self.s.options, {
+ host: _server.split(':')[0],
+ port: parseInt(_server.split(':')[1], 10)
+ }, {
+ authProviders: self.authProviders, reconnect:false, monitoring: false, inTopology: true
+ }, {
+ clientInfo: clone(self.s.clientInfo)
+ }));
+
+ // Add temp handlers
+ server.once('connect', _handleEvent(self, 'connect'));
+ server.once('close', _handleEvent(self, 'close'));
+ server.once('timeout', _handleEvent(self, 'timeout'));
+ server.once('error', _handleEvent(self, 'error'));
+ server.once('parseError', _handleEvent(self, 'parseError'));
+
+ // SDAM Monitoring events
+ server.on('serverOpening', function(e) { self.emit('serverOpening', e); });
+ server.on('serverDescriptionChanged', function(e) { self.emit('serverDescriptionChanged', e); });
+ server.on('serverClosed', function(e) { self.emit('serverClosed', e); });
+ server.connect(self.s.connectOptions);
+ }, i);
+ }
+
+ // Create new instances
+ for(var i = 0; i < servers.length; i++) {
+ execute(servers[i], i);
+ }
+}
+
+function topologyMonitor(self, options) {
+ options = options || {};
+
+ // Set the monitoring timeout
+ self.haTimeoutId = setTimeout(function() {
+ if(self.state == DESTROYED) return;
+
+ // Is this an on-connect topology discovery?
+ // Schedule a proper topology monitoring pass to ensure any
+ // discovered servers do not time out while waiting for the
+ // initial discovery to complete.
+ if(options.haInterval) {
+ topologyMonitor(self);
+ }
+
+ // If we have a primary and a disconnect handler, execute
+ // buffered operations
+ if(self.s.replicaSetState.hasPrimaryAndSecondary() && self.s.disconnectHandler) {
+ self.s.disconnectHandler.execute();
+ } else if(self.s.replicaSetState.hasPrimary() && self.s.disconnectHandler) {
+ self.s.disconnectHandler.execute({ executePrimary:true });
+ } else if(self.s.replicaSetState.hasSecondary() && self.s.disconnectHandler) {
+ self.s.disconnectHandler.execute({ executeSecondary:true });
+ }
+
+ // Get the connectingServers
+ var connectingServers = self.s.replicaSetState.allServers();
+ // Debug log
+ if(self.s.logger.isDebug()) {
+ self.s.logger.debug(f('topologyMonitor in replset with id %s connected servers [%s]'
+ , self.id
+ , connectingServers.map(function(x) {
+ return x.name;
+ })));
+ }
+ // Get the count
+ var count = connectingServers.length;
+
+ // If we have no servers connected
+ if(count == 0 && !options.haInterval) {
+ if(self.listeners("close").length > 0) {
+ self.emit('close', self);
+ }
+
+ return attemptReconnect(self);
+ }
+
+ // Ping a server and update the replicaset state with the result
+ function pingServer(_self, _server, cb) {
+ // Measure running time
+ var start = new Date().getTime();
+
+ // Emit the server heartbeat start
+ emitSDAMEvent(self, 'serverHeartbeatStarted', { connectionId: _server.name });
+ // Execute ismaster
+ _server.command('admin.$cmd', {ismaster:true}, {monitoring: true}, function(err, r) {
+ if(self.state == DESTROYED) {
+ _server.destroy();
+ return cb(err, r);
+ }
+
+ // Calculate latency
+ var latencyMS = new Date().getTime() - start;
+
+ // Set the last updatedTime
+ var hrTime = process.hrtime();
+ // Calculate the last update time
+ _server.lastUpdateTime = hrTime[0] * 1000 + Math.round(hrTime[1]/1000);
+
+ // We had an error, remove it from the state
+ if(err) {
+ // Emit the server heartbeat failure
+ emitSDAMEvent(self, 'serverHeartbeatFailed', { durationMS: latencyMS, failure: err, connectionId: _server.name });
+ } else {
+ // Update the server ismaster
+ _server.ismaster = r.result;
+
+ // Check if we have a lastWriteDate convert it to MS
+ // and store on the server instance for later use
+ if(_server.ismaster.lastWrite && _server.ismaster.lastWrite.lastWriteDate) {
+ _server.lastWriteDate = _server.ismaster.lastWrite.lastWriteDate.getTime();
+ }
+
+ // Do we have a brand new server
+ if(_server.lastIsMasterMS == -1) {
+ _server.lastIsMasterMS = latencyMS;
+ } else if(_server.lastIsMasterMS) {
+ // After the first measurement, average RTT MUST be computed using an
+ // exponentially-weighted moving average formula, with a weighting factor (alpha) of 0.2.
+ // If the prior average is denoted old_rtt, then the new average (new_rtt) is
+ // computed from a new RTT measurement (x) using the following formula:
+ // alpha = 0.2
+ // new_rtt = alpha * x + (1 - alpha) * old_rtt
+ _server.lastIsMasterMS = 0.2 * latencyMS + (1 - 0.2) * _server.lastIsMasterMS;
+ }
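+
+ // Worked example (illustrative only, not part of the driver): applying
+ // the formula above to three hypothetical RTT samples of 12ms, 20ms
+ // and 8ms gives:
+ // rtt1 = 12 (the first measurement is used as-is)
+ // rtt2 = 0.2 * 20 + 0.8 * 12 = 13.6
+ // rtt3 = 0.2 * 8 + 0.8 * 13.6 = 12.48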
+
+ if(_self.s.replicaSetState.update(_server)) {
+ // If the server is the primary, store its ismaster result
+ if(_server.lastIsMaster() && _server.lastIsMaster().ismaster) {
+ self.ismaster = _server.lastIsMaster();
+ }
+ }
+
+ // Server heart beat event
+ emitSDAMEvent(self, 'serverHeartbeatSucceeded', { durationMS: latencyMS, reply: r.result, connectionId: _server.name });
+ }
+
+ // Calculate the staleness for this server
+ self.s.replicaSetState.updateServerMaxStaleness(_server, self.s.haInterval);
+
+ // Callback
+ cb(err, r);
+ });
+ }
+
+ // Connect any missing servers
+ function connectMissingServers() {
+ if(self.state == DESTROYED) return;
+
+ // Attempt to connect to any unknown servers
+ connectNewServers(self, self.s.replicaSetState.unknownServers, function(err, cb) {
+ if(self.state == DESTROYED) return;
+
+ // Check if we have an options.haInterval (meaning it was triggered from connect)
+ if(options.haInterval) {
+ // Do we have a primary and secondary
+ if(self.state == CONNECTING
+ && self.s.replicaSetState.hasPrimaryAndSecondary()) {
+ // Transition to connected
+ stateTransition(self, CONNECTED);
+ // Update initial state
+ self.initialConnectState.connect = true;
+ self.initialConnectState.fullsetup = true;
+ self.initialConnectState.all = true;
+ // Emit fullsetup and all events
+ process.nextTick(function() {
+ self.emit('connect', self);
+ self.emit('fullsetup', self);
+ self.emit('all', self);
+ });
+ } else if(self.state == CONNECTING
+ && self.s.replicaSetState.hasPrimary()) {
+ // Transition to connected
+ stateTransition(self, CONNECTED);
+ // Update initial state
+ self.initialConnectState.connect = true;
+ // Emit connected sign
+ process.nextTick(function() {
+ self.emit('connect', self);
+ });
+ } else if(self.state == CONNECTING
+ && self.s.replicaSetState.hasSecondary()
+ && self.s.options.secondaryOnlyConnectionAllowed) {
+ // Transition to connected
+ stateTransition(self, CONNECTED);
+ // Update initial state
+ self.initialConnectState.connect = true;
+ // Emit connected sign
+ process.nextTick(function() {
+ self.emit('connect', self);
+ });
+ } else if(self.state == CONNECTING) {
+ self.emit('error', new MongoError('no primary found in replicaset'));
+ // Destroy the topology
+ return self.destroy();
+ } else if(self.state == CONNECTED
+ && self.s.replicaSetState.hasPrimaryAndSecondary()
+ && !self.initialConnectState.fullsetup) {
+ self.initialConnectState.fullsetup = true;
+ // Emit fullsetup and all events
+ process.nextTick(function() {
+ self.emit('fullsetup', self);
+ self.emit('all', self);
+ });
+ }
+ }
+
+ if(!options.haInterval) topologyMonitor(self);
+ });
+ }
+
+ // No connectingServers but unknown servers
+ if(connectingServers.length == 0
+ && self.s.replicaSetState.unknownServers.length > 0 && options.haInterval) {
+ return connectMissingServers();
+ } else if(connectingServers.length == 0 && options.haInterval) {
+ self.destroy();
+ return self.emit('error', new MongoError('no valid replicaset members found'));
+ }
+
+ // Ping all servers
+ for(var i = 0; i < connectingServers.length; i++) {
+ pingServer(self, connectingServers[i], function(err, r) {
+ count = count - 1;
+
+ if(count == 0) {
+ connectMissingServers();
+ }
+ });
+ }
+ }, options.haInterval || self.s.haInterval)
+}
+
+function handleEvent(self, event) {
+ return function(err) {
+ if(self.state == DESTROYED) return;
+ // Debug log
+ if(self.s.logger.isDebug()) {
+ self.s.logger.debug(f('handleEvent %s from server %s in replset with id %s', event, this.name, self.id));
+ }
+
+ self.s.replicaSetState.remove(this);
+ }
+}
+
+function handleInitialConnectEvent(self, event) {
+ return function(err) {
+ // Debug log
+ if(self.s.logger.isDebug()) {
+ self.s.logger.debug(f('handleInitialConnectEvent %s from server %s in replset with id %s', event, this.name, self.id));
+ }
+
+ // Destroy the instance
+ if(self.state == DESTROYED) {
+ return this.destroy();
+ }
+
+ // Check the type of server
+ if(event == 'connect') {
+ // Update the state
+ var result = self.s.replicaSetState.update(this);
+ if(result == true) {
+ // If the server is the primary, store its ismaster result
+ if(this.lastIsMaster() && this.lastIsMaster().ismaster) {
+ self.ismaster = this.lastIsMaster();
+ }
+
+ // Debug log
+ if(self.s.logger.isDebug()) {
+ self.s.logger.debug(f('handleInitialConnectEvent %s from server %s in replset with id %s has state [%s]', event, this.name, self.id, JSON.stringify(self.s.replicaSetState.set)));
+ }
+
+ // Remove the handlers
+ for(var i = 0; i < handlers.length; i++) {
+ this.removeAllListeners(handlers[i]);
+ }
+
+ // Add stable state handlers
+ this.on('error', handleEvent(self, 'error'));
+ this.on('close', handleEvent(self, 'close'));
+ this.on('timeout', handleEvent(self, 'timeout'));
+ this.on('parseError', handleEvent(self, 'parseError'));
+ } else if(result instanceof MongoError) {
+ this.destroy();
+ self.destroy();
+ return self.emit('error', result);
+ } else {
+ this.destroy();
+ }
+ } else {
+ // Emit failure to connect
+ self.emit('failed', this);
+ // Remove from the state
+ self.s.replicaSetState.remove(this);
+ }
+
+ // Remove from the list from connectingServers
+ for(var i = 0; i < self.s.connectingServers.length; i++) {
+ if(self.s.connectingServers[i].equals(this)) {
+ self.s.connectingServers.splice(i, 1);
+ }
+ }
+
+ // Trigger topologyMonitor
+ if(self.s.connectingServers.length == 0) {
+ topologyMonitor(self, {haInterval: 1});
+ }
+ };
+}
+
+function connectServers(self, servers) {
+ // Update connectingServers
+ self.s.connectingServers = self.s.connectingServers.concat(servers);
+
+ // Index used to interleave the server connects, avoiding
+ // runtime issues on IO-constrained VMs
+ var timeoutInterval = 0;
+
+ function connect(server, timeoutInterval) {
+ setTimeout(function() {
+ // Add the server to the state
+ if(self.s.replicaSetState.update(server)) {
+ // If the server is the primary, store its ismaster result
+ if(server.lastIsMaster() && server.lastIsMaster().ismaster) {
+ self.ismaster = server.lastIsMaster();
+ }
+ }
+
+ // Add event handlers
+ server.once('close', handleInitialConnectEvent(self, 'close'));
+ server.once('timeout', handleInitialConnectEvent(self, 'timeout'));
+ server.once('parseError', handleInitialConnectEvent(self, 'parseError'));
+ server.once('error', handleInitialConnectEvent(self, 'error'));
+ server.once('connect', handleInitialConnectEvent(self, 'connect'));
+ // SDAM Monitoring events
+ server.on('serverOpening', function(e) { self.emit('serverOpening', e); });
+ server.on('serverDescriptionChanged', function(e) { self.emit('serverDescriptionChanged', e); });
+ server.on('serverClosed', function(e) { self.emit('serverClosed', e); });
+ // Start connection
+ server.connect(self.s.connectOptions);
+ }, timeoutInterval);
+ }
+
+ // Start all the servers
+ while(servers.length > 0) {
+ connect(servers.shift(), timeoutInterval++);
+ }
+}
+
+/**
+ * Emit an event if any listeners are registered for it
+ * @method
+ */
+function emitSDAMEvent(self, event, description) {
+ if(self.listeners(event).length > 0) {
+ self.emit(event, description);
+ }
+}
+
+/**
+ * Initiate server connect
+ * @method
+ * @param {array} [options.auth=null] Array of auth options to apply on connect
+ */
+ReplSet.prototype.connect = function(options) {
+ var self = this;
+ // Add any connect level options to the internal state
+ this.s.connectOptions = options || {};
+ // Set connecting state
+ stateTransition(this, CONNECTING);
+ // Create server instances
+ var servers = this.s.seedlist.map(function(x) {
+ return new Server(assign({}, self.s.options, x, {
+ authProviders: self.authProviders, reconnect:false, monitoring:false, inTopology: true
+ }, {
+ clientInfo: clone(self.s.clientInfo)
+ }));
+ });
+
+ // Emit the topology opening event
+ emitSDAMEvent(this, 'topologyOpening', { topologyId: this.id });
+
+ // Start all server connections
+ connectServers(self, servers);
+}
+
+/**
+ * Destroy the server connection
+ * @method
+ */
+ReplSet.prototype.destroy = function() {
+ // Transition state
+ stateTransition(this, DESTROYED);
+ // Clear out any monitoring process
+ if(this.haTimeoutId) clearTimeout(this.haTimeoutId);
+ // Destroy the replicaset
+ this.s.replicaSetState.destroy();
+
+ // Destroy all connecting servers
+ this.s.connectingServers.forEach(function(x) {
+ x.destroy();
+ });
+
+ // Emit topology closing event
+ emitSDAMEvent(this, 'topologyClosed', { topologyId: this.id });
+}
+
+/**
+ * Unref all connections belonging to this server
+ * @method
+ */
+ReplSet.prototype.unref = function() {
+ // Transition state
+ stateTransition(this, DISCONNECTED);
+
+ this.s.replicaSetState.allServers().forEach(function(x) {
+ x.unref();
+ });
+
+ clearTimeout(this.haTimeoutId);
+}
+
+/**
+ * Returns the last known ismaster document for this server
+ * @method
+ * @return {object}
+ */
+ReplSet.prototype.lastIsMaster = function() {
+ return this.s.replicaSetState.primary
+ ? this.s.replicaSetState.primary.lastIsMaster() : this.ismaster;
+}
+
+/**
+ * All raw connections
+ * @method
+ * @return {Connection[]}
+ */
+ReplSet.prototype.connections = function() {
+ var servers = this.s.replicaSetState.allServers();
+ var connections = [];
+ for(var i = 0; i < servers.length; i++) {
+ connections = connections.concat(servers[i].connections());
+ }
+
+ return connections;
+}
+
+/**
+ * Figure out if the server is connected
+ * @method
+ * @param {ReadPreference} [options.readPreference] Specify read preference if command supports it
+ * @return {boolean}
+ */
+ReplSet.prototype.isConnected = function(options) {
+ options = options || {};
+
+ // If we are authenticating signal not connected
+ // To avoid interleaving of operations
+ if(this.authenticating) return false;
+
+ // If we specified a read preference, check if we are connected to something
+ // that can satisfy it
+ if(options.readPreference
+ && options.readPreference.equals(ReadPreference.secondary)) {
+ return this.s.replicaSetState.hasSecondary();
+ }
+
+ if(options.readPreference
+ && options.readPreference.equals(ReadPreference.primary)) {
+ return this.s.replicaSetState.hasPrimary();
+ }
+
+ if(options.readPreference
+ && options.readPreference.equals(ReadPreference.primaryPreferred)) {
+ return this.s.replicaSetState.hasSecondary() || this.s.replicaSetState.hasPrimary();
+ }
+
+ if(options.readPreference
+ && options.readPreference.equals(ReadPreference.secondaryPreferred)) {
+ return this.s.replicaSetState.hasSecondary() || this.s.replicaSetState.hasPrimary();
+ }
+
+ if(this.s.secondaryOnlyConnectionAllowed
+ && this.s.replicaSetState.hasSecondary()) {
+ return true;
+ }
+
+ return this.s.replicaSetState.hasPrimary();
+}
+
+/**
+ * Figure out if the replicaset instance was destroyed by calling destroy
+ * @method
+ * @return {boolean}
+ */
+ReplSet.prototype.isDestroyed = function() {
+ return this.state == DESTROYED;
+}
+
+/**
+ * Get server
+ * @method
+ * @param {ReadPreference} [options.readPreference] Specify read preference if command supports it
+ * @return {Server}
+ */
+ReplSet.prototype.getServer = function(options) {
+ // Ensure we have an options object
+ options = options || {};
+
+ // Pick the right server based on readPreference
+ var server = this.s.replicaSetState.pickServer(options.readPreference);
+ if(this.s.debug) this.emit('pickedServer', options.readPreference, server);
+ return server;
+}
+
+/**
+ * Get all connected servers
+ * @method
+ * @return {Server[]}
+ */
+ReplSet.prototype.getServers = function() {
+ return this.s.replicaSetState.allServers();
+}
+
+function basicReadPreferenceValidation(self, options) {
+ if(options.readPreference && !(options.readPreference instanceof ReadPreference)) {
+ throw new Error("readPreference must be an instance of ReadPreference");
+ }
+}
+
+//
+// Execute write operation
+var executeWriteOperation = function(self, op, ns, ops, options, callback) {
+ if(typeof options == 'function') callback = options, options = {}, options = options || {};
+ // Ensure we have an options object
+ options = options || {};
+
+ // No server returned we had an error
+ if(self.s.replicaSetState.primary == null) {
+ return callback(new MongoError("no primary server found"));
+ }
+
+ // Execute the command
+ self.s.replicaSetState.primary[op](ns, ops, options, callback);
+}
+
+/**
+ * Insert one or more documents
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {array} ops An array of documents to insert
+ * @param {boolean} [options.ordered=true] Execute in order or out of order
+ * @param {object} [options.writeConcern={}] Write concern for the operation
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+ReplSet.prototype.insert = function(ns, ops, options, callback) {
+ if(typeof options == 'function') callback = options, options = {}, options = options || {};
+ if(this.state == DESTROYED) return callback(new MongoError(f('topology was destroyed')));
+
+ // Not connected but we have a disconnecthandler
+ if(!this.s.replicaSetState.hasPrimary() && this.s.disconnectHandler != null) {
+ return this.s.disconnectHandler.add('insert', ns, ops, options, callback);
+ }
+
+ // Execute write operation
+ executeWriteOperation(this, 'insert', ns, ops, options, callback);
+}
+
+/**
+ * Perform one or more update operations
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {array} ops An array of updates
+ * @param {boolean} [options.ordered=true] Execute in order or out of order
+ * @param {object} [options.writeConcern={}] Write concern for the operation
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+ReplSet.prototype.update = function(ns, ops, options, callback) {
+ if(typeof options == 'function') callback = options, options = {}, options = options || {};
+ if(this.state == DESTROYED) return callback(new MongoError(f('topology was destroyed')));
+
+ // Not connected but we have a disconnecthandler
+ if(!this.s.replicaSetState.hasPrimary() && this.s.disconnectHandler != null) {
+ return this.s.disconnectHandler.add('update', ns, ops, options, callback);
+ }
+
+ // Execute write operation
+ executeWriteOperation(this, 'update', ns, ops, options, callback);
+}
+
+/**
+ * Perform one or more remove operations
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {array} ops An array of removes
+ * @param {boolean} [options.ordered=true] Execute in order or out of order
+ * @param {object} [options.writeConcern={}] Write concern for the operation
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+ReplSet.prototype.remove = function(ns, ops, options, callback) {
+ if(typeof options == 'function') callback = options, options = {}, options = options || {};
+ if(this.state == DESTROYED) return callback(new MongoError(f('topology was destroyed')));
+
+ // Not connected but we have a disconnecthandler
+ if(!this.s.replicaSetState.hasPrimary() && this.s.disconnectHandler != null) {
+ return this.s.disconnectHandler.add('remove', ns, ops, options, callback);
+ }
+
+ // Execute write operation
+ executeWriteOperation(this, 'remove', ns, ops, options, callback);
+}
+
+/**
+ * Execute a command
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {object} cmd The command hash
+ * @param {ReadPreference} [options.readPreference] Specify read preference if command supports it
+ * @param {Connection} [options.connection] Specify connection object to execute command against
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+ReplSet.prototype.command = function(ns, cmd, options, callback) {
+ if(typeof options == 'function') callback = options, options = {}, options = options || {};
+ if(this.state == DESTROYED) return callback(new MongoError(f('topology was destroyed')));
+ var self = this;
+
+ // Establish readPreference
+ var readPreference = options.readPreference ? options.readPreference : ReadPreference.primary;
+
+ // If the server type required by the readPreference is unavailable, buffer the operation
+ if(readPreference.preference == 'primary' && !this.s.replicaSetState.hasPrimary() && this.s.disconnectHandler != null) {
+ return this.s.disconnectHandler.add('command', ns, cmd, options, callback);
+ } else if(readPreference.preference == 'secondary' && !this.s.replicaSetState.hasSecondary() && this.s.disconnectHandler != null) {
+ return this.s.disconnectHandler.add('command', ns, cmd, options, callback);
+ } else if(readPreference.preference != 'primary' && !this.s.replicaSetState.hasSecondary() && !this.s.replicaSetState.hasPrimary() && this.s.disconnectHandler != null) {
+ return this.s.disconnectHandler.add('command', ns, cmd, options, callback);
+ }
+
+ // Pick a server
+ var server = this.s.replicaSetState.pickServer(readPreference);
+ // No server found that matches the provided readPreference
+ if(server == null) {
+ return callback(new MongoError(f("no server found that matches the provided readPreference %s", readPreference)));
+ }
+ // pickServer returned an error instead of a server, pass it on
+ if(!(server instanceof Server)) return callback(server);
+ // Emit debug event with the readPreference actually used
+ if(self.s.debug) self.emit('pickedServer', readPreference, server);
+
+ // Execute the command
+ server.command(ns, cmd, options, callback);
+}
+
+/**
+ * Authenticate using a specified mechanism
+ * @method
+ * @param {string} mechanism The Auth mechanism we are invoking
+ * @param {string} db The db we are invoking the mechanism against
+ * @param {...object} param Parameters for the specific mechanism
+ * @param {authResultCallback} callback A callback function
+ */
+ReplSet.prototype.auth = function(mechanism, db) {
+ var allArgs = Array.prototype.slice.call(arguments, 0);
+ var self = this;
+ var args = Array.prototype.slice.call(arguments, 2);
+ var callback = args.pop();
+
+ // If we don't have the mechanism fail
+ if(this.authProviders[mechanism] == null && mechanism != 'default') {
+ return callback(new MongoError(f("auth provider %s does not exist", mechanism)));
+ }
+
+ // Are we already authenticating, throw
+ if(this.authenticating) {
+ return callback(new MongoError('authentication or logout already in process'));
+ }
+
+ // Topology is not connected, save the call in the provided store to be
+ // executed at some point when the handler deems the topology has reconnected
+ if(!self.s.replicaSetState.hasPrimary() && self.s.disconnectHandler != null) {
+ return self.s.disconnectHandler.add('auth', db, allArgs, {}, callback);
+ }
+
+ // Set to authenticating
+ this.authenticating = true;
+ // All errors
+ var errors = [];
+
+ // Get all the servers
+ var servers = this.s.replicaSetState.allServers();
+ // No servers, signal completion immediately
+ if(servers.length == 0) {
+ this.authenticating = false;
+ return callback(null, true);
+ }
+
+ // Authenticate
+ function auth(server) {
+ // Arguments without a callback
+ var argsWithoutCallback = [mechanism, db].concat(args.slice(0));
+ // Create arguments
+ var finalArguments = argsWithoutCallback.concat([function(err, r) {
+ count = count - 1;
+ // Save all the errors
+ if(err) errors.push({name: server.name, err: err});
+ // We are done
+ if(count == 0) {
+ // Auth is done
+ self.authenticating = false;
+
+ // Return the auth error
+ if(errors.length) return callback(MongoError.create({
+ message: 'authentication fail', errors: errors
+ }), false);
+
+ // Successfully authenticated session
+ callback(null, self);
+ }
+ }]);
+
+ if(!server.lastIsMaster().arbiterOnly) {
+ // Execute the auth only against non arbiter servers
+ server.auth.apply(server, finalArguments);
+ } else {
+ // If we are authenticating against an arbiter just ignore it
+ finalArguments.pop()(null);
+ }
+ }
+
+ // Get total count
+ var count = servers.length;
+ // Authenticate against all servers
+ while(servers.length > 0) {
+ auth(servers.shift());
+ }
+}
+
+/**
+ * Logout from a database
+ * @method
+ * @param {string} db The db we are logging out from
+ * @param {authResultCallback} callback A callback function
+ */
+ReplSet.prototype.logout = function(dbName, callback) {
+ var self = this;
+ // Are we authenticating or logging out, throw
+ if(this.authenticating) {
+ throw new MongoError('authentication or logout already in process');
+ }
+
+ // Ensure no new members are processed while logging out
+ this.authenticating = true;
+
+ // Remove from all auth providers (avoid any reapplication of the auth details)
+ var providers = Object.keys(this.authProviders);
+ for(var i = 0; i < providers.length; i++) {
+ this.authProviders[providers[i]].logout(dbName);
+ }
+
+ // Now logout all the servers
+ var servers = this.s.replicaSetState.allServers();
+ var count = servers.length;
+ if(count == 0) return callback();
+ var errors = [];
+
+ // Execute logout on all server instances, capturing each server in the
+ // closure so the error report references the right server
+ servers.forEach(function(server) {
+ server.logout(dbName, function(err) {
+ count = count - 1;
+ if(err) errors.push({name: server.name, err: err});
+
+ if(count == 0) {
+ // Do not block new operations
+ self.authenticating = false;
+ // If we have one or more errors
+ if(errors.length) return callback(MongoError.create({
+ message: f('logout failed against db %s', dbName), errors: errors
+ }), false);
+
+ // No errors
+ callback();
+ }
+ });
+ });
+}
+
+/**
+ * Get a new cursor for the provided namespace
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {{object}|{Long}} cmd Can be either a command returning a cursor or a cursorId
+ * @param {object} [options.batchSize=0] Batchsize for the operation
+ * @param {array} [options.documents=[]] Initial documents list for cursor
+ * @param {ReadPreference} [options.readPreference] Specify read preference if command supports it
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+ReplSet.prototype.cursor = function(ns, cmd, cursorOptions) {
+ cursorOptions = cursorOptions || {};
+ var FinalCursor = cursorOptions.cursorFactory || this.s.Cursor;
+ return new FinalCursor(this.s.bson, ns, cmd, cursorOptions, this, this.s.options);
+}
+
+/**
+ * A replset connect event, used to verify that the connection is up and running
+ *
+ * @event ReplSet#connect
+ * @type {ReplSet}
+ */
+
+/**
+ * A replset reconnect event, used to verify that the topology reconnected
+ *
+ * @event ReplSet#reconnect
+ * @type {ReplSet}
+ */
+
+/**
+ * A replset fullsetup event, used to signal that all topology members have been contacted.
+ *
+ * @event ReplSet#fullsetup
+ * @type {ReplSet}
+ */
+
+/**
+ * A replset all event, used to signal that all topology members have been contacted.
+ *
+ * @event ReplSet#all
+ * @type {ReplSet}
+ */
+
+/**
+ * A replset failed event, used to signal that initial replset connection failed.
+ *
+ * @event ReplSet#failed
+ * @type {ReplSet}
+ */
+
+/**
+ * A server member left the replicaset
+ *
+ * @event ReplSet#left
+ * @type {function}
+ * @param {string} type The type of member that left (primary|secondary|arbiter)
+ * @param {Server} server The server object that left
+ */
+
+/**
+ * A server member joined the replicaset
+ *
+ * @event ReplSet#joined
+ * @type {function}
+ * @param {string} type The type of member that joined (primary|secondary|arbiter)
+ * @param {Server} server The server object that joined
+ */
+
+/**
+ * A server opening SDAM monitoring event
+ *
+ * @event ReplSet#serverOpening
+ * @type {object}
+ */
+
+/**
+ * A server closed SDAM monitoring event
+ *
+ * @event ReplSet#serverClosed
+ * @type {object}
+ */
+
+/**
+ * A server description SDAM change monitoring event
+ *
+ * @event ReplSet#serverDescriptionChanged
+ * @type {object}
+ */
+
+/**
+ * A topology open SDAM event
+ *
+ * @event ReplSet#topologyOpening
+ * @type {object}
+ */
+
+/**
+ * A topology closed SDAM event
+ *
+ * @event ReplSet#topologyClosed
+ * @type {object}
+ */
+
+/**
+ * A topology structure SDAM change event
+ *
+ * @event ReplSet#topologyDescriptionChanged
+ * @type {object}
+ */
+
+/**
+ * A topology serverHeartbeatStarted SDAM event
+ *
+ * @event ReplSet#serverHeartbeatStarted
+ * @type {object}
+ */
+
+/**
+ * A topology serverHeartbeatFailed SDAM event
+ *
+ * @event ReplSet#serverHeartbeatFailed
+ * @type {object}
+ */
+
+/**
+ * A topology serverHeartbeatSucceeded SDAM change event
+ *
+ * @event ReplSet#serverHeartbeatSucceeded
+ * @type {object}
+ */
+
+module.exports = ReplSet;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/replset_state.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/replset_state.js
new file mode 100644
index 0000000..600e554
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/replset_state.js
@@ -0,0 +1,948 @@
+"use strict"
+
+var inherits = require('util').inherits,
+ f = require('util').format,
+ EventEmitter = require('events').EventEmitter,
+ Logger = require('../connection/logger'),
+ ObjectId = require('bson').ObjectId,
+ ReadPreference = require('./read_preference'),
+ MongoError = require('../error');
+
+var TopologyType = {
+ 'Single': 'Single', 'ReplicaSetNoPrimary': 'ReplicaSetNoPrimary',
+ 'ReplicaSetWithPrimary': 'ReplicaSetWithPrimary', 'Sharded': 'Sharded',
+ 'Unknown': 'Unknown'
+};
+
+var ServerType = {
+ 'Standalone': 'Standalone', 'Mongos': 'Mongos', 'PossiblePrimary': 'PossiblePrimary',
+ 'RSPrimary': 'RSPrimary', 'RSSecondary': 'RSSecondary', 'RSArbiter': 'RSArbiter',
+ 'RSOther': 'RSOther', 'RSGhost': 'RSGhost', 'Unknown': 'Unknown'
+};
+
+var ReplSetState = function(options) {
+ options = options || {};
+ // Add event listener
+ EventEmitter.call(this);
+ // Topology state
+ this.topologyType = TopologyType.ReplicaSetNoPrimary;
+ this.setName = options.setName;
+
+ // Server set
+ this.set = {};
+
+ // Unpacked options
+ this.id = options.id;
+ this.setName = options.setName;
+
+ // Replicaset logger
+ this.logger = options.logger || Logger('ReplSet', options);
+
+ // Server selection index
+ this.index = 0;
+ // Acceptable latency
+ this.acceptableLatency = options.acceptableLatency || 15;
+
+ // heartbeatFrequencyMS
+ this.heartbeatFrequencyMS = options.heartbeatFrequencyMS || 10000;
+
+ // Server side
+ this.primary = null;
+ this.secondaries = [];
+ this.arbiters = [];
+ this.passives = [];
+ this.ghosts = [];
+ // Current unknown hosts
+ this.unknownServers = [];
+ // In set status
+ this.set = {};
+ // Status
+ this.maxElectionId = null;
+ this.maxSetVersion = 0;
+ // Description of the Replicaset
+ this.replicasetDescription = {
+ "topologyType": "Unknown", "servers": []
+ };
+}
+
+inherits(ReplSetState, EventEmitter);
+
+ReplSetState.prototype.hasPrimaryAndSecondary = function(server) {
+ return this.primary != null && this.secondaries.length > 0;
+}
+
+ReplSetState.prototype.hasPrimary = function(server) {
+ return this.primary != null;
+}
+
+ReplSetState.prototype.hasSecondary = function(server) {
+ return this.secondaries.length > 0;
+}
+
+ReplSetState.prototype.allServers = function(options) {
+ options = options || {};
+ var servers = this.primary ? [this.primary] : [];
+ servers = servers.concat(this.secondaries);
+ if(!options.ignoreArbiters) servers = servers.concat(this.arbiters);
+ servers = servers.concat(this.passives);
+ return servers;
+}
+
+ReplSetState.prototype.destroy = function() {
+ // Destroy all sockets
+ if(this.primary) this.primary.destroy();
+ this.secondaries.forEach(function(x) { x.destroy(); });
+ this.arbiters.forEach(function(x) { x.destroy(); });
+ this.passives.forEach(function(x) { x.destroy(); });
+ this.ghosts.forEach(function(x) { x.destroy(); });
+ // Clear out the complete state
+ this.secondaries = [];
+ this.arbiters = [];
+ this.passives = [];
+ this.ghosts = [];
+ this.unknownServers = [];
+ this.set = {};
+}
+
+ReplSetState.prototype.remove = function(server, options) {
+ options = options || {};
+
+ // Only remove if the current server is not connected
+ var servers = this.primary ? [this.primary] : [];
+ servers = servers.concat(this.secondaries);
+ servers = servers.concat(this.arbiters);
+ servers = servers.concat(this.passives);
+
+ // Check if it's active and this is just a failed connection attempt
+ for(var i = 0; i < servers.length; i++) {
+ if(!options.force && servers[i].equals(server) && servers[i].isConnected && servers[i].isConnected()) {
+ return;
+ }
+ }
+
+ // If we have it in the set remove it
+ if(this.set[server.name]) {
+ this.set[server.name].type = ServerType.Unknown;
+ this.set[server.name].electionId = null;
+ this.set[server.name].setName = null;
+ this.set[server.name].setVersion = null;
+ }
+
+ // Remove type
+ var removeType = null;
+
+ // Remove from any lists
+ if(this.primary && this.primary.equals(server)) {
+ this.primary = null;
+ this.topologyType = TopologyType.ReplicaSetNoPrimary;
+ removeType = 'primary';
+ }
+
+ // Remove from any other server lists
+ removeType = removeFrom(server, this.secondaries) ? 'secondary' : removeType;
+ removeType = removeFrom(server, this.arbiters) ? 'arbiter' : removeType;
+ removeType = removeFrom(server, this.passives) ? 'secondary' : removeType;
+ removeFrom(server, this.ghosts);
+ removeFrom(server, this.unknownServers);
+
+ // Do we have a removeType
+ if(removeType) {
+ this.emit('left', removeType, server);
+ }
+}
+
+ReplSetState.prototype.update = function(server) {
+ var self = this;
+ // Get the current ismaster
+ var ismaster = server.lastIsMaster();
+
+ //
+ // Add any hosts
+ //
+ if(ismaster) {
+ // Join all the possible new hosts
+ var hosts = Array.isArray(ismaster.hosts) ? ismaster.hosts : [];
+ hosts = hosts.concat(Array.isArray(ismaster.arbiters) ? ismaster.arbiters : []);
+ hosts = hosts.concat(Array.isArray(ismaster.passives) ? ismaster.passives : []);
+
+ // Add all hosts as unknownServers
+ for(var i = 0; i < hosts.length; i++) {
+ // Add to the list of unknown server
+ if(this.unknownServers.indexOf(hosts[i]) == -1
+ && (!this.set[hosts[i]] || this.set[hosts[i]].type == ServerType.Unknown)) {
+ this.unknownServers.push(hosts[i]);
+ }
+
+ if(!this.set[hosts[i]]) {
+ this.set[hosts[i]] = {
+ type: ServerType.Unknown,
+ electionId: null,
+ setName: null,
+ setVersion: null
+ }
+ }
+ }
+ }
+
+ //
+ // Unknown server
+ //
+ if(!ismaster && !inList(ismaster, server, this.unknownServers)) {
+ self.set[server.name] = {
+ type: ServerType.Unknown, setVersion: null, electionId: null, setName: null
+ }
+
+ if(self.unknownServers.indexOf(server.name) == -1) {
+ self.unknownServers.push(server.name);
+ }
+
+ // Set the topology
+ return false;
+ }
+
+ //
+ // Is this a mongos
+ //
+ if(ismaster && ismaster.msg == 'isdbgrid') {
+ return false;
+ }
+
+ // A RSOther instance
+ if((ismaster.setName && ismaster.hidden)
+ || (ismaster.setName && !ismaster.ismaster && !ismaster.secondary && !ismaster.arbiterOnly && !ismaster.passive)) {
+ self.set[server.name] = {
+ type: ServerType.RSOther, setVersion: null,
+ electionId: null, setName: ismaster.setName
+ }
+ // Set the topology
+ this.topologyType = this.primary ? TopologyType.ReplicaSetWithPrimary : TopologyType.ReplicaSetNoPrimary;
+ if(ismaster.setName) this.setName = ismaster.setName;
+ return false;
+ }
+
+ // A RSGhost instance
+ if(ismaster.isreplicaset) {
+ self.set[server.name] = {
+ type: ServerType.RSGhost, setVersion: null,
+ electionId: null, setName: null
+ }
+
+ // Set the topology
+ this.topologyType = this.primary ? TopologyType.ReplicaSetWithPrimary : TopologyType.ReplicaSetNoPrimary;
+ if(ismaster.setName) this.setName = ismaster.setName;
+
+ // Set the topology
+ return false;
+ }
+
+ //
+ // Standalone server, destroy and return
+ //
+ if(ismaster && ismaster.ismaster && !ismaster.setName) {
+ this.topologyType = this.primary ? TopologyType.ReplicaSetWithPrimary : TopologyType.Unknown;
+ this.remove(server, {force:true});
+ return false;
+ }
+
+ //
+ // Server in maintenance mode
+ //
+ if(ismaster && !ismaster.ismaster && !ismaster.secondary && !ismaster.arbiterOnly) {
+ this.remove(server, {force:true});
+ return false;
+ }
+
+ //
+ // If the .me field does not match the passed in server
+ //
+ if(ismaster.me && ismaster.me != server.name) {
+ if(this.logger.isWarn()) {
+ this.logger.warn(f('the seedlist server was removed due to its address %s not matching its ismaster.me address %s', server.name, ismaster.me));
+ }
+
+ // Set the type of topology we have
+ if(this.primary && !this.primary.equals(server)) {
+ this.topologyType = TopologyType.ReplicaSetWithPrimary;
+ } else {
+ this.topologyType = TopologyType.ReplicaSetNoPrimary;
+ }
+
+ return false;
+ }
+
+ //
+ // Primary handling
+ //
+ if(!this.primary && ismaster.ismaster && ismaster.setName) {
+ var ismasterElectionId = server.lastIsMaster().electionId;
+ if(this.setName && this.setName != ismaster.setName) {
+ this.topologyType = TopologyType.ReplicaSetNoPrimary;
+ return new MongoError(f('setName from ismaster does not match provided connection setName [%s] != [%s]', ismaster.setName, this.setName));
+ }
+
+ if(!this.maxElectionId && ismasterElectionId) {
+ this.maxElectionId = ismasterElectionId;
+ } else if(this.maxElectionId && ismasterElectionId) {
+ var result = compareObjectIds(this.maxElectionId, ismasterElectionId);
+ // Get the electionIds
+ var ismasterSetVersion = server.lastIsMaster().setVersion;
+
+ if(result == 1) {
+ this.topologyType = TopologyType.ReplicaSetNoPrimary;
+ return false;
+ } else if(result == 0 && ismasterSetVersion) {
+ if(ismasterSetVersion < this.maxSetVersion) {
+ this.topologyType = TopologyType.ReplicaSetNoPrimary;
+ return false;
+ }
+ }
+
+ this.maxSetVersion = ismasterSetVersion;
+ this.maxElectionId = ismasterElectionId;
+ }
+
+ self.primary = server;
+ self.set[server.name] = {
+ type: ServerType.RSPrimary,
+ setVersion: ismaster.setVersion,
+ electionId: ismaster.electionId,
+ setName: ismaster.setName
+ }
+
+ // Set the topology
+ this.topologyType = TopologyType.ReplicaSetWithPrimary;
+ if(ismaster.setName) this.setName = ismaster.setName;
+ removeFrom(server, self.unknownServers);
+ removeFrom(server, self.secondaries);
+ removeFrom(server, self.passives);
+ self.emit('joined', 'primary', server);
+ emitTopologyDescriptionChanged(self);
+ return true;
+ } else if(ismaster.ismaster && ismaster.setName) {
+ // Get the electionIds
+ var currentElectionId = self.set[self.primary.name].electionId;
+ var currentSetVersion = self.set[self.primary.name].setVersion;
+ var currentSetName = self.set[self.primary.name].setName;
+ var ismasterElectionId = server.lastIsMaster().electionId;
+ var ismasterSetVersion = server.lastIsMaster().setVersion;
+ var ismasterSetName = server.lastIsMaster().setName;
+
+ // Is it the same server instance
+ if(this.primary.equals(server)
+ && currentSetName == ismasterSetName) {
+ return false;
+ }
+
+ // If we do not have the same rs name
+ if(currentSetName && currentSetName != ismasterSetName) {
+ if(!this.primary.equals(server)) {
+ this.topologyType = TopologyType.ReplicaSetWithPrimary;
+ } else {
+ this.topologyType = TopologyType.ReplicaSetNoPrimary;
+ }
+
+ return false;
+ }
+
+ // Check if we need to replace the server
+ if(currentElectionId && ismasterElectionId) {
+ var result = compareObjectIds(currentElectionId, ismasterElectionId);
+
+ if(result == 1) {
+ return false;
+ } else if(result == 0 && (currentSetVersion > ismasterSetVersion)) {
+ return false;
+ }
+ } else if(!currentElectionId && ismasterElectionId
+ && ismasterSetVersion) {
+ if(ismasterSetVersion < this.maxSetVersion) {
+ return false;
+ }
+ }
+
+ if(!this.maxElectionId && ismasterElectionId) {
+ this.maxElectionId = ismasterElectionId;
+ } else if(this.maxElectionId && ismasterElectionId) {
+ var result = compareObjectIds(this.maxElectionId, ismasterElectionId);
+
+ if(result == 1) {
+ return false;
+ } else if(result == 0 && currentSetVersion && ismasterSetVersion) {
+ if(ismasterSetVersion < this.maxSetVersion) {
+ return false;
+ }
+ } else {
+ if(ismasterSetVersion < this.maxSetVersion) {
+ return false;
+ }
+ }
+
+ this.maxElectionId = ismasterElectionId;
+ this.maxSetVersion = ismasterSetVersion;
+ } else {
+ this.maxSetVersion = ismasterSetVersion;
+ }
+
+ // Modify the entry to unknown
+ self.set[self.primary.name] = {
+ type: ServerType.Unknown, setVersion: null,
+ electionId: null, setName: null
+ }
+
+ // Signal primary left
+ self.emit('left', 'primary', this.primary);
+ // Destroy the instance
+ self.primary.destroy();
+ // Set the new instance
+ self.primary = server;
+ // Set the set information
+ self.set[server.name] = {
+ type: ServerType.RSPrimary, setVersion: ismaster.setVersion,
+ electionId: ismaster.electionId, setName: ismaster.setName
+ }
+
+ // Set the topology
+ this.topologyType = TopologyType.ReplicaSetWithPrimary;
+ if(ismaster.setName) this.setName = ismaster.setName;
+ removeFrom(server, self.unknownServers);
+ removeFrom(server, self.secondaries);
+ removeFrom(server, self.passives);
+ self.emit('joined', 'primary', server);
+ emitTopologyDescriptionChanged(self);
+ return true;
+ }
+
+ // A possible instance
+ if(!this.primary && ismaster.primary) {
+ self.set[ismaster.primary] = {
+ type: ServerType.PossiblePrimary, setVersion: null,
+ electionId: null, setName: null
+ }
+ }
+
+ //
+ // Secondary handling
+ //
+ if(ismaster.secondary && ismaster.setName
+ && !inList(ismaster, server, this.secondaries)
+ && this.setName && this.setName == ismaster.setName) {
+ addToList(self, ServerType.RSSecondary, ismaster, server, this.secondaries);
+ // Set the topology
+ this.topologyType = this.primary ? TopologyType.ReplicaSetWithPrimary : TopologyType.ReplicaSetNoPrimary;
+ if(ismaster.setName) this.setName = ismaster.setName;
+ removeFrom(server, self.unknownServers);
+
+ // Remove primary
+ if(this.primary && this.primary.name == server.name) {
+ server.destroy();
+ this.primary = null;
+ self.emit('left', 'primary', server);
+ }
+
+ self.emit('joined', 'secondary', server);
+ emitTopologyDescriptionChanged(self);
+ return true;
+ }
+
+ //
+ // Arbiter handling
+ //
+ if(ismaster.arbiterOnly && ismaster.setName
+ && !inList(ismaster, server, this.arbiters)
+ && this.setName && this.setName == ismaster.setName) {
+ addToList(self, ServerType.RSArbiter, ismaster, server, this.arbiters);
+ // Set the topology
+ this.topologyType = this.primary ? TopologyType.ReplicaSetWithPrimary : TopologyType.ReplicaSetNoPrimary;
+ if(ismaster.setName) this.setName = ismaster.setName;
+ removeFrom(server, self.unknownServers);
+ self.emit('joined', 'arbiter', server);
+ emitTopologyDescriptionChanged(self);
+ return true;
+ }
+
+ //
+ // Passive handling
+ //
+ if(ismaster.passive && ismaster.setName
+ && !inList(ismaster, server, this.passives)
+ && this.setName && this.setName == ismaster.setName) {
+ addToList(self, ServerType.RSSecondary, ismaster, server, this.passives);
+ // Set the topology
+ this.topologyType = this.primary ? TopologyType.ReplicaSetWithPrimary : TopologyType.ReplicaSetNoPrimary;
+ if(ismaster.setName) this.setName = ismaster.setName;
+ removeFrom(server, self.unknownServers);
+
+ // Remove primary
+ if(this.primary && this.primary.name == server.name) {
+ server.destroy();
+ this.primary = null;
+ self.emit('left', 'primary', server);
+ }
+
+ self.emit('joined', 'secondary', server);
+ emitTopologyDescriptionChanged(self);
+ return true;
+ }
+
+ //
+ // Remove the primary
+ //
+ if(this.set[server.name] && this.set[server.name].type == ServerType.RSPrimary) {
+ self.emit('left', 'primary', this.primary);
+ this.primary.destroy();
+ this.primary = null;
+ this.topologyType = TopologyType.ReplicaSetNoPrimary;
+ return false;
+ }
+
+ this.topologyType = this.primary ? TopologyType.ReplicaSetWithPrimary : TopologyType.ReplicaSetNoPrimary;
+ return false;
+}
+
+/**
+ * Recalculate single server max staleness
+ * @method
+ */
+ReplSetState.prototype.updateServerMaxStaleness = function(server, haInterval) {
+ // Locate the largest secondary lastWriteDate
+ var max = 0;
+ // Go over all secondaries
+ for(var i = 0; i < this.secondaries.length; i++) {
+ max = Math.max(max, this.secondaries[i].lastWriteDate);
+ }
+
+ // Perform this servers staleness calculation
+ if(server.ismaster.maxWireVersion >= 5
+ && server.ismaster.secondary
+ && this.hasPrimary()) {
+ server.staleness = (server.lastUpdateTime - server.lastWriteDate)
+ - (this.primary.lastUpdateTime - this.primary.lastWriteDate)
+ + haInterval;
+ } else if(server.ismaster.maxWireVersion >= 5
+ && server.ismaster.secondary){
+ server.staleness = max - server.lastWriteDate + haInterval;
+ }
+}
+
+/**
+ * Recalculate all the staleness values for secondaries
+ * @method
+ */
+ReplSetState.prototype.updateSecondariesMaxStaleness = function(haInterval) {
+ for(var i = 0; i < this.secondaries.length; i++) {
+ this.updateServerMaxStaleness(this.secondaries[i], haInterval);
+ }
+}
+
+/**
+ * Pick a server by the passed in ReadPreference
+ * @method
+ * @param {ReadPreference} readPreference The ReadPreference instance to use
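+ * @example
+ * // Hypothetical usage sketch, assuming a populated ReplSetState instance `state`;
+ * // pickServer returns a MongoError instance (rather than throwing) on failure.
+ * var server = state.pickServer(ReadPreference.secondaryPreferred);
+ * if(server instanceof MongoError) throw server;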
+ */
+ReplSetState.prototype.pickServer = function(readPreference) {
+ // If no read Preference set to primary by default
+ readPreference = readPreference || ReadPreference.primary;
+
+ // maxStalenessMS is not allowed with a primary read
+ if(readPreference.preference == 'primary' && readPreference.maxStalenessMS) {
+ return new MongoError('primary readPreference incompatible with maxStalenessMS');
+ }
+
+ // Check if we have any incompatible servers for maxStalenessMS
+ var allservers = this.primary ? [this.primary] : [];
+ allservers = allservers.concat(this.secondaries);
+
+ // Error out if any of the servers does not support the wire protocol
+ // version required when maxStalenessMS is specified on the readPreference
+ if(readPreference.maxStalenessMS) {
+ for(var i = 0; i < allservers.length; i++) {
+ if(allservers[i].ismaster.maxWireVersion < 5) {
+ return new MongoError('maxStalenessMS not supported by at least one of the replicaset members');
+ }
+ }
+ }
+
+ // Do we have the nearest readPreference
+ if(readPreference.preference == 'nearest' && !readPreference.maxStalenessMS) {
+ return pickNearest(this, readPreference);
+ } else if(readPreference.preference == 'nearest' && readPreference.maxStalenessMS) {
+ return pickNearestMaxStalenessMS(this, readPreference);
+ }
+
+ // Get all the secondaries
+ var secondaries = this.secondaries;
+
+ // Check if we can satisfy any of the basic read preferences
+ if(readPreference.equals(ReadPreference.secondary)
+ && secondaries.length == 0) {
+ return new MongoError("no secondary server available");
+ }
+
+ if(readPreference.equals(ReadPreference.secondaryPreferred)
+ && secondaries.length == 0
+ && this.primary == null) {
+ return new MongoError("no secondary or primary server available");
+ }
+
+ if(readPreference.equals(ReadPreference.primary)
+ && this.primary == null) {
+ return new MongoError("no primary server available");
+ }
+
+ // Secondary preferred or just secondaries
+ if(readPreference.equals(ReadPreference.secondaryPreferred)
+ || readPreference.equals(ReadPreference.secondary)) {
+
+ if(secondaries.length > 0 && !readPreference.maxStalenessMS) {
+ // Pick nearest of any other servers available
+ var server = pickNearest(this, readPreference);
+ // No server in the window return primary
+ if(server) {
+ return server;
+ }
+ } else if(secondaries.length > 0 && readPreference.maxStalenessMS) {
+ // Pick nearest of any other servers available
+ var server = pickNearestMaxStalenessMS(this, readPreference);
+ // No server in the window return primary
+ if(server) {
+ return server;
+ }
+ }
+
+ if(readPreference.equals(ReadPreference.secondaryPreferred)){
+ return this.primary;
+ }
+
+ return null;
+ }
+
+ // Primary preferred
+ if(readPreference.equals(ReadPreference.primaryPreferred)) {
+ var server = null;
+
+ // We prefer the primary if it's available
+ if(this.primary) {
+ return this.primary;
+ }
+
+ // Pick a secondary
+ if(secondaries.length > 0 && !readPreference.maxStalenessMS) {
+ server = pickNearest(this, readPreference);
+ } else if(secondaries.length > 0 && readPreference.maxStalenessMS) {
+ server = pickNearestMaxStalenessMS(this, readPreference);
+ }
+
+ // Did we find a server
+ if(server) return server;
+ }
+
+ // Return the primary
+ return this.primary;
+}
+
+//
+// Filter servers by tags
+var filterByTags = function(readPreference, servers) {
+ if(readPreference.tags == null) return servers;
+ var filteredServers = [];
+ var tagsArray = Array.isArray(readPreference.tags) ? readPreference.tags : [readPreference.tags];
+
+ // Iterate over the tags
+ for(var j = 0; j < tagsArray.length; j++) {
+ var tags = tagsArray[j];
+
+ // Iterate over all the servers
+ for(var i = 0; i < servers.length; i++) {
+ var serverTag = servers[i].lastIsMaster().tags || {};
+
+ // Did we find a matching server
+ var found = true;
+ // Check if the server is valid
+ for(var name in tags) {
+ if(serverTag[name] != tags[name]) found = false;
+ }
+
+ // Add to candidate list
+ if(found) {
+ filteredServers.push(servers[i]);
+ }
+ }
+
+ // We found servers by the highest priority
+ if(found) break;
+ }
+
+ // Returned filtered servers
+ return filteredServers;
+}
+
+function pickNearestMaxStalenessMS(self, readPreference) {
+ // Only consider the primary and secondaries as candidates
+ var servers = [];
+ var heartbeatFrequencyMS = self.heartbeatFrequencyMS;
+
+ // maxStalenessMS must be at least twice the heartbeatFrequencyMS
+ if(readPreference.maxStalenessMS < (heartbeatFrequencyMS * 2)) {
+ return new MongoError('maxStalenessMS must be at least twice the heartbeatFrequencyMS');
+ }
+
+ // Add primary to list if not a secondary read preference
+ if(self.primary && readPreference.preference != 'secondary') {
+ servers.push(self.primary);
+ }
+
+ // Add all the secondaries
+ for(var i = 0; i < self.secondaries.length; i++) {
+ servers.push(self.secondaries[i]);
+ }
+
+ // Filter by tags
+ servers = filterByTags(readPreference, servers);
+
+ // Filter out servers staler than the maxStalenessMS window
+ servers = servers.filter(function(s) {
+ return s.staleness <= readPreference.maxStalenessMS;
+ });
+
+ // Sort by ping time (the comparator must return a number, not a boolean)
+ servers.sort(function(a, b) {
+ return a.lastIsMasterMS - b.lastIsMasterMS;
+ });
+
+ // No servers in the window; return null (callers fall back to the primary)
+ if(servers.length == 0) {
+ return null;
+ }
+
+ // Ensure index does not overflow the number of available servers
+ self.index = self.index % servers.length;
+
+ // Get the server
+ var server = servers[self.index];
+ // Add to the index
+ self.index = self.index + 1;
+ // Return the first server of the sorted and filtered list
+ return server;
+}
+
+function pickNearest(self, readPreference) {
+ // Only consider the primary and secondaries as candidates
+ var servers = [];
+
+ // Add primary to list if not a secondary read preference
+ if(self.primary && readPreference.preference != 'secondary') {
+ servers.push(self.primary);
+ }
+
+ // Add all the secondaries
+ for(var i = 0; i < self.secondaries.length; i++) {
+ servers.push(self.secondaries[i]);
+ }
+
+ // Filter by tags
+ servers = filterByTags(readPreference, servers);
+
+ // Sort by ping time (the comparator must return a number, not a boolean)
+ servers.sort(function(a, b) {
+ return a.lastIsMasterMS - b.lastIsMasterMS;
+ });
+
+ // Locate lowest time (picked servers are lowest time + acceptable Latency margin)
+ var lowest = servers.length > 0 ? servers[0].lastIsMasterMS : 0;
+
+ // Filter by latency
+ servers = servers.filter(function(s) {
+ return s.lastIsMasterMS <= lowest + self.acceptableLatency;
+ });
+
+ // No servers in the window; return null (callers fall back to the primary)
+ if(servers.length == 0) {
+ return null;
+ }
+
+ // Ensure index does not overflow the number of available servers
+ self.index = self.index % servers.length;
+ // Get the server
+ var server = servers[self.index];
+ // Add to the index
+ self.index = self.index + 1;
+ // Return the first server of the sorted and filtered list
+ return server;
+}
+
+function inList(ismaster, server, list) {
+ for(var i = 0; i < list.length; i++) {
+ if(list[i].name == server.name) return true;
+ }
+
+ return false;
+}
+
+function addToList(self, type, ismaster, server, list) {
+ // Update set information about the server instance
+ self.set[server.name].type = type;
+ self.set[server.name].electionId = ismaster ? ismaster.electionId : ismaster;
+ self.set[server.name].setName = ismaster ? ismaster.setName : ismaster;
+ self.set[server.name].setVersion = ismaster ? ismaster.setVersion : ismaster;
+ // Add to the list
+ list.push(server);
+}
+
+function compareObjectIds(id1, id2) {
+ var a = new Buffer(id1.toHexString(), 'hex');
+ var b = new Buffer(id2.toHexString(), 'hex');
+
+
+ if(typeof Buffer.compare === 'function') {
+ return Buffer.compare(a, b);
+ }
+
+ var x = a.length;
+ var y = b.length;
+ var len = Math.min(x, y);
+
+ for (var i = 0; i < len; i++) {
+ if (a[i] !== b[i]) {
+ break;
+ }
+ }
+
+ if (i !== len) {
+ x = a[i];
+ y = b[i];
+ }
+
+ return x < y ? -1 : y < x ? 1 : 0;
+}
+
+function removeFrom(server, list) {
+ for(var i = 0; i < list.length; i++) {
+ if(list[i].equals && list[i].equals(server)) {
+ list.splice(i, 1);
+ return true;
+ } else if(typeof list[i] == 'string' && list[i] == server.name) {
+ list.splice(i, 1);
+ return true;
+ }
+ }
+
+ return false;
+}
+
+function emitTopologyDescriptionChanged(self) {
+ if(self.listeners('topologyDescriptionChanged').length > 0) {
+ var topology = 'Unknown';
+ var setName = self.setName;
+
+ if(self.hasPrimaryAndSecondary()) {
+ topology = 'ReplicaSetWithPrimary';
+ } else if(!self.hasPrimary() && self.hasSecondary()) {
+ topology = 'ReplicaSetNoPrimary';
+ }
+
+ // Generate description
+ var description = {
+ topologyType: topology,
+ setName: setName,
+ servers: []
+ }
+
+ // Add the primary to the list
+ if(self.hasPrimary()) {
+ var desc = self.primary.getDescription();
+ desc.type = 'RSPrimary';
+ description.servers.push(desc);
+ }
+
+ // Add all the secondaries
+ description.servers = description.servers.concat(self.secondaries.map(function(x) {
+ var description = x.getDescription();
+ description.type = 'RSSecondary';
+ return description;
+ }));
+
+ // Add all the arbiters
+ description.servers = description.servers.concat(self.arbiters.map(function(x) {
+ var description = x.getDescription();
+ description.type = 'RSArbiter';
+ return description;
+ }));
+
+ // Add all the passives
+ description.servers = description.servers.concat(self.passives.map(function(x) {
+ var description = x.getDescription();
+ description.type = 'RSSecondary';
+ return description;
+ }));
+
+ // Create the result
+ var result = {
+ topologyId: self.id,
+ previousDescription: self.replicasetDescription,
+ newDescription: description,
+ diff: diff(self.replicasetDescription, description)
+ };
+
+ // Emit the topologyDescription change
+ self.emit('topologyDescriptionChanged', result);
+
+ // Set the new description
+ self.replicasetDescription = description;
+ }
+}
+
+function diff(previous, current) {
+ // Difference document
+ var diff = {
+ servers: []
+ }
+
+ // Previous entry
+ if(!previous) {
+ previous = { servers: [] };
+ }
+
+ // Got through all the servers
+ for(var i = 0; i < previous.servers.length; i++) {
+ var prevServer = previous.servers[i];
+
+ // Go through all current servers
+ for(var j = 0; j < current.servers.length; j++) {
+ var currServer = current.servers[j];
+
+ // Matching server
+ if(prevServer.address === currServer.address) {
+ // We had a change in state
+ if(prevServer.type != currServer.type) {
+ diff.servers.push({
+ address: prevServer.address,
+ from: prevServer.type,
+ to: currServer.type
+ });
+ }
+ }
+ }
+ }
+
+ // Return difference
+ return diff;
+}
+
+module.exports = ReplSetState;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/server.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/server.js
new file mode 100644
index 0000000..266b02f
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/server.js
@@ -0,0 +1,842 @@
+"use strict"
+
+var inherits = require('util').inherits,
+ f = require('util').format,
+ EventEmitter = require('events').EventEmitter,
+ BSON = require('bson').native().BSON,
+ ReadPreference = require('./read_preference'),
+ Logger = require('../connection/logger'),
+ debugOptions = require('../connection/utils').debugOptions,
+ Pool = require('../connection/pool'),
+ Query = require('../connection/commands').Query,
+ MongoError = require('../error'),
+ PreTwoSixWireProtocolSupport = require('../wireprotocol/2_4_support'),
+ TwoSixWireProtocolSupport = require('../wireprotocol/2_6_support'),
+ ThreeTwoWireProtocolSupport = require('../wireprotocol/3_2_support'),
+ BasicCursor = require('../cursor'),
+ sdam = require('./shared'),
+ assign = require('./shared').assign,
+ createClientInfo = require('./shared').createClientInfo;
+
+// Used for filtering out fields for logging
+var debugFields = ['reconnect', 'reconnectTries', 'reconnectInterval', 'emitError', 'cursorFactory', 'host'
+ , 'port', 'size', 'keepAlive', 'keepAliveInitialDelay', 'noDelay', 'connectionTimeout', 'checkServerIdentity'
+ , 'socketTimeout', 'singleBufferSerializtion', 'ssl', 'ca', 'cert', 'key', 'rejectUnauthorized', 'promoteLongs', 'promoteValues'
+ , 'promoteBuffers', 'servername'];
+
+// Server instance id
+var id = 0;
+var serverAccounting = false;
+var servers = {};
+
+/**
+ * Creates a new Server instance
+ * @class
+ * @param {boolean} [options.reconnect=true] Server will attempt to reconnect on loss of connection
+ * @param {number} [options.reconnectTries=30] Server attempt to reconnect #times
+ * @param {number} [options.reconnectInterval=1000] Server will wait # milliseconds between retries
+ * @param {number} [options.monitoring=true] Enable the server state monitoring (calling ismaster at monitoringInterval)
+ * @param {number} [options.monitoringInterval=5000] The interval of calling ismaster when monitoring is enabled.
+ * @param {Cursor} [options.cursorFactory=Cursor] The cursor factory class used for all query cursors
+ * @param {string} options.host The server host
+ * @param {number} options.port The server port
+ * @param {number} [options.size=5] Server connection pool size
+ * @param {boolean} [options.keepAlive=true] TCP Connection keep alive enabled
+ * @param {number} [options.keepAliveInitialDelay=0] Initial delay before TCP keep alive enabled
+ * @param {boolean} [options.noDelay=true] TCP Connection no delay
+ * @param {number} [options.connectionTimeout=0] TCP Connection timeout setting
+ * @param {number} [options.socketTimeout=0] TCP Socket timeout setting
+ * @param {boolean} [options.ssl=false] Use SSL for connection
+ * @param {boolean|function} [options.checkServerIdentity=true] Ensure we check server identity during SSL, set to false to disable checking. Only works for Node 0.12.x or higher. You can pass in a boolean or your own checkServerIdentity override function.
+ * @param {Buffer} [options.ca] SSL Certificate store binary buffer
+ * @param {Buffer} [options.cert] SSL Certificate binary buffer
+ * @param {Buffer} [options.key] SSL Key file binary buffer
+ * @param {string} [options.passphrase] SSL Certificate pass phrase
+ * @param {boolean} [options.rejectUnauthorized=true] Reject unauthorized server certificates
+ * @param {string} [options.servername=null] String containing the server name requested via TLS SNI.
+ * @param {boolean} [options.promoteLongs=true] Convert Long values from the db into Numbers if they fit into 53 bits
+ * @param {boolean} [options.promoteValues=true] Promotes BSON values to native types where possible, set to false to only receive wrapper types.
+ * @param {boolean} [options.promoteBuffers=false] Promotes Binary BSON values to native Node Buffers.
+ * @param {string} [options.appname=null] Application name, passed in on ismaster call and logged in mongod server logs. Maximum size 128 bytes.
+ * @param {boolean} [options.domainsEnabled=false] Enable the wrapping of the callback in the current domain, disabled by default to avoid perf hit.
+ * @return {Server} A server instance
+ * @fires Server#connect
+ * @fires Server#close
+ * @fires Server#error
+ * @fires Server#timeout
+ * @fires Server#parseError
+ * @fires Server#reconnect
+ * @fires Server#reconnectFailed
+ * @fires Server#serverHeartbeatStarted
+ * @fires Server#serverHeartbeatSucceeded
+ * @fires Server#serverHeartbeatFailed
+ * @fires Server#topologyOpening
+ * @fires Server#topologyClosed
+ * @fires Server#topologyDescriptionChanged
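+ * @example
+ * // Minimal connection sketch; assumes a mongod listening on localhost:27017
+ * var server = new Server({host: 'localhost', port: 27017});
+ * server.on('connect', function(_server) {
+ *   // Connected; tear the pool down again
+ *   _server.destroy();
+ * });
+ * server.connect();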
+ */
+var Server = function(options) {
+ options = options || {};
+
+ // Add event listener
+ EventEmitter.call(this);
+
+ // Server instance id
+ this.id = id++;
+
+ // Reconnect retries
+ var reconnectTries = options.reconnectTries || 30;
+
+ // Internal state
+ this.s = {
+ // Options
+ options: options,
+ // Logger
+ logger: Logger('Server', options),
+ // Factory overrides
+ Cursor: options.cursorFactory || BasicCursor,
+ // BSON instance
+ bson: options.bson || new BSON(),
+ // Pool
+ pool: null,
+ // Disconnect handler
+ disconnectHandler: options.disconnectHandler,
+ // Monitor thread (keeps the connection alive)
+ monitoring: typeof options.monitoring == 'boolean' ? options.monitoring : true,
+ // Is the server in a topology
+ inTopology: typeof options.inTopology == 'boolean' ? options.inTopology : false,
+ // Monitoring timeout
+ monitoringInterval: typeof options.monitoringInterval == 'number'
+ ? options.monitoringInterval
+ : 5000,
+ // Topology id
+ topologyId: -1
+ }
+
+ // Current ismaster
+ this.ismaster = null;
+ // Current ping time
+ this.lastIsMasterMS = -1;
+ // The monitoringProcessId
+ this.monitoringProcessId = null;
+ // Initial connection
+ this.initalConnect = true;
+ // Wire protocol handler, default to oldest known protocol handler
+ // this gets changed when the first ismaster is called.
+ this.wireProtocolHandler = new PreTwoSixWireProtocolSupport();
+ // Default type
+ this._type = 'server';
+ // Set the client info
+ this.clientInfo = createClientInfo(options);
+
+ // Max staleness values
+ // last time we updated the ismaster state
+ this.lastUpdateTime = 0;
+ // Last write time
+ this.lastWriteDate = 0;
+ // Staleness
+ this.staleness = 0;
+}
+
+inherits(Server, EventEmitter);
+
+Object.defineProperty(Server.prototype, 'type', {
+ enumerable:true, get: function() { return this._type; }
+});
+
+Server.enableServerAccounting = function() {
+ serverAccounting = true;
+ servers = {};
+}
+
+Server.disableServerAccounting = function() {
+ serverAccounting = false;
+}
+
+Server.servers = function() {
+ return servers;
+}
+
+Object.defineProperty(Server.prototype, 'name', {
+ enumerable:true,
+ get: function() { return this.s.options.host + ":" + this.s.options.port; }
+});
+
+function configureWireProtocolHandler(self, ismaster) {
+ // 3.2 wire protocol handler
+ if(ismaster.maxWireVersion >= 4) {
+ return new ThreeTwoWireProtocolSupport(new TwoSixWireProtocolSupport());
+ }
+
+ // 2.6 wire protocol handler
+ if(ismaster.maxWireVersion >= 2) {
+ return new TwoSixWireProtocolSupport();
+ }
+
+ // 2.4 or earlier wire protocol handler
+ return new PreTwoSixWireProtocolSupport();
+}
+
+function disconnectHandler(self, type, ns, cmd, options, callback) {
+ // Topology is not connected, save the call in the provided store to be
+ // executed at some point when the handler deems it has reconnected
+ if(!self.s.pool.isConnected() && self.s.disconnectHandler != null && !options.monitoring) {
+ self.s.disconnectHandler.add(type, ns, cmd, options, callback);
+ return true;
+ }
+
+ // If we have no connection error
+ if(!self.s.pool.isConnected()) {
+ callback(MongoError.create(f("no connection available to server %s", self.name)));
+ return true;
+ }
+}
+
+function monitoringProcess(self) {
+ return function() {
+ // Pool was destroyed, do not continue the process
+ if(self.s.pool.isDestroyed()) return;
+ // Emit monitoring Process event
+ self.emit('monitoring', self);
+ // Perform ismaster call
+ // Query options
+ var queryOptions = { numberToSkip: 0, numberToReturn: -1, checkKeys: false, slaveOk: true };
+ // Create a query instance
+ var query = new Query(self.s.bson, 'admin.$cmd', {ismaster:true}, queryOptions);
+ // Get start time
+ var start = new Date().getTime();
+ // Execute the ismaster query
+ self.s.pool.write(query, function(err, result) {
+ // Set initial lastIsMasterMS
+ self.lastIsMasterMS = new Date().getTime() - start;
+ if(self.s.pool.isDestroyed()) return;
+ // Update the ismaster view if we have a result
+ if(result) {
+ self.ismaster = result.result;
+ }
+ // Re-schedule the monitoring process
+ self.monitoringProcessId = setTimeout(monitoringProcess(self), self.s.monitoringInterval);
+ });
+ }
+}
+
+var eventHandler = function(self, event) {
+ return function(err) {
+ // Log information of received information if in info mode
+ if(self.s.logger.isInfo()) {
+ var object = err instanceof MongoError ? JSON.stringify(err) : {}
+ self.s.logger.info(f('server %s fired event %s with message %s'
+ , self.name, event, object));
+ }
+
+ // Handle connect event
+ if(event == 'connect') {
+ // Issue an ismaster command at connect
+ // Query options
+ var queryOptions = { numberToSkip: 0, numberToReturn: -1, checkKeys: false, slaveOk: true };
+ // Create a query instance
+ var query = new Query(self.s.bson, 'admin.$cmd', {ismaster:true, client: self.clientInfo}, queryOptions);
+ // Get start time
+ var start = new Date().getTime();
+ // Execute the ismaster query
+ self.s.pool.write(query, function(err, result) {
+ // Set initial lastIsMasterMS
+ self.lastIsMasterMS = new Date().getTime() - start;
+ if(err) {
+ self.destroy();
+ if(self.listeners('error').length > 0) self.emit('error', err);
+ return;
+ }
+
+ // Ensure no error emitted after initial connect when reconnecting
+ self.initalConnect = false;
+ // Save the ismaster
+ self.ismaster = result.result;
+
+ // It's a proxy, change the type so
+ // the wireprotocol will send $readPreference
+ if(self.ismaster.msg == 'isdbgrid') {
+ self._type = 'mongos';
+ }
+ // Add the correct wire protocol handler
+ self.wireProtocolHandler = configureWireProtocolHandler(self, self.ismaster);
+ // Have we enabled server monitoring
+ if(self.s.monitoring) {
+ self.monitoringProcessId = setTimeout(monitoringProcess(self), self.s.monitoringInterval);
+ }
+
+ // Emit server description changed if something listening
+ sdam.emitServerDescriptionChanged(self, {
+ address: self.name, arbiters: [], hosts: [], passives: [], type: !self.s.inTopology ? 'Standalone' : sdam.getTopologyType(self)
+ });
+
+ // Emit topology description changed if something listening
+ sdam.emitTopologyDescriptionChanged(self, {
+ topologyType: 'Single', servers: [{address: self.name, arbiters: [], hosts: [], passives: [], type: 'Standalone'}]
+ });
+
+ // Log the ismaster if available
+ if(self.s.logger.isInfo()) {
+ self.s.logger.info(f('server %s connected with ismaster [%s]', self.name, JSON.stringify(self.ismaster)));
+ }
+
+ // Emit connect
+ self.emit('connect', self);
+ });
+ } else if(event == 'error' || event == 'parseError'
+ || event == 'close' || event == 'timeout' || event == 'reconnect'
+ || event == 'attemptReconnect' || event == 'reconnectFailed') {
+
+ // Remove server instance from accounting
+ if(serverAccounting && ['close', 'timeout', 'error', 'parseError', 'reconnectFailed'].indexOf(event) != -1) {
+ // Emit topology opening event if not in topology
+ if(!self.s.inTopology) {
+ self.emit('topologyOpening', { topologyId: self.id });
+ }
+
+ delete servers[self.id];
+ }
+
+ // Reconnect failed return error
+ if(event == 'reconnectFailed') {
+ self.emit('reconnectFailed', err);
+ // Emit error if any listeners
+ if(self.listeners('error').length > 0) {
+ self.emit('error', err);
+ }
+ // Terminate
+ return;
+ }
+
+ // On first connect fail
+ if(self.s.pool.state == 'disconnected' && self.initalConnect && ['close', 'timeout', 'error', 'parseError'].indexOf(event) != -1) {
+ self.initalConnect = false;
+ return self.emit('error', new MongoError(f('failed to connect to server [%s] on first connect', self.name)));
+ }
+
+ // Reconnect event, emit the server
+ if(event == 'reconnect') {
+ return self.emit(event, self);
+ }
+
+ // Emit the event
+ self.emit(event, err);
+ }
+ }
+}
+
+/**
+ * Initiate server connect
+ * @method
+ * @param {array} [options.auth=null] Array of auth options to apply on connect
+ */
+Server.prototype.connect = function(options) {
+ var self = this;
+ options = options || {};
+
+ // Set the connections
+ if(serverAccounting) servers[this.id] = this;
+
+ // Do not allow connect to be called on anything that's not disconnected
+ if(self.s.pool && !self.s.pool.isDisconnected() && !self.s.pool.isDestroyed()) {
+ throw MongoError.create(f('server instance in invalid state %s', self.s.pool.state));
+ }
+
+ // Create a pool
+ self.s.pool = new Pool(assign(self.s.options, options, {bson: this.s.bson}));
+
+ // Set up listeners
+ self.s.pool.on('close', eventHandler(self, 'close'));
+ self.s.pool.on('error', eventHandler(self, 'error'));
+ self.s.pool.on('timeout', eventHandler(self, 'timeout'));
+ self.s.pool.on('parseError', eventHandler(self, 'parseError'));
+ self.s.pool.on('connect', eventHandler(self, 'connect'));
+ self.s.pool.on('reconnect', eventHandler(self, 'reconnect'));
+ self.s.pool.on('reconnectFailed', eventHandler(self, 'reconnectFailed'));
+
+ // Emit topology opening event if not in topology
+ if(!self.s.inTopology) {
+ this.emit('topologyOpening', { topologyId: self.id });
+ }
+
+ // Emit opening server event
+ self.emit('serverOpening', {
+ topologyId: self.s.topologyId != -1 ? self.s.topologyId : self.id,
+ address: self.name
+ });
+
+ // Connect with optional auth settings
+ if(options.auth) {
+ self.s.pool.connect.apply(self.s.pool, options.auth);
+ } else {
+ self.s.pool.connect();
+ }
+}
+
+/**
+ * Get the server description
+ * @method
+ * @return {object}
+*/
+Server.prototype.getDescription = function() {
+ var ismaster = this.ismaster || {};
+ var description = {
+ type: sdam.getTopologyType(this),
+ address: this.name,
+ };
+
+ // Add fields if available
+ if(ismaster.hosts) description.hosts = ismaster.hosts;
+ if(ismaster.arbiters) description.arbiters = ismaster.arbiters;
+ if(ismaster.passives) description.passives = ismaster.passives;
+ if(ismaster.setName) description.setName = ismaster.setName;
+ return description;
+}
+
+/**
+ * Returns the last known ismaster document for this server
+ * @method
+ * @return {object}
+ */
+Server.prototype.lastIsMaster = function() {
+ return this.ismaster;
+}
+
+/**
+ * Unref all connections belong to this server
+ * @method
+ */
+Server.prototype.unref = function() {
+ this.s.pool.unref();
+}
+
+/**
+ * Figure out if the server is connected
+ * @method
+ * @return {boolean}
+ */
+Server.prototype.isConnected = function() {
+ if(!this.s.pool) return false;
+ return this.s.pool.isConnected();
+}
+
+/**
+ * Figure out if the server instance was destroyed by calling destroy
+ * @method
+ * @return {boolean}
+ */
+Server.prototype.isDestroyed = function() {
+ if(!this.s.pool) return false;
+ return this.s.pool.isDestroyed();
+}
+
+function basicWriteValidations(self, options) {
+ if(!self.s.pool) return MongoError.create('server instance is not connected');
+ if(self.s.pool.isDestroyed()) return MongoError.create('server instance pool was destroyed');
+}
+
+function basicReadValidations(self, options) {
+ var error = basicWriteValidations(self, options);
+ if(error) return error;
+
+ if(options.readPreference && !(options.readPreference instanceof ReadPreference)) {
+ throw new Error("readPreference must be an instance of ReadPreference");
+ }
+}
+
+/**
+ * Execute a command
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {object} cmd The command hash
+ * @param {ReadPreference} [options.readPreference] Specify read preference if command supports it
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {Boolean} [options.fullResult=false] Return the full envelope instead of just the result document.
+ * @param {opResultCallback} callback A callback function
+ */
+Server.prototype.command = function(ns, cmd, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+ var result = basicReadValidations(self, options);
+ if(result) return callback(result);
+
+ // Debug log
+ if(self.s.logger.isDebug()) self.s.logger.debug(f('executing command [%s] against %s', JSON.stringify({
+ ns: ns, cmd: cmd, options: debugOptions(debugFields, options)
+ }), self.name));
+
+ // If we are not connected or have a disconnectHandler specified
+ if(disconnectHandler(self, 'command', ns, cmd, options, callback)) return;
+
+ // Check if we have collation support
+ if(this.ismaster && this.ismaster.maxWireVersion < 5 && cmd.collation) {
+ return callback(new MongoError(f('server %s does not support collation', this.name)));
+ }
+
+ // Query options
+ var queryOptions = {
+ numberToSkip: 0,
+ numberToReturn: -1,
+ checkKeys: typeof options.checkKeys == 'boolean' ? options.checkKeys: false,
+ serializeFunctions: typeof options.serializeFunctions == 'boolean' ? options.serializeFunctions : false,
+ ignoreUndefined: typeof options.ignoreUndefined == 'boolean' ? options.ignoreUndefined : false
+ };
+
+ // Create a query instance
+ var query = new Query(self.s.bson, ns, cmd, queryOptions);
+ // Set slave OK of the query
+ query.slaveOk = options.readPreference ? options.readPreference.slaveOk() : false;
+
+ // Write options
+ var writeOptions = {
+ raw: typeof options.raw == 'boolean' ? options.raw : false,
+ promoteLongs: typeof options.promoteLongs == 'boolean' ? options.promoteLongs : true,
+ promoteValues: typeof options.promoteValues == 'boolean' ? options.promoteValues : true,
+ promoteBuffers: typeof options.promoteBuffers == 'boolean' ? options.promoteBuffers : false,
+ command: true,
+ monitoring: typeof options.monitoring == 'boolean' ? options.monitoring : false,
+ fullResult: typeof options.fullResult == 'boolean' ? options.fullResult : false,
+ requestId: query.requestId
+ };
+
+ // Write the operation to the pool
+ self.s.pool.write(query, writeOptions, callback);
+}
+
+/**
+ * Insert one or more documents
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {array} ops An array of documents to insert
+ * @param {boolean} [options.ordered=true] Execute in order or out of order
+ * @param {object} [options.writeConcern={}] Write concern for the operation
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+Server.prototype.insert = function(ns, ops, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+ var result = basicWriteValidations(self, options);
+ if(result) return callback(result);
+
+ // If we are not connected or have a disconnectHandler specified
+ if(disconnectHandler(self, 'insert', ns, ops, options, callback)) return;
+
+ // Setup the docs as an array
+ ops = Array.isArray(ops) ? ops : [ops];
+
+ // Execute write
+ return self.wireProtocolHandler.insert(self.s.pool, self.ismaster, ns, self.s.bson, ops, options, callback);
+}
+
+/**
+ * Perform one or more update operations
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {array} ops An array of updates
+ * @param {boolean} [options.ordered=true] Execute in order or out of order
+ * @param {object} [options.writeConcern={}] Write concern for the operation
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+Server.prototype.update = function(ns, ops, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+ var result = basicWriteValidations(self, options);
+ if(result) return callback(result);
+
+ // If we are not connected or have a disconnectHandler specified
+ if(disconnectHandler(self, 'update', ns, ops, options, callback)) return;
+
+ // Check if we have collation support
+ if(this.ismaster && this.ismaster.maxWireVersion < 5 && options.collation) {
+ return callback(new MongoError(f('server %s does not support collation', this.name)));
+ }
+
+ // Setup the docs as an array
+ ops = Array.isArray(ops) ? ops : [ops];
+ // Execute write
+ return self.wireProtocolHandler.update(self.s.pool, self.ismaster, ns, self.s.bson, ops, options, callback);
+}
+
+/**
+ * Perform one or more remove operations
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {array} ops An array of removes
+ * @param {boolean} [options.ordered=true] Execute in order or out of order
+ * @param {object} [options.writeConcern={}] Write concern for the operation
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+Server.prototype.remove = function(ns, ops, options, callback) {
+ var self = this;
+ if(typeof options == 'function') callback = options, options = {};
+ options = options || {};
+ var result = basicWriteValidations(self, options);
+ if(result) return callback(result);
+
+ // If we are not connected or have a disconnectHandler specified
+ if(disconnectHandler(self, 'remove', ns, ops, options, callback)) return;
+
+ // Check if we have collation support
+ if(this.ismaster && this.ismaster.maxWireVersion < 5 && options.collation) {
+ return callback(new MongoError(f('server %s does not support collation', this.name)));
+ }
+
+ // Setup the docs as an array
+ ops = Array.isArray(ops) ? ops : [ops];
+ // Execute write
+ return self.wireProtocolHandler.remove(self.s.pool, self.ismaster, ns, self.s.bson, ops, options, callback);
+}
+
+/**
+ * Get a new cursor
+ * @method
+ * @param {string} ns The MongoDB fully qualified namespace (ex: db1.collection1)
+ * @param {{object}|{Long}} cmd Can be either a command returning a cursor or a cursorId
+ * @param {object} [options.batchSize=0] Batchsize for the operation
+ * @param {array} [options.documents=[]] Initial documents list for cursor
+ * @param {ReadPreference} [options.readPreference] Specify read preference if command supports it
+ * @param {Boolean} [options.serializeFunctions=false] Specify if functions on an object should be serialized.
+ * @param {Boolean} [options.ignoreUndefined=false] Specify if the BSON serializer should ignore undefined fields.
+ * @param {opResultCallback} callback A callback function
+ */
+Server.prototype.cursor = function(ns, cmd, cursorOptions) {
+ var s = this.s;
+ cursorOptions = cursorOptions || {};
+ // Set up final cursor type
+ var FinalCursor = cursorOptions.cursorFactory || s.Cursor;
+ // Return the cursor
+ return new FinalCursor(s.bson, ns, cmd, cursorOptions, this, s.options);
+}
+
+/**
+ * Logout from a database
+ * @method
+ * @param {string} db The db we are logging out from
+ * @param {authResultCallback} callback A callback function
+ */
+Server.prototype.logout = function(dbName, callback) {
+ this.s.pool.logout(dbName, callback);
+}
+
+/**
+ * Authenticate using a specified mechanism
+ * @method
+ * @param {string} mechanism The Auth mechanism we are invoking
+ * @param {string} db The db we are invoking the mechanism against
+ * @param {...object} param Parameters for the specific mechanism
+ * @param {authResultCallback} callback A callback function
+ */
+Server.prototype.auth = function(mechanism, db) {
+ var self = this;
+
+ // If we have the default mechanism we pick mechanism based on the wire
+ // protocol max version. If it's >= 3 then scram-sha1 otherwise mongodb-cr
+ if(mechanism == 'default' && self.ismaster && self.ismaster.maxWireVersion >= 3) {
+ mechanism = 'scram-sha-1';
+ } else if(mechanism == 'default') {
+ mechanism = 'mongocr';
+ }
+
+ // Slice all the arguments off
+ var args = Array.prototype.slice.call(arguments, 0);
+ // Set the mechanism
+ args[0] = mechanism;
+ // Get the callback
+ var callback = args[args.length - 1];
+
+ // If we are not connected or have a disconnectHandler specified
+ if(disconnectHandler(self, 'auth', db, args, {}, callback)) {
+ return;
+ }
+
+ // Do not authenticate if we are an arbiter
+ if(this.lastIsMaster() && this.lastIsMaster().arbiterOnly) {
+ return callback(null, true);
+ }
+
+ // Apply the arguments to the pool
+ self.s.pool.auth.apply(self.s.pool, args);
+}
+
+/**
+ * Compare two server instances
+ * @method
+ * @param {Server} server Server to compare equality against
+ * @return {boolean}
+ */
+Server.prototype.equals = function(server) {
+ if(typeof server == 'string') return this.name == server;
+ if(server.name) return this.name == server.name;
+ return false;
+}
+
+/**
+ * All raw connections
+ * @method
+ * @return {Connection[]}
+ */
+Server.prototype.connections = function() {
+ return this.s.pool.allConnections();
+}
+
+/**
+ * Get server
+ * @method
+ * @return {Server}
+ */
+Server.prototype.getServer = function() {
+ return this;
+}
+
+/**
+ * Get connection
+ * @method
+ * @return {Connection}
+ */
+Server.prototype.getConnection = function() {
+ return this.s.pool.get();
+}
+
+var listeners = ['close', 'error', 'timeout', 'parseError', 'connect'];
+
+/**
+ * Destroy the server connection
+ * @method
+ * @param {boolean} [options.emitClose=false] Emit close event on destroy
+ * @param {boolean} [options.emitDestroy=false] Emit destroy event on destroy
+ * @param {boolean} [options.force=false] Force destroy the pool
+ */
+Server.prototype.destroy = function(options) {
+ options = options || {};
+ var self = this;
+
+ // Set the connections
+ if(serverAccounting) delete servers[this.id];
+
+ // Destroy the monitoring process if any
+ if(this.monitoringProcessId) {
+ clearTimeout(this.monitoringProcessId);
+ }
+
+ // Emit close event
+ if(options.emitClose) {
+ self.emit('close', self);
+ }
+
+ // Emit destroy event
+ if(options.emitDestroy) {
+ self.emit('destroy', self);
+ }
+
+ // Remove all listeners
+ listeners.forEach(function(event) {
+ self.s.pool.removeAllListeners(event);
+ });
+
+ // Emit server closed event if anything is listening
+ if(self.listeners('serverClosed').length > 0) self.emit('serverClosed', {
+ topologyId: self.s.topologyId != -1 ? self.s.topologyId : self.id, address: self.name
+ });
+
+ // Emit topology closed event if not in topology
+ if(self.listeners('topologyClosed').length > 0 && !self.s.inTopology) {
+ self.emit('topologyClosed', { topologyId: self.id });
+ }
+
+ if(self.s.logger.isDebug()) {
+ self.s.logger.debug(f('destroy called on server %s', self.name));
+ }
+
+ // Destroy the pool
+ this.s.pool.destroy(options.force);
+}
+
+/**
+ * A server connect event, used to verify that the connection is up and running
+ *
+ * @event Server#connect
+ * @type {Server}
+ */
+
+/**
+ * A server reconnect event, used to verify that the server topology has reconnected
+ *
+ * @event Server#reconnect
+ * @type {Server}
+ */
+
+/**
+ * A server opening SDAM monitoring event
+ *
+ * @event Server#serverOpening
+ * @type {object}
+ */
+
+/**
+ * A server closed SDAM monitoring event
+ *
+ * @event Server#serverClosed
+ * @type {object}
+ */
+
+/**
+ * A server description SDAM change monitoring event
+ *
+ * @event Server#serverDescriptionChanged
+ * @type {object}
+ */
+
+/**
+ * A topology open SDAM event
+ *
+ * @event Server#topologyOpening
+ * @type {object}
+ */
+
+/**
+ * A topology closed SDAM event
+ *
+ * @event Server#topologyClosed
+ * @type {object}
+ */
+
+/**
+ * A topology structure SDAM change event
+ *
+ * @event Server#topologyDescriptionChanged
+ * @type {object}
+ */
+
+/**
+ * Server reconnect failed
+ *
+ * @event Server#reconnectFailed
+ * @type {Error}
+ */
+
+/**
+ * Server connection pool closed
+ *
+ * @event Server#close
+ * @type {object}
+ */
+
+/**
+ * Server connection pool caused an error
+ *
+ * @event Server#error
+ * @type {Error}
+ */
+
+/**
+ * Server destroyed was called
+ *
+ * @event Server#destroy
+ * @type {Server}
+ */
+
+module.exports = Server;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/shared.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/shared.js
new file mode 100644
index 0000000..8506255
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/shared.js
@@ -0,0 +1,225 @@
+"use strict"
+
+var os = require('os'),
+ f = require('util').format;
+
+/**
+ * Emit event if it exists
+ * @method
+ */
+function emitSDAMEvent(self, event, description) {
+ if(self.listeners(event).length > 0) {
+ self.emit(event, description);
+ }
+}
+
+// Get package.json variable
+var driverVersion = require(__dirname + '/../../package.json').version;
+var nodejsversion = f('Node.js %s, %s', process.version, os.endianness());
+var type = os.type();
+var name = process.platform;
+var architecture = process.arch;
+var release = os.release();
+
+function createClientInfo(options) {
+ // Build default client information
+ var clientInfo = options.clientInfo ? clone(options.clientInfo) : {
+ driver: {
+ name: "nodejs-core",
+ version: driverVersion
+ },
+ os: {
+ type: type,
+ name: name,
+ architecture: architecture,
+ version: release
+ }
+ }
+
+ // Is platform specified
+ if(clientInfo.platform && clientInfo.platform.indexOf('mongodb-core') == -1) {
+ clientInfo.platform = f('%s, mongodb-core: %s', clientInfo.platform, driverVersion);
+ } else if(!clientInfo.platform){
+ clientInfo.platform = nodejsversion;
+ }
+
+ // Do we have an application specific string
+ if(options.appname) {
+ // Cut at 128 bytes
+ var buffer = new Buffer(options.appname);
+ // Return the truncated appname
+ var appname = buffer.length > 128 ? buffer.slice(0, 128).toString('utf8') : options.appname;
+ // Add to the clientInfo
+ clientInfo.application = { name: appname };
+ }
+
+ return clientInfo;
+}
+
+function clone(object) {
+ return JSON.parse(JSON.stringify(object));
+}
+
+var getPreviousDescription = function(self) {
+ if(!self.s.serverDescription) {
+ self.s.serverDescription = {
+ address: self.name,
+ arbiters: [], hosts: [], passives: [], type: 'Unknown'
+ }
+ }
+
+ return self.s.serverDescription;
+}
+
+var emitServerDescriptionChanged = function(self, description) {
+ if(self.listeners('serverDescriptionChanged').length > 0) {
+ // Emit the server description changed events
+ self.emit('serverDescriptionChanged', {
+ topologyId: self.s.topologyId != -1 ? self.s.topologyId : self.id, address: self.name,
+ previousDescription: getPreviousDescription(self),
+ newDescription: description
+ });
+
+ self.s.serverDescription = description;
+ }
+}
+
+var getPreviousTopologyDescription = function(self) {
+ if(!self.s.topologyDescription) {
+ self.s.topologyDescription = {
+ topologyType: 'Unknown',
+ servers: [{
+ address: self.name, arbiters: [], hosts: [], passives: [], type: 'Unknown'
+ }]
+ }
+ }
+
+ return self.s.topologyDescription;
+}
+
+var emitTopologyDescriptionChanged = function(self, description) {
+ if(self.listeners('topologyDescriptionChanged').length > 0) {
+ // Emit the server description changed events
+ self.emit('topologyDescriptionChanged', {
+ topologyId: self.s.topologyId != -1 ? self.s.topologyId : self.id, address: self.name,
+ previousDescription: getPreviousTopologyDescription(self),
+ newDescription: description
+ });
+
+ self.s.topologyDescription = description;
+ }
+}
+
+var changedIsMaster = function(self, currentIsmaster, ismaster) {
+ var currentType = getTopologyType(self, currentIsmaster);
+ var newType = getTopologyType(self, ismaster);
+ if(newType != currentType) return true;
+ return false;
+}
+
+var getTopologyType = function(self, ismaster) {
+ if(!ismaster) {
+ ismaster = self.ismaster;
+ }
+
+ if(!ismaster) return 'Unknown';
+ if(ismaster.ismaster && !ismaster.hosts) return 'Standalone';
+ if(ismaster.ismaster && ismaster.msg == 'isdbgrid') return 'Mongos';
+ if(ismaster.ismaster) return 'RSPrimary';
+ if(ismaster.secondary) return 'RSSecondary';
+ if(ismaster.arbiterOnly) return 'RSArbiter';
+ return 'Unknown';
+}
+
+var inquireServerState = function(self) {
+ return function(callback) {
+ if(self.s.state == 'destroyed') return;
+ // Record response time
+ var start = new Date().getTime();
+
+ // emitSDAMEvent
+ emitSDAMEvent(self, 'serverHeartbeatStarted', { connectionId: self.name });
+
+ // Attempt to execute ismaster command
+ self.command('admin.$cmd', { ismaster:true }, { monitoring:true }, function(err, r) {
+ if(!err) {
+ // Legacy event sender
+ self.emit('ismaster', r, self);
+
+ // Calculate latencyMS
+ var latencyMS = new Date().getTime() - start;
+
+ // Server heart beat event
+ emitSDAMEvent(self, 'serverHeartbeatSucceeded', { durationMS: latencyMS, reply: r.result, connectionId: self.name });
+
+ // Did the server change
+ if(changedIsMaster(self, self.s.ismaster, r.result)) {
+ // Emit server description changed if something listening
+ emitServerDescriptionChanged(self, {
+ address: self.name, arbiters: [], hosts: [], passives: [], type: !self.s.inTopology ? 'Standalone' : getTopologyType(self)
+ });
+ }
+
+ // Update ismaster view
+ self.s.ismaster = r.result;
+
+ // Set server response time
+ self.s.isMasterLatencyMS = latencyMS;
+ } else {
+ emitSDAMEvent(self, 'serverHeartbeatFailed', { durationMS: new Date().getTime() - start, failure: err, connectionId: self.name });
+ }
+
+ // Performing an ismaster monitoring callback operation
+ if(typeof callback == 'function') {
+ return callback(err, r);
+ }
+
+ // Perform another sweep
+ self.s.inquireServerStateTimeout = setTimeout(inquireServerState(self), self.s.haInterval);
+ });
+ };
+}
+
+// Object.assign method or polyfill
+var assign = Object.assign ? Object.assign : function assign(target, firstSource) {
+ if (target === undefined || target === null) {
+ throw new TypeError('Cannot convert first argument to object');
+ }
+
+ var to = Object(target);
+ for (var i = 1; i < arguments.length; i++) {
+ var nextSource = arguments[i];
+ if (nextSource === undefined || nextSource === null) {
+ continue;
+ }
+
+ var keysArray = Object.keys(Object(nextSource));
+ for (var nextIndex = 0, len = keysArray.length; nextIndex < len; nextIndex++) {
+ var nextKey = keysArray[nextIndex];
+ var desc = Object.getOwnPropertyDescriptor(nextSource, nextKey);
+ if (desc !== undefined && desc.enumerable) {
+ to[nextKey] = nextSource[nextKey];
+ }
+ }
+ }
+ return to;
+}
+
+//
+// Clone the options
+var cloneOptions = function(options) {
+ var opts = {};
+ for(var name in options) {
+ opts[name] = options[name];
+ }
+ return opts;
+}
+
+module.exports.inquireServerState = inquireServerState
+module.exports.getTopologyType = getTopologyType;
+module.exports.emitServerDescriptionChanged = emitServerDescriptionChanged;
+module.exports.emitTopologyDescriptionChanged = emitTopologyDescriptionChanged;
+module.exports.cloneOptions = cloneOptions;
+module.exports.assign = assign;
+module.exports.createClientInfo = createClientInfo;
+module.exports.clone = clone;
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/wireprotocol/2_4_support.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/wireprotocol/2_4_support.js
new file mode 100644
index 0000000..abfeb97
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/wireprotocol/2_4_support.js
@@ -0,0 +1,569 @@
+"use strict";
+
+var Insert = require('./commands').Insert
+ , Update = require('./commands').Update
+ , Remove = require('./commands').Remove
+ , Query = require('../connection/commands').Query
+ , copy = require('../connection/utils').copy
+ , KillCursor = require('../connection/commands').KillCursor
+ , GetMore = require('../connection/commands').GetMore
+ , ReadPreference = require('../topologies/read_preference')
+ , f = require('util').format
+ , CommandResult = require('../connection/command_result')
+ , MongoError = require('../error')
+ , Long = require('bson').Long
+ , getReadPreference = require('./shared').getReadPreference;
+
+// Write concern fields
+var writeConcernFields = ['w', 'wtimeout', 'j', 'fsync'];
+
+var WireProtocol = function() {}
+
+//
+// Needs to support legacy mass insert as well as ordered/unordered legacy
+// emulation
+//
+WireProtocol.prototype.insert = function(pool, ismaster, ns, bson, ops, options, callback) {
+ options = options || {};
+ // Default is ordered execution
+ var ordered = typeof options.ordered == 'boolean' ? options.ordered : true;
+ var legacy = typeof options.legacy == 'boolean' ? options.legacy : false;
+ ops = Array.isArray(ops) ? ops :[ops];
+
+ // If we have more than 1000 ops, fail
+ if(ops.length > 1000) return callback(new MongoError("exceeded maximum write batch size of 1000"));
+
+ // Write concern
+ var writeConcern = options.writeConcern || {w:1};
+
+ // We are unordered
+ if(!ordered || writeConcern.w == 0) {
+ return executeUnordered('insert', Insert, ismaster, ns, bson, pool, ops, options, callback);
+ }
+
+ return executeOrdered('insert', Insert, ismaster, ns, bson, pool, ops, options, callback);
+}
+
+WireProtocol.prototype.update = function(pool, ismaster, ns, bson, ops, options, callback) {
+ options = options || {};
+ // Default is ordered execution
+ var ordered = typeof options.ordered == 'boolean' ? options.ordered : true;
+ ops = Array.isArray(ops) ? ops : [ops];
+
+ // Write concern
+ var writeConcern = options.writeConcern || {w:1};
+
+ // We are unordered
+ if(!ordered || writeConcern.w == 0) {
+ return executeUnordered('update', Update, ismaster, ns, bson, pool, ops, options, callback);
+ }
+
+ return executeOrdered('update', Update, ismaster, ns, bson, pool, ops, options, callback);
+}
+
+WireProtocol.prototype.remove = function(pool, ismaster, ns, bson, ops, options, callback) {
+ options = options || {};
+ // Default is ordered execution
+ var ordered = typeof options.ordered == 'boolean' ? options.ordered : true;
+ ops = Array.isArray(ops) ? ops : [ops];
+
+ // Write concern
+ var writeConcern = options.writeConcern || {w:1};
+
+ // We are unordered
+ if(!ordered || writeConcern.w == 0) {
+ return executeUnordered('remove', Remove, ismaster, ns, bson, pool, ops, options, callback);
+ }
+
+ return executeOrdered('remove', Remove, ismaster, ns, bson, pool, ops, options, callback);
+}
+
+WireProtocol.prototype.killCursor = function(bson, ns, cursorId, pool, callback) {
+ // Create a kill cursor command
+ var killCursor = new KillCursor(bson, [cursorId]);
+ // Execute the kill cursor command
+ if(pool && pool.isConnected()) {
+ pool.write(killCursor, {
+ immediateRelease:true, noResponse: true
+ });
+ }
+
+ // Callback
+ if(typeof callback == 'function') callback(null, null);
+}
+
+WireProtocol.prototype.getMore = function(bson, ns, cursorState, batchSize, raw, connection, options, callback) {
+ // Create getMore command
+ var getMore = new GetMore(bson, ns, cursorState.cursorId, {numberToReturn: batchSize});
+
+ // Query callback
+ var queryCallback = function(err, result) {
+ if(err) return callback(err);
+ // Get the raw message
+ var r = result.message;
+
+ // If we have a timed out query or a cursor that was killed
+ if((r.responseFlags & (1 << 0)) != 0) {
+ return callback(new MongoError("cursor does not exist, was killed or timed out"), null);
+ }
+
+ // Ensure we have a Long value cursor id
+ var cursorId = typeof r.cursorId == 'number'
+ ? Long.fromNumber(r.cursorId)
+ : r.cursorId;
+
+ // Set all the values
+ cursorState.documents = r.documents;
+ cursorState.cursorId = cursorId;
+
+ // Return
+ callback(null, null, r.connection);
+ }
+
+ // If we have a raw query decorate the function
+ if(raw) {
+ queryCallback.raw = raw;
+ }
+
+ // Check if we need to promote longs
+ if(typeof cursorState.promoteLongs == 'boolean') {
+ queryCallback.promoteLongs = cursorState.promoteLongs;
+ }
+
+ if(typeof cursorState.promoteValues == 'boolean') {
+ queryCallback.promoteValues = cursorState.promoteValues;
+ }
+
+ if(typeof cursorState.promoteBuffers == 'boolean') {
+ queryCallback.promoteBuffers = cursorState.promoteBuffers;
+ }
+
+ // Write out the getMore command
+ connection.write(getMore, queryCallback);
+}
+
+WireProtocol.prototype.command = function(bson, ns, cmd, cursorState, topology, options) {
+ // Establish type of command
+ if(cmd.find) {
+ return setupClassicFind(bson, ns, cmd, cursorState, topology, options)
+ } else if(cursorState.cursorId != null) {
+ } else if(cmd) {
+ return setupCommand(bson, ns, cmd, cursorState, topology, options);
+ } else {
+ throw new MongoError(f("command %s does not return a cursor", JSON.stringify(cmd)));
+ }
+}
+
+//
+// Execute a find command
+var setupClassicFind = function(bson, ns, cmd, cursorState, topology, options) {
+ // Ensure we have at least some options
+ options = options || {};
+ // Get the readPreference
+ var readPreference = getReadPreference(cmd, options);
+ // Set the optional batchSize
+ cursorState.batchSize = cmd.batchSize || cursorState.batchSize;
+ var numberToReturn = 0;
+
+ // Unpack the limit and batchSize values
+ if(cursorState.limit == 0) {
+ numberToReturn = cursorState.batchSize;
+ } else if(cursorState.limit < 0 || cursorState.limit < cursorState.batchSize || (cursorState.limit > 0 && cursorState.batchSize == 0)) {
+ numberToReturn = cursorState.limit;
+ } else {
+ numberToReturn = cursorState.batchSize;
+ }
+
+ var numberToSkip = cursorState.skip || 0;
+ // Build actual find command
+ var findCmd = {};
+ // Using special modifier
+ var usesSpecialModifier = false;
+
+ // We have a Mongos topology, check if we need to add a readPreference
+ if(topology.type == 'mongos' && readPreference) {
+ findCmd['$readPreference'] = readPreference.toJSON();
+ usesSpecialModifier = true;
+ }
+
+ // Add special modifiers to the query
+ if(cmd.sort) findCmd['orderby'] = cmd.sort, usesSpecialModifier = true;
+ if(cmd.hint) findCmd['$hint'] = cmd.hint, usesSpecialModifier = true;
+ if(cmd.snapshot) findCmd['$snapshot'] = cmd.snapshot, usesSpecialModifier = true;
+ if(cmd.returnKey) findCmd['$returnKey'] = cmd.returnKey, usesSpecialModifier = true;
+ if(cmd.maxScan) findCmd['$maxScan'] = cmd.maxScan, usesSpecialModifier = true;
+ if(cmd.min) findCmd['$min'] = cmd.min, usesSpecialModifier = true;
+ if(cmd.max) findCmd['$max'] = cmd.max, usesSpecialModifier = true;
+ if(cmd.showDiskLoc) findCmd['$showDiskLoc'] = cmd.showDiskLoc, usesSpecialModifier = true;
+ if(cmd.comment) findCmd['$comment'] = cmd.comment, usesSpecialModifier = true;
+ if(cmd.maxTimeMS) findCmd['$maxTimeMS'] = cmd.maxTimeMS, usesSpecialModifier = true;
+
+ if(cmd.explain) {
+ // nToReturn must be 0 (match all) or negative (match N and close cursor)
+ // nToReturn > 0 will give explain results equivalent to limit(0)
+ numberToReturn = -Math.abs(cmd.limit || 0);
+ usesSpecialModifier = true;
+ findCmd['$explain'] = true;
+ }
+
+ // If we have a special modifier
+ if(usesSpecialModifier) {
+ findCmd['$query'] = cmd.query;
+ } else {
+ findCmd = cmd.query;
+ }
+
+ // Throw on majority readConcern passed in
+ if(cmd.readConcern && cmd.readConcern.level != 'local') {
+ throw new MongoError(f('server find command does not support a readConcern level of %s', cmd.readConcern.level));
+ }
+
+ // Remove readConcern, ensure no failing commands
+ if(cmd.readConcern) {
+ cmd = copy(cmd);
+ delete cmd['readConcern'];
+ }
+
+ // Set up the serialize and ignoreUndefined fields
+ var serializeFunctions = typeof options.serializeFunctions == 'boolean'
+ ? options.serializeFunctions : false;
+ var ignoreUndefined = typeof options.ignoreUndefined == 'boolean'
+ ? options.ignoreUndefined : false;
+
+ // Build Query object
+ var query = new Query(bson, ns, findCmd, {
+ numberToSkip: numberToSkip, numberToReturn: numberToReturn
+ , checkKeys: false, returnFieldSelector: cmd.fields
+ , serializeFunctions: serializeFunctions, ignoreUndefined: ignoreUndefined
+ });
+
+ // Set query flags
+ query.slaveOk = readPreference.slaveOk();
+
+ // Set up the option bits for wire protocol
+ if(typeof cmd.tailable == 'boolean') query.tailable = cmd.tailable;
+ if(typeof cmd.oplogReplay == 'boolean') query.oplogReplay = cmd.oplogReplay;
+ if(typeof cmd.noCursorTimeout == 'boolean') query.noCursorTimeout = cmd.noCursorTimeout;
+ if(typeof cmd.awaitData == 'boolean') query.awaitData = cmd.awaitData;
+ if(typeof cmd.partial == 'boolean') query.partial = cmd.partial;
+ // Return the query
+ return query;
+}
+
+//
+// Set up a command cursor
+var setupCommand = function(bson, ns, cmd, cursorState, topology, options) {
+ // Set empty options object
+ options = options || {}
+ // Get the readPreference
+ var readPreference = getReadPreference(cmd, options);
+ // Final query
+ var finalCmd = {};
+ for(var name in cmd) {
+ finalCmd[name] = cmd[name];
+ }
+
+ // Build command namespace
+ var parts = ns.split(/\./);
+
+ // Throw on majority readConcern passed in
+ if(cmd.readConcern && cmd.readConcern.level != 'local') {
+ throw new MongoError(f('server %s command does not support a readConcern level of %s', JSON.stringify(cmd), cmd.readConcern.level));
+ }
+
+ // Remove readConcern, ensure no failing commands
+ if(cmd.readConcern) delete cmd['readConcern'];
+
+ // Serialize functions
+ var serializeFunctions = typeof options.serializeFunctions == 'boolean'
+ ? options.serializeFunctions : false;
+
+ // Set up the serialize and ignoreUndefined fields
+ var ignoreUndefined = typeof options.ignoreUndefined == 'boolean'
+ ? options.ignoreUndefined : false;
+
+ // We have a Mongos topology, check if we need to add a readPreference
+ if(topology.type == 'mongos'
+ && readPreference
+ && readPreference.preference != 'primary') {
+ finalCmd = {
+ '$query': finalCmd,
+ '$readPreference': readPreference.toJSON()
+ };
+ }
+
+ // Build Query object
+ var query = new Query(bson, f('%s.$cmd', parts.shift()), finalCmd, {
+ numberToSkip: 0, numberToReturn: -1
+ , checkKeys: false, serializeFunctions: serializeFunctions
+ , ignoreUndefined: ignoreUndefined
+ });
+
+ // Set query flags
+ query.slaveOk = readPreference.slaveOk();
+
+ // Return the query
+ return query;
+}
+
+var hasWriteConcern = function(writeConcern) {
+ if(writeConcern.w
+ || writeConcern.wtimeout
+ || writeConcern.j == true
+ || writeConcern.fsync == true
+ || Object.keys(writeConcern).length == 0) {
+ return true;
+ }
+ return false;
+}
+
+var cloneWriteConcern = function(writeConcern) {
+ var wc = {};
+ if(writeConcern.w != null) wc.w = writeConcern.w;
+ if(writeConcern.wtimeout != null) wc.wtimeout = writeConcern.wtimeout;
+ if(writeConcern.j != null) wc.j = writeConcern.j;
+ if(writeConcern.fsync != null) wc.fsync = writeConcern.fsync;
+ return wc;
+}
+
+//
+// Aggregate up all the results
+//
+var aggregateWriteOperationResults = function(opType, ops, results, connection) {
+ var finalResult = { ok: 1, n: 0 }
+ if(opType == 'update') {
+ finalResult.nModified = 0;
+ }
+
+ // Map all the results coming back
+ for(var i = 0; i < results.length; i++) {
+ var result = results[i];
+ var op = ops[i];
+
+ if((result.upserted || (result.updatedExisting == false)) && finalResult.upserted == null) {
+ finalResult.upserted = [];
+ }
+
+ // Push the upserted document to the list of upserted values
+ if(result.upserted) {
+ finalResult.upserted.push({index: i, _id: result.upserted});
+ }
+
+ // We have an upsert where we passed in a _id
+ if(result.updatedExisting == false && result.n == 1 && result.upserted == null) {
+ finalResult.upserted.push({index: i, _id: op.q._id});
+ } else if(result.updatedExisting == true) {
+ finalResult.nModified += result.n;
+ }
+
+ // We have an insert command
+ if(result.ok == 1 && opType == 'insert' && result.err == null) {
+ finalResult.n = finalResult.n + 1;
+ }
+
+ // We have a command error
+ if(result != null && result.ok == 0 || result.err || result.errmsg) {
+ if(result.ok == 0) finalResult.ok = 0;
+ finalResult.code = result.code;
+ finalResult.errmsg = result.errmsg || result.err || result.errMsg;
+
+ // Check if we have a write error
+ if(result.code == 11000
+ || result.code == 11001
+ || result.code == 12582
+ || result.code == 16544
+ || result.code == 16538
+ || result.code == 16542
+ || result.code == 14
+ || result.code == 13511) {
+ if(finalResult.writeErrors == null) finalResult.writeErrors = [];
+ finalResult.writeErrors.push({
+ index: i
+ , code: result.code
+ , errmsg: result.errmsg || result.err || result.errMsg
+ });
+ } else {
+ finalResult.writeConcernError = {
+ code: result.code
+ , errmsg: result.errmsg || result.err || result.errMsg
+ }
+ }
+ } else if(typeof result.n == 'number') {
+ finalResult.n += result.n;
+ } else {
+ finalResult.n += 1;
+ }
+
+ // Result as expected
+ if(result != null && result.lastOp) finalResult.lastOp = result.lastOp;
+ }
+
+ // Return finalResult aggregated results
+ return new CommandResult(finalResult, connection);
+}
+
+//
+// Execute all inserts in an ordered manner
+//
+var executeOrdered = function(opType, command, ismaster, ns, bson, pool, ops, options, callback) {
+ var _ops = ops.slice(0);
+ // Collect all the getLastErrors
+ var getLastErrors = [];
+ // Execute an operation
+ var executeOp = function(list, _callback) {
+ // No more items in the list
+ if(list.length == 0) {
+ return process.nextTick(function() {
+ _callback(null, aggregateWriteOperationResults(opType, ops, getLastErrors, null));
+ });
+ }
+
+ // Get the first operation
+ var doc = list.shift();
+ // Create an insert command
+ var op = new command(Query.getRequestId(), ismaster, bson, ns, [doc], options);
+ // Write concern
+ var optionWriteConcern = options.writeConcern || {w:1};
+ // Final write concern
+ var writeConcern = cloneWriteConcern(optionWriteConcern);
+
+ // Get the db name
+ var db = ns.split('.').shift();
+
+ try {
+ // Add binary message to list of commands to execute
+ var commands = [op];
+
+ // Add getLastError command
+ var getLastErrorCmd = {getlasterror: 1};
+ // Merge all the fields
+ for(var i = 0; i < writeConcernFields.length; i++) {
+ if(writeConcern[writeConcernFields[i]] != null) {
+ getLastErrorCmd[writeConcernFields[i]] = writeConcern[writeConcernFields[i]];
+ }
+ }
+
+ // Create a getLastError command
+ var getLastErrorOp = new Query(bson, f("%s.$cmd", db), getLastErrorCmd, {numberToReturn: -1});
+ // Add getLastError command to list of ops to execute
+ commands.push(getLastErrorOp);
+
+ // getLastError callback
+ var getLastErrorCallback = function(err, result) {
+ if(err) return callback(err);
+ // Get the document
+ var doc = result.result;
+ // Save the getLastError document
+ getLastErrors.push(doc);
+
+ // If we have an error terminate
+ if(doc.ok == 0 || doc.err || doc.errmsg) {
+ return callback(null, aggregateWriteOperationResults(opType, ops, getLastErrors, result.connection));
+ }
+
+ // Execute the next op in the list
+ executeOp(list, callback);
+ }
+
+ // Write both commands out at the same time
+ pool.write(commands, getLastErrorCallback);
+ } catch(err) {
+ if(typeof err == 'string') err = new MongoError(err);
+ // We have a serialization error, rewrite as a write error to have same behavior as modern
+ // write commands
+ getLastErrors.push({ ok: 1, errmsg: err.message, code: 14 });
+ // Return due to an error
+ process.nextTick(function() {
+ _callback(null, aggregateWriteOperationResults(opType, ops, getLastErrors, null));
+ });
+ }
+ }
+
+ // Execute the operations
+ executeOp(_ops, callback);
+}
+
+var executeUnordered = function(opType, command, ismaster, ns, bson, pool, ops, options, callback) {
+ // Total operations to write
+ var totalOps = ops.length;
+ // Collect all the getLastErrors
+ var getLastErrors = [];
+ // Write concern
+ var optionWriteConcern = options.writeConcern || {w:1};
+ // Final write concern
+ var writeConcern = cloneWriteConcern(optionWriteConcern);
+ // Driver level error
+ var error;
+
+ // Execute all the operations
+ for(var i = 0; i < ops.length; i++) {
+ // Create an insert command
+ var op = new command(Query.getRequestId(), ismaster, bson, ns, [ops[i]], options);
+ // Get db name
+ var db = ns.split('.').shift();
+
+ try {
+ // Add binary message to list of commands to execute
+ var commands = [op];
+
+ // If write concern 0 don't fire getLastError
+ if(hasWriteConcern(writeConcern)) {
+ var getLastErrorCmd = {getlasterror: 1};
+ // Merge all the fields
+ for(var j = 0; j < writeConcernFields.length; j++) {
+ if(writeConcern[writeConcernFields[j]] != null)
+ getLastErrorCmd[writeConcernFields[j]] = writeConcern[writeConcernFields[j]];
+ }
+
+ // Create a getLastError command
+ var getLastErrorOp = new Query(bson, f("%s.$cmd", db), getLastErrorCmd, {numberToReturn: -1});
+ // Add getLastError command to list of ops to execute
+ commands.push(getLastErrorOp);
+
+ // Give the result from getLastError the right index
+ var callbackOp = function(_index) {
+ return function(err, result) {
+ if(err) error = err;
+ // Update the number of operations executed
+ totalOps = totalOps - 1;
+ // Save the getLastError document
+ if(!err) getLastErrors[_index] = result.result;
+ // Check if we are done
+ if(totalOps == 0) {
+ process.nextTick(function() {
+ if(error) return callback(error);
+ callback(null, aggregateWriteOperationResults(opType, ops, getLastErrors, result.connection));
+ });
+ }
+ }
+ }
+
+ // Write both commands out at the same time
+ pool.write(commands, callbackOp(i));
+ } else {
+ pool.write(commands, {immediateRelease:true, noResponse:true});
+ }
+ } catch(err) {
+ if(typeof err == 'string') err = new MongoError(err);
+ // Update the number of operations executed
+ totalOps = totalOps - 1;
+ // We have a serialization error, rewrite as a write error to have same behavior as modern
+ // write commands
+ getLastErrors[i] = { ok: 1, errmsg: err.message, code: 14 };
+ // Check if we are done
+ if(totalOps == 0) {
+ callback(null, aggregateWriteOperationResults(opType, ops, getLastErrors, null));
+ }
+ }
+ }
+
+ // Empty w:0 return
+ if(writeConcern
+ && writeConcern.w == 0 && callback) {
+ callback(null, new CommandResult({ok:1}, null));
+ }
+}
+
+module.exports = WireProtocol;
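The legacy (pre-2.6) path above emulates write concern by pairing every write with a `getlasterror` probe built from the four recognized write concern fields. A minimal standalone sketch of that merge step, assuming a hypothetical `buildGetLastError` helper (not part of the driver API):

```javascript
// Sketch of the getlasterror merge performed in executeOrdered/executeUnordered
// above: only the four recognized write concern fields are copied onto the
// probe command. buildGetLastError is a hypothetical helper for illustration.
var writeConcernFields = ['w', 'wtimeout', 'j', 'fsync'];

function buildGetLastError(writeConcern) {
  var getLastErrorCmd = { getlasterror: 1 };
  for (var i = 0; i < writeConcernFields.length; i++) {
    // Unrecognized or null/undefined fields are skipped
    if (writeConcern[writeConcernFields[i]] != null) {
      getLastErrorCmd[writeConcernFields[i]] = writeConcern[writeConcernFields[i]];
    }
  }
  return getLastErrorCmd;
}

// A w:'majority' concern with a timeout yields:
console.log(buildGetLastError({ w: 'majority', wtimeout: 5000 }));
// { getlasterror: 1, w: 'majority', wtimeout: 5000 }
```

The probe is then serialized as a `Query` against `<db>.$cmd` and written on the same connection as the preceding write, so the server reports the outcome of that write.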
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/wireprotocol/2_6_support.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/wireprotocol/2_6_support.js
new file mode 100644
index 0000000..8e83eb4
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/wireprotocol/2_6_support.js
@@ -0,0 +1,332 @@
+"use strict";
+
+var Insert = require('./commands').Insert
+ , Update = require('./commands').Update
+ , Remove = require('./commands').Remove
+ , Query = require('../connection/commands').Query
+ , copy = require('../connection/utils').copy
+ , KillCursor = require('../connection/commands').KillCursor
+ , GetMore = require('../connection/commands').GetMore
+ , ReadPreference = require('../topologies/read_preference')
+ , f = require('util').format
+ , CommandResult = require('../connection/command_result')
+ , MongoError = require('../error')
+ , Long = require('bson').Long
+ , getReadPreference = require('./shared').getReadPreference;
+
+var WireProtocol = function() {}
+
+//
+// Execute a write operation
+var executeWrite = function(pool, bson, type, opsField, ns, ops, options, callback) {
+ if(ops.length == 0) throw new MongoError("insert must contain at least one document");
+ if(typeof options == 'function') {
+ callback = options;
+ options = {};
+ }
+ options = options || {};
+
+ // Split the ns up to get db and collection
+ var p = ns.split(".");
+ var d = p.shift();
+ // Options
+ var ordered = typeof options.ordered == 'boolean' ? options.ordered : true;
+ var writeConcern = options.writeConcern;
+
+ // return skeleton
+ var writeCommand = {};
+ writeCommand[type] = p.join('.');
+ writeCommand[opsField] = ops;
+ writeCommand.ordered = ordered;
+
+ // Did we specify a write concern
+ if(writeConcern && Object.keys(writeConcern).length > 0) {
+ writeCommand.writeConcern = writeConcern;
+ }
+
+ // Do we have bypassDocumentValidation set, then enable it on the write command
+ if(typeof options.bypassDocumentValidation == 'boolean') {
+ writeCommand.bypassDocumentValidation = options.bypassDocumentValidation;
+ }
+
+ // Options object
+ var opts = { command: true };
+ var queryOptions = { checkKeys : false, numberToSkip: 0, numberToReturn: 1 };
+ if(type == 'insert') queryOptions.checkKeys = true;
+ // Ensure we support serialization of functions
+ if(options.serializeFunctions) queryOptions.serializeFunctions = options.serializeFunctions;
+ // Do not serialize the undefined fields
+ if(options.ignoreUndefined) queryOptions.ignoreUndefined = options.ignoreUndefined;
+
+ try {
+ // Create write command
+ var cmd = new Query(bson, f("%s.$cmd", d), writeCommand, queryOptions);
+ // Execute command
+ pool.write(cmd, opts, callback);
+ } catch(err) {
+ callback(err);
+ }
+}
+
+//
+// Needs to support legacy mass insert as well as ordered/unordered legacy
+// emulation
+//
+WireProtocol.prototype.insert = function(pool, ismaster, ns, bson, ops, options, callback) {
+ executeWrite(pool, bson, 'insert', 'documents', ns, ops, options, callback);
+}
+
+WireProtocol.prototype.update = function(pool, ismaster, ns, bson, ops, options, callback) {
+ executeWrite(pool, bson, 'update', 'updates', ns, ops, options, callback);
+}
+
+WireProtocol.prototype.remove = function(pool, ismaster, ns, bson, ops, options, callback) {
+ executeWrite(pool, bson, 'delete', 'deletes', ns, ops, options, callback);
+}
+
+WireProtocol.prototype.killCursor = function(bson, ns, cursorId, pool, callback) {
+ // Create a kill cursor command
+ var killCursor = new KillCursor(bson, [cursorId]);
+ // Execute the kill cursor command
+ if(pool && pool.isConnected()) {
+ pool.write(killCursor, {
+ immediateRelease:true, noResponse: true
+ });
+ }
+
+ // Callback
+ if(typeof callback == 'function') callback(null, null);
+}
+
+WireProtocol.prototype.getMore = function(bson, ns, cursorState, batchSize, raw, connection, options, callback) {
+ // Create getMore command
+ var getMore = new GetMore(bson, ns, cursorState.cursorId, {numberToReturn: batchSize});
+
+ // Query callback
+ var queryCallback = function(err, result) {
+ if(err) return callback(err);
+ // Get the raw message
+ var r = result.message;
+
+ // If we have a timed out query or a cursor that was killed
+ if((r.responseFlags & (1 << 0)) != 0) {
+ return callback(new MongoError("cursor does not exist, was killed or timed out"), null);
+ }
+
+ // Ensure we have a Long value cursor id
+ var cursorId = typeof r.cursorId == 'number'
+ ? Long.fromNumber(r.cursorId)
+ : r.cursorId;
+
+ // Set all the values
+ cursorState.documents = r.documents;
+ cursorState.cursorId = cursorId;
+
+ // Return
+ callback(null, null, r.connection);
+ }
+
+ // If we have a raw query decorate the function
+ if(raw) {
+ queryCallback.raw = raw;
+ }
+
+ // Check if we need to promote longs
+ if(typeof cursorState.promoteLongs == 'boolean') {
+ queryCallback.promoteLongs = cursorState.promoteLongs;
+ }
+
+ if(typeof cursorState.promoteValues == 'boolean') {
+ queryCallback.promoteValues = cursorState.promoteValues;
+ }
+
+ if(typeof cursorState.promoteBuffers == 'boolean') {
+ queryCallback.promoteBuffers = cursorState.promoteBuffers;
+ }
+
+ // Write out the getMore command
+ connection.write(getMore, queryCallback);
+}
+
+WireProtocol.prototype.command = function(bson, ns, cmd, cursorState, topology, options) {
+ // Establish type of command
+ if(cmd.find) {
+ return setupClassicFind(bson, ns, cmd, cursorState, topology, options)
+ } else if(cursorState.cursorId != null) {
+ } else if(cmd) {
+ return setupCommand(bson, ns, cmd, cursorState, topology, options);
+ } else {
+ throw new MongoError(f("command %s does not return a cursor", JSON.stringify(cmd)));
+ }
+}
+
+//
+// Execute a find command
+var setupClassicFind = function(bson, ns, cmd, cursorState, topology, options) {
+ // Ensure we have at least some options
+ options = options || {};
+ // Get the readPreference
+ var readPreference = getReadPreference(cmd, options);
+ // Set the optional batchSize
+ cursorState.batchSize = cmd.batchSize || cursorState.batchSize;
+ var numberToReturn = 0;
+
+ // Unpack the limit and batchSize values
+ if(cursorState.limit == 0) {
+ numberToReturn = cursorState.batchSize;
+ } else if(cursorState.limit < 0 || cursorState.limit < cursorState.batchSize || (cursorState.limit > 0 && cursorState.batchSize == 0)) {
+ numberToReturn = cursorState.limit;
+ } else {
+ numberToReturn = cursorState.batchSize;
+ }
+
+ var numberToSkip = cursorState.skip || 0;
+ // Build actual find command
+ var findCmd = {};
+ // Using special modifier
+ var usesSpecialModifier = false;
+
+ // We have a Mongos topology, check if we need to add a readPreference
+ if(topology.type == 'mongos' && readPreference) {
+ findCmd['$readPreference'] = readPreference.toJSON();
+ usesSpecialModifier = true;
+ }
+
+ // Add special modifiers to the query
+ if(cmd.sort) findCmd['orderby'] = cmd.sort, usesSpecialModifier = true;
+ if(cmd.hint) findCmd['$hint'] = cmd.hint, usesSpecialModifier = true;
+ if(cmd.snapshot) findCmd['$snapshot'] = cmd.snapshot, usesSpecialModifier = true;
+ if(cmd.returnKey) findCmd['$returnKey'] = cmd.returnKey, usesSpecialModifier = true;
+ if(cmd.maxScan) findCmd['$maxScan'] = cmd.maxScan, usesSpecialModifier = true;
+ if(cmd.min) findCmd['$min'] = cmd.min, usesSpecialModifier = true;
+ if(cmd.max) findCmd['$max'] = cmd.max, usesSpecialModifier = true;
+ if(cmd.showDiskLoc) findCmd['$showDiskLoc'] = cmd.showDiskLoc, usesSpecialModifier = true;
+ if(cmd.comment) findCmd['$comment'] = cmd.comment, usesSpecialModifier = true;
+ if(cmd.maxTimeMS) findCmd['$maxTimeMS'] = cmd.maxTimeMS, usesSpecialModifier = true;
+
+ if(cmd.explain) {
+ // nToReturn must be 0 (match all) or negative (match N and close cursor)
+ // nToReturn > 0 will give explain results equivalent to limit(0)
+ numberToReturn = -Math.abs(cmd.limit || 0);
+ usesSpecialModifier = true;
+ findCmd['$explain'] = true;
+ }
+
+ // If we have a special modifier
+ if(usesSpecialModifier) {
+ findCmd['$query'] = cmd.query;
+ } else {
+ findCmd = cmd.query;
+ }
+
+ // Throw on majority readConcern passed in
+ if(cmd.readConcern && cmd.readConcern.level != 'local') {
+ throw new MongoError(f('server find command does not support a readConcern level of %s', cmd.readConcern.level));
+ }
+
+ // Remove readConcern, ensure no failing commands
+ if(cmd.readConcern) {
+ cmd = copy(cmd);
+ delete cmd['readConcern'];
+ }
+
+ // Serialize functions
+ var serializeFunctions = typeof options.serializeFunctions == 'boolean'
+ ? options.serializeFunctions : false;
+ var ignoreUndefined = typeof options.ignoreUndefined == 'boolean'
+ ? options.ignoreUndefined : false;
+
+ // Build Query object
+ var query = new Query(bson, ns, findCmd, {
+ numberToSkip: numberToSkip, numberToReturn: numberToReturn
+ , checkKeys: false, returnFieldSelector: cmd.fields
+ , serializeFunctions: serializeFunctions
+ , ignoreUndefined: ignoreUndefined
+ });
+
+ // Set query flags
+ query.slaveOk = readPreference.slaveOk();
+
+ // Set up the option bits for wire protocol
+ if(typeof cmd.tailable == 'boolean') {
+ query.tailable = cmd.tailable;
+ }
+
+ if(typeof cmd.oplogReplay == 'boolean') {
+ query.oplogReplay = cmd.oplogReplay;
+ }
+
+ if(typeof cmd.noCursorTimeout == 'boolean') {
+ query.noCursorTimeout = cmd.noCursorTimeout;
+ }
+
+ if(typeof cmd.awaitData == 'boolean') {
+ query.awaitData = cmd.awaitData;
+ }
+
+ if(typeof cmd.partial == 'boolean') {
+ query.partial = cmd.partial;
+ }
+
+ // Return the query
+ return query;
+}
+
+//
+// Set up a command cursor
+var setupCommand = function(bson, ns, cmd, cursorState, topology, options) {
+ // Set empty options object
+ options = options || {}
+ // Get the readPreference
+ var readPreference = getReadPreference(cmd, options);
+
+ // Final query
+ var finalCmd = {};
+ for(var name in cmd) {
+ finalCmd[name] = cmd[name];
+ }
+
+ // Build command namespace
+ var parts = ns.split(/\./);
+
+ // Serialize functions
+ var serializeFunctions = typeof options.serializeFunctions == 'boolean'
+ ? options.serializeFunctions : false;
+
+ var ignoreUndefined = typeof options.ignoreUndefined == 'boolean'
+ ? options.ignoreUndefined : false;
+
+ // Throw on majority readConcern passed in
+ if(cmd.readConcern && cmd.readConcern.level != 'local') {
+ throw new MongoError(f('server %s command does not support a readConcern level of %s', JSON.stringify(cmd), cmd.readConcern.level));
+ }
+
+ // Remove readConcern, ensure no failing commands
+ if(cmd.readConcern) delete cmd['readConcern'];
+
+ // We have a Mongos topology, check if we need to add a readPreference
+ if(topology.type == 'mongos'
+ && readPreference
+ && readPreference.preference != 'primary') {
+ finalCmd = {
+ '$query': finalCmd,
+ '$readPreference': readPreference.toJSON()
+ };
+ }
+
+ // Build Query object
+ var query = new Query(bson, f('%s.$cmd', parts.shift()), finalCmd, {
+ numberToSkip: 0, numberToReturn: -1
+ , checkKeys: false, serializeFunctions: serializeFunctions
+ , ignoreUndefined: ignoreUndefined
+ });
+
+ // Set query flags
+ query.slaveOk = readPreference.slaveOk();
+
+ // Return the query
+ return query;
+}
+
+module.exports = WireProtocol;
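On the 2.6 write-command path, `executeWrite` above collapses the namespace, ops, and options into a single command document sent to `<db>.$cmd`. A sketch of that skeleton, with `buildWriteCommand` as a hypothetical name for the assembly step:

```javascript
// Sketch of the write command skeleton executeWrite above assembles for a
// namespace like 'db.coll'. buildWriteCommand is a hypothetical helper.
function buildWriteCommand(type, opsField, ns, ops, options) {
  options = options || {};
  var parts = ns.split('.');
  parts.shift(); // db name goes into the '<db>.$cmd' namespace instead

  var writeCommand = {};
  writeCommand[type] = parts.join('.');
  writeCommand[opsField] = ops;
  // Ordered execution is the default
  writeCommand.ordered = typeof options.ordered == 'boolean' ? options.ordered : true;
  // Only attach a writeConcern when one was actually specified
  if (options.writeConcern && Object.keys(options.writeConcern).length > 0) {
    writeCommand.writeConcern = options.writeConcern;
  }
  return writeCommand;
}

console.log(buildWriteCommand('insert', 'documents', 'db.coll', [{ a: 1 }], { writeConcern: { w: 1 } }));
// { insert: 'coll', documents: [ { a: 1 } ], ordered: true, writeConcern: { w: 1 } }
```

The same shape is reused for updates (`update`/`updates`) and removes (`delete`/`deletes`), which is why `insert`, `update`, and `remove` above all delegate to one `executeWrite`.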
diff --git a/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/wireprotocol/3_2_support.js b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/wireprotocol/3_2_support.js
new file mode 100644
index 0000000..b57048d
--- /dev/null
+++ b/common/src/main/webapp/usageguide/appserver/node_modules/mongoose/node_modules/mongodb/node_modules/mongodb-core/lib/wireprotocol/3_2_support.js
@@ -0,0 +1,541 @@
+"use strict";
+
+var Insert = require('./commands').Insert
+ , Update = require('./commands').Update
+ , Remove = require('./commands').Remove
+ , Query = require('../connection/commands').Query
+ , copy = require('../connection/utils').copy
+ , KillCursor = require('../connection/commands').KillCursor
+ , GetMore = require('../connection/commands').GetMore
+ , ReadPreference = require('../topologies/read_preference')
+ , f = require('util').format
+ , CommandResult = require('../connection/command_result')
+ , MongoError = require('../error')
+ , Long = require('bson').Long
+ , getReadPreference = require('./shared').getReadPreference;
+
+var WireProtocol = function(legacyWireProtocol) {
+ this.legacyWireProtocol = legacyWireProtocol;
+}
+
+//
+// Execute a write operation
+var executeWrite = function(pool, bson, type, opsField, ns, ops, options, callback) {
+ if(ops.length == 0) throw new MongoError("insert must contain at least one document");
+ if(typeof options == 'function') {
+ callback = options;
+ options = {};
+ }
+ options = options || {};
+
+ // Split the ns up to get db and collection
+ var p = ns.split(".");
+ var d = p.shift();
+ // Options
+ var ordered = typeof options.ordered == 'boolean' ? options.ordered : true;
+ var writeConcern = options.writeConcern;
+
+ // return skeleton
+ var writeCommand = {};
+ writeCommand[type] = p.join('.');
+ writeCommand[opsField] = ops;
+ writeCommand.ordered = ordered;
+
+ // Did we specify a write concern
+ if(writeConcern && Object.keys(writeConcern).length > 0) {
+ writeCommand.writeConcern = writeConcern;
+ }
+
+ // If we have collation passed in
+ if(options.collation) {
+ for(var i = 0; i < writeCommand[opsField].length; i++) {
+ if(!writeCommand[opsField][i].collation) {
+ writeCommand[opsField][i].collation = options.collation;
+ }
+ }
+ }
+
+ // Do we have bypassDocumentValidation set, then enable it on the write command
+ if(typeof options.bypassDocumentValidation == 'boolean') {
+ writeCommand.bypassDocumentValidation = options.bypassDocumentValidation;
+ }
+
+ // Options object
+ var opts = { command: true };
+ var queryOptions = { checkKeys : false, numberToSkip: 0, numberToReturn: 1 };
+ if(type == 'insert') queryOptions.checkKeys = true;
+
+ // Ensure we support serialization of functions
+ if(options.serializeFunctions) queryOptions.serializeFunctions = options.serializeFunctions;
+ // Do not serialize the undefined fields
+ if(options.ignoreUndefined) queryOptions.ignoreUndefined = options.ignoreUndefined;
+
+ try {
+ // Create write command
+ var cmd = new Query(bson, f("%s.$cmd", d), writeCommand, queryOptions);
+ // Execute command
+ pool.write(cmd, opts, callback);
+ } catch(err) {
+ callback(err);
+ }
+}
+
+//
+// Needs to support legacy mass insert as well as ordered/unordered legacy
+// emulation
+//
+WireProtocol.prototype.insert = function(pool, ismaster, ns, bson, ops, options, callback) {
+ executeWrite(pool, bson, 'insert', 'documents', ns, ops, options, callback);
+}
+
+WireProtocol.prototype.update = function(pool, ismaster, ns, bson, ops, options, callback) {
+ executeWrite(pool, bson, 'update', 'updates', ns, ops, options, callback);
+}
+
+WireProtocol.prototype.remove = function(pool, ismaster, ns, bson, ops, options, callback) {
+ executeWrite(pool, bson, 'delete', 'deletes', ns, ops, options, callback);
+}
+
+WireProtocol.prototype.killCursor = function(bson, ns, cursorId, pool, callback) {
+ // Build command namespace
+ var parts = ns.split(/\./);
+ // Command namespace
+ var commandns = f('%s.$cmd', parts.shift());
+ // Create getMore command
+ var killcursorCmd = {
+ killCursors: parts.join('.'),
+ cursors: [cursorId]
+ }
+
+ // Build Query object
+ var query = new Query(bson, commandns, killcursorCmd, {
+ numberToSkip: 0, numberToReturn: -1
+ , checkKeys: false, returnFieldSelector: null
+ });
+
+ // Set query flags
+ query.slaveOk = true;
+
+ // Kill cursor callback
+ var killCursorCallback = function(err, result) {
+ if(err) {
+ if(typeof callback != 'function') return;
+ return callback(err);
+ }
+
+ // Result
+ var r = result.message;
+ // If we have a timed out query or a cursor that was killed
+ if((r.responseFlags & (1 << 0)) != 0) {
+ if(typeof callback != 'function') return;
+ return callback(new MongoError("cursor killed or timed out"), null);
+ }
+
+ if(!Array.isArray(r.documents) || r.documents.length == 0) {
+ if(typeof callback != 'function') return;
+ return callback(new MongoError(f('invalid killCursors result returned for cursor id %s', cursorId)));
+ }
+
+ // Return the result
+ if(typeof callback == 'function') {
+ callback(null, r.documents[0]);
+ }
+ }
+
+ // Execute the kill cursor command, surfacing an error if the pool is unavailable
+ if(pool && pool.isConnected()) {
+ pool.write(query, {
+ command: true
+ }, killCursorCallback);
+ } else if(typeof callback == 'function') {
+ callback(new MongoError('connection pool is not connected, cannot kill cursor'));
+ }
+}
+
+WireProtocol.prototype.getMore = function(bson, ns, cursorState, batchSize, raw, connection, options, callback) {
+ options = options || {};
+ // Build command namespace
+ var parts = ns.split(/\./);
+ // Command namespace
+ var commandns = f('%s.$cmd', parts.shift());
+
+ // Check if we have a maxTimeMS set
+ var maxTimeMS = typeof cursorState.cmd.maxTimeMS == 'number' ? cursorState.cmd.maxTimeMS : 3000;
+
+ // Create getMore command
+ var getMoreCmd = {
+ getMore: cursorState.cursorId,
+ collection: parts.join('.'),
+ batchSize: Math.abs(batchSize)
+ }
+
+ if(cursorState.cmd.tailable
+ && typeof cursorState.cmd.maxAwaitTimeMS == 'number') {
+ getMoreCmd.maxTimeMS = cursorState.cmd.maxAwaitTimeMS;
+ }
+
+ // Build Query object
+ var query = new Query(bson, commandns, getMoreCmd, {
+ numberToSkip: 0, numberToReturn: -1
+ , checkKeys: false, returnFieldSelector: null
+ });
+
+ // Set query flags
+ query.slaveOk = true;
+
+ // Query callback
+ var queryCallback = function(err, result) {
+ if(err) return callback(err);
+ // Get the raw message
+ var r = result.message;
+
+ // If we have a timed out query or a cursor that was killed
+ if((r.responseFlags & (1 << 0)) != 0) {
+ return callback(new MongoError("cursor killed or timed out"), null);
+ }
+
+ // Raw, return all the extracted documents
+ if(raw) {
+ cursorState.documents = r.documents;
+ cursorState.cursorId = r.cursorId;
+ return callback(null, r.documents);
+ }
+
+ // We have an error detected
+ if(r.documents[0].ok == 0) {
+ return callback(MongoError.create(r.documents[0]));
+ }
+
+ // Ensure we have a Long valid cursor id
+ var cursorId = typeof r.documents[0].cursor.id == 'number'
+ ? Long.fromNumber(r.documents[0].cursor.id)
+ : r.documents[0].cursor.id;
+
+ // Set all the values
+ cursorState.documents = r.documents[0].cursor.nextBatch;
+ cursorState.cursorId = cursorId;
+
+ // Return the result
+ callback(null, r.documents[0], r.connection);
+ }
+
+ // Query options
+ var queryOptions = { command: true };
+
+ // If raw mode was requested, pass it through in the query options
+ if(raw) {
+ queryOptions.raw = raw;
+ }
+
+ // Add the result field needed
+ queryOptions.documentsReturnedIn = 'nextBatch';
+
+ // Check if we need to promote longs
+ if(typeof cursorState.promoteLongs == 'boolean') {
+ queryOptions.promoteLongs = cursorState.promoteLongs;
+ }
+
+ if(typeof cursorState.promoteValues == 'boolean') {
+ queryOptions.promoteValues = cursorState.promoteValues;
+ }
+
+ if(typeof cursorState.promoteBuffers == 'boolean') {
+ queryOptions.promoteBuffers = cursorState.promoteBuffers;
+ }
+
+ // Write out the getMore command
+ connection.write(query, queryOptions, queryCallback);
+}
+
+WireProtocol.prototype.command = function(bson, ns, cmd, cursorState, topology, options) {
+ // Establish type of command
+ if(cmd.find) {
+ // Create the find command
+ var query = executeFindCommand(bson, ns, cmd, cursorState, topology, options)
+ // Mark the cmd as non-virtual
+ cmd.virtual = false;
+ // Signal the documents are in the firstBatch value
+ query.documentsReturnedIn = 'firstBatch';
+ // Return the query
+ return query;
+ } else if(cursorState.cursorId != null) {
+ // An existing cursor id means a getMore will be issued; no command to build here
+ } else if(cmd) {
+ return setupCommand(bson, ns, cmd, cursorState, topology, options);
+ } else {
+ throw new MongoError(f("command %s does not return a cursor", JSON.stringify(cmd)));
+ }
+}
+
+// // Command
+// {
+// find: ns
+// , query: