mgo

package module
Published: Sep 23, 2020 License: BSD-2-Clause Imports: 27 Imported by: 0

README


The MongoDB driver for Go

This fork includes a number of improvements of our own, as well as several PRs from the original mgo repository that are still awaiting review upstream. Changes are mostly geared towards performance improvements and bug fixes, though a few new features have been added.

Further PRs (with tests) are welcome, but please maintain backwards compatibility.

Detailed documentation of the API is available at GoDoc.

A sub-package that implements the BSON specification is also included, and may be used independently of the driver.

Supported Versions

mgo is known to work well on (and has integration tests against) MongoDB v3.0, 3.2, 3.4 and 3.6.

MongoDB 4.0 is currently experimental - we would happily accept PRs to help improve support!

Changes

  • Fixes attempting to authenticate before every query (details)
  • Removes bulk update / delete batch size limitations (details)
  • Adds native support for time.Duration marshalling (details)
  • Reduces memory footprint / garbage collection pressure by reusing buffers (details, more)
  • Supports majority read concerns (details)
  • Improves connection handling (details)
  • Hides SASL warnings (details)
  • Supports partial indexes (details)
  • Fixes timezone handling (details)
  • Runs integration tests against MongoDB 3.2 & 3.4 releases (details, more, more)
  • Improves multi-document transaction performance (details, more, more)
  • Fixes cursor timeouts (details)
  • Supports index hints and timeouts for count queries (details)
  • Doesn't panic when handling indexed int64 fields (details)
  • Supports dropping all indexes on a collection (details)
  • Annotates log entries/profiler output with optional appName on 3.4+ (details)
  • Supports read-only views in 3.4+ (details)
  • Supports collations in 3.4+ (details, more)
  • Provides BSON constants for convenience/sanity (details)
  • Consistently unmarshals time.Time values as UTC (details)
  • Enforces best-practice coding guidelines (details)
  • Fixes GetBSON handling of structs with both fields and pointers (details)
  • Improves bson.Raw unmarshalling performance (details)
  • Minimises socket connection timeouts due to excessive locking (details)
  • Natively supports X509 client authentication (details)
  • Gracefully recovers from a temporarily unreachable server (details)
  • Uses JSON tags when no explicit BSON tags are set (details)
  • Supports $changeStream tailing on 3.6+ (details)
  • Fixes a deadlock in cluster synchronisation (details)
  • Implements maxIdleTimeout for pooled connections (details)
  • Improves connection pool waiting (details)
  • Fixes BSON encoding for $in and friends (details)
  • Adds BSON stream encoders (details)
  • Adds integer map key support in the BSON encoder (details)
  • Supports aggregation collations (details)
  • Supports encoding of inline struct references (details)
  • Improves the Windows test harness (details)
  • Improves type and nil handling in the BSON codec (details, more)
  • Separates network read/write timeouts (details)
  • Expands dial string configuration options (details)
  • Implements MongoTimestamp (details)
  • Supports setting writeConcern for findAndModify operations (details)
  • Adds ssl to the dial string options (details)

Thanks to

  • @aksentyev
  • @bachue
  • @bozaro
  • @BenLubar
  • @carldunham
  • @carter2000
  • @cedric-cordenier
  • @cezarsa
  • @DaytonG
  • @ddspog
  • @drichelson
  • @dvic
  • @eaglerayp
  • @feliixx
  • @fmpwizard
  • @gazoon
  • @gedge
  • @gnawux
  • @idy
  • @jameinel
  • @jefferickson
  • @johnlawsharrison
  • @KJTsanaktsidis
  • @larrycinnabar
  • @mapete94
  • @maxnoel
  • @mcspring
  • @Mei-Zhao
  • @peterdeka
  • @Reenjii
  • @roobre
  • @smoya
  • @steve-gray
  • @tbruyelle
  • @wgallagher

Documentation

Overview

Package mgo (pronounced as "mango") offers a rich MongoDB driver for Go.

Detailed documentation of the API is available at GoDoc:

https://godoc.org/github.com/globalsign/mgo

Usage of the driver revolves around the concept of sessions. To get started, obtain a session using the Dial function:

session, err := mgo.Dial(url)

This will establish one or more connections with the cluster of servers defined by the url parameter. From then on, the cluster may be queried with multiple consistency rules (see SetMode) and documents retrieved with statements such as:

c := session.DB(database).C(collection)
err := c.Find(query).One(&result)

New sessions are typically created by calling session.Copy on the initial session obtained at dial time. These new sessions will share the same cluster information and connection pool, and may be easily handed into other methods and functions for organizing logic. Every session created must have its Close method called at the end of its lifetime, so its resources may be put back in the pool or collected, depending on the case.
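A minimal sketch of that copy-per-unit-of-work pattern (the URL, database, collection, and field names below are placeholders, not part of the API):

```go
package main

import (
	"log"

	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

func main() {
	// Dial establishes the initial session and its connection pool.
	root, err := mgo.Dial("mongodb://localhost:27017")
	if err != nil {
		log.Fatal(err)
	}
	defer root.Close()

	// Each unit of work copies the root session; closing the copy
	// returns its resources to the shared pool.
	session := root.Copy()
	defer session.Close()

	c := session.DB("test").C("people")
	var result bson.M
	if err := c.Find(bson.M{"name": "Ada"}).One(&result); err != nil {
		log.Println(err) // mgo.ErrNotFound if no document matches
		return
	}
	log.Println(result)
}
```

This requires a reachable MongoDB server, so it is a usage sketch rather than a runnable unit test.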

There is a sub-package that provides support for BSON, which can be used by itself as well:

https://godoc.org/github.com/globalsign/mgo/bson

For more details, see the documentation for the types and methods.

Index

Examples

Constants

View Source
const (
	Default      = "default"
	UpdateLookup = "updateLookup"
)

Variables

View Source
var (
	// ErrNotFound error returned when a document could not be found
	ErrNotFound = errors.New("not found")
	// ErrCursor error returned when trying to retrieve documents from
	// an invalid cursor
	ErrCursor = errors.New("invalid cursor")
)

Functions

func IsDup

func IsDup(err error) bool

IsDup returns whether err informs of a duplicate key error because a primary key index or a secondary unique index already has an entry with the given value.
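For instance, duplicate-key failures on a unique index can be treated as a benign "already exists" outcome rather than a hard error (the function and field names here are illustrative):

```go
// insertUnique inserts a document with the given _id, reporting
// whether a new document was created. A duplicate-key error is
// swallowed rather than propagated.
func insertUnique(c *mgo.Collection, id string) (created bool, err error) {
	err = c.Insert(bson.M{"_id": id})
	if mgo.IsDup(err) {
		return false, nil // document already present
	}
	return err == nil, err
}
```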

func ResetStats

func ResetStats()

ResetStats resets the collected statistics to their initial state.

func SetDebug

func SetDebug(debug bool)

SetDebug enables the delivery of debug messages to the logger. It is only meaningful if a logger is also set.

func SetLogger

func SetLogger(logger logLogger)

SetLogger specifies the *log.Logger object to which log messages should be sent.

func SetStats

func SetStats(enabled bool)

SetStats enables database state monitoring.

Types

type BuildInfo

type BuildInfo struct {
	Version        string
	VersionArray   []int  `bson:"versionArray"` // On MongoDB 2.0+; assembled from Version otherwise
	GitVersion     string `bson:"gitVersion"`
	OpenSSLVersion string `bson:"OpenSSLVersion"`
	SysInfo        string `bson:"sysInfo"` // Deprecated and empty on MongoDB 3.2+.
	Bits           int
	Debug          bool
	MaxObjectSize  int `bson:"maxBsonObjectSize"`
}

The BuildInfo type encapsulates details about the running MongoDB server.

Note that the VersionArray field was introduced in MongoDB 2.0+, but it is internally assembled from the Version information for previous versions. In both cases, VersionArray is guaranteed to have at least 4 entries.

func (*BuildInfo) VersionAtLeast

func (bi *BuildInfo) VersionAtLeast(version ...int) bool

VersionAtLeast returns whether the BuildInfo version is greater than or equal to the provided version number. If more than one number is provided, numbers will be considered as major, minor, and so on.
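A sketch of gating version-dependent behaviour on the running server (assuming a live session):

```go
info, err := session.BuildInfo()
if err != nil {
	return err
}
// Collations and read-only views require MongoDB 3.4 or newer.
if info.VersionAtLeast(3, 4) {
	// Safe to use 3.4+ features here.
}
```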

type Bulk

type Bulk struct {
	// contains filtered or unexported fields
}

Bulk represents an operation that can be prepared with several orthogonal changes before being delivered to the server.

MongoDB servers older than version 2.6 do not have proper support for bulk operations, so the driver attempts to map its API as much as possible into the functionality that works. In particular, in those releases updates and removals are sent individually, and inserts are sent in bulk but have suboptimal error reporting compared to more recent versions of the server. See the documentation of BulkErrorCase for details on that.

Relevant documentation:

http://blog.mongodb.org/post/84922794768/mongodbs-new-bulk-api
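A minimal sketch of the bulk workflow, assuming c is a *mgo.Collection obtained from a live session (documents and field names are illustrative):

```go
bulk := c.Bulk()
bulk.Unordered() // let later operations proceed past failures

bulk.Insert(bson.M{"_id": 1}, bson.M{"_id": 2})
bulk.Update(bson.M{"_id": 1}, bson.M{"$set": bson.M{"n": 10}})
bulk.Remove(bson.M{"_id": 2})

result, err := bulk.Run()
if err != nil {
	return err
}
fmt.Printf("matched=%d modified=%d\n", result.Matched, result.Modified)
```

Nothing is sent to the server until Run is called.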

func (*Bulk) Insert

func (b *Bulk) Insert(docs ...interface{})

Insert queues up the provided documents for insertion.

func (*Bulk) Remove

func (b *Bulk) Remove(selectors ...interface{})

Remove queues up the provided selectors for removing matching documents. Each selector will remove only a single matching document.

func (*Bulk) RemoveAll

func (b *Bulk) RemoveAll(selectors ...interface{})

RemoveAll queues up the provided selectors for removing all matching documents. Each selector will remove all matching documents.

func (*Bulk) Run

func (b *Bulk) Run() (*BulkResult, error)

Run runs all the operations queued up.

If an error is reported on an unordered bulk operation, the error value may be an aggregation of all issues observed. As an exception to that, Insert operations running on MongoDB versions prior to 2.6 will report the last error only due to a limitation in the wire protocol.

func (*Bulk) Unordered

func (b *Bulk) Unordered()

Unordered puts the bulk operation in unordered mode.

In unordered mode the individual operations may be sent out of order, which means later operations may proceed even if earlier ones have failed.

func (*Bulk) Update

func (b *Bulk) Update(pairs ...interface{})

Update queues up the provided pairs of updating instructions. The first element of each pair selects the documents to update, and the second element defines how to update them. Each pair updates at most one matching document.

func (*Bulk) UpdateAll

func (b *Bulk) UpdateAll(pairs ...interface{})

UpdateAll queues up the provided pairs of updating instructions. The first element of each pair selects which documents must be updated, and the second element defines how to update it. Each pair updates all documents matching the selector.

func (*Bulk) Upsert

func (b *Bulk) Upsert(pairs ...interface{})

Upsert queues up the provided pairs of upserting instructions. The first element of each pair selects which documents must be updated, and the second element defines how to update it. Each pair matches exactly one document for updating at most.

type BulkError

type BulkError struct {
	// contains filtered or unexported fields
}

BulkError holds an error returned from running a Bulk operation. Individual errors may be obtained and inspected via the Cases method.

func (*BulkError) Cases

func (e *BulkError) Cases() []BulkErrorCase

Cases returns all individual errors found while attempting the requested changes.

See the documentation of BulkErrorCase for limitations in older MongoDB releases.

func (*BulkError) Error

func (e *BulkError) Error() string

type BulkErrorCase

type BulkErrorCase struct {
	Index int // Position of operation that failed, or -1 if unknown.
	Err   error
}

BulkErrorCase holds an individual error found while attempting a single change within a bulk operation, and the position in which it was enqueued.

MongoDB servers older than version 2.6 do not have proper support for bulk operations, so the driver attempts to map its API as much as possible into the functionality that works. In particular, only the last error is reported for bulk inserts and without any positional information, so the Index field is set to -1 in these cases.
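Individual failures can be inspected by type-asserting the error returned from Run, for example:

```go
if _, err := bulk.Run(); err != nil {
	if bulkErr, ok := err.(*mgo.BulkError); ok {
		for _, c := range bulkErr.Cases() {
			// Index is -1 for inserts on pre-2.6 servers, where
			// positional information is unavailable.
			log.Printf("operation %d failed: %v", c.Index, c.Err)
		}
	}
}
```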

type BulkResult

type BulkResult struct {
	Matched  int
	Modified int // Available only for MongoDB 2.6+
	// contains filtered or unexported fields
}

BulkResult holds the results for a bulk operation.

type Change

type Change struct {
	Update    interface{} // The update document
	Upsert    bool        // Whether to insert in case the document isn't found
	Remove    bool        // Whether to remove the document found rather than updating
	ReturnNew bool        // Should the modified document be returned rather than the old one
}

Change holds fields for running a findAndModify MongoDB command via the Query.Apply method.

type ChangeInfo

type ChangeInfo struct {
	// Updated reports the number of existing documents modified.
	// Due to server limitations, this reports the same value as the Matched field when
	// talking to MongoDB <= 2.4 and on Upsert and Apply (findAndModify) operations.
	Updated    int
	Removed    int         // Number of documents removed
	Matched    int         // Number of documents matched but not necessarily changed
	UpsertedId interface{} // Upserted _id field, when not explicitly provided
}

ChangeInfo holds details about the outcome of an update operation.
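For example, UpdateAll returns a *ChangeInfo describing the outcome (the collection and field names here are illustrative):

```go
info, err := c.UpdateAll(
	bson.M{"status": "pending"},
	bson.M{"$set": bson.M{"status": "done"}},
)
if err != nil {
	return err
}
log.Printf("matched %d documents, updated %d", info.Matched, info.Updated)
```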

type ChangeStream

type ChangeStream struct {
	// contains filtered or unexported fields
}

func (*ChangeStream) Close

func (changeStream *ChangeStream) Close() error

Close kills the server cursor used by the iterator, if any, and returns nil if no errors happened during iteration, or the actual error otherwise.

func (*ChangeStream) Err

func (changeStream *ChangeStream) Err() error

Err returns nil if no errors happened during iteration, or the actual error otherwise.

func (*ChangeStream) Next

func (changeStream *ChangeStream) Next(result interface{}) bool

Next retrieves the next document from the change stream, blocking if necessary. Next returns true if a document was successfully unmarshalled into result, and false if an error occurred. When Next returns false, the Err method should be called to check what error occurred during iteration. If there were no events available (ErrNotFound), the Err method returns nil so the user can retry the invocation.

For example:

pipeline := []bson.M{}

changeStream := collection.Watch(pipeline, ChangeStreamOptions{})
for changeStream.Next(&changeDoc) {
    fmt.Printf("Change: %v\n", changeDoc)
}

if err := changeStream.Close(); err != nil {
    return err
}

If the pipeline used removes the _id field from the result, Next will error because the _id field is needed to resume iteration when an error occurs.

func (*ChangeStream) ResumeToken

func (changeStream *ChangeStream) ResumeToken() *bson.Raw

ResumeToken returns a copy of the current resume token held by the change stream. This token should be treated as an opaque token that can be provided to instantiate a new change stream.

func (*ChangeStream) Timeout

func (changeStream *ChangeStream) Timeout() bool

Timeout returns true if the last call of Next returned false because of an iterator timeout.

type ChangeStreamOptions

type ChangeStreamOptions struct {

	// FullDocument controls the amount of data that the server will return when
	// returning a changes document.
	FullDocument FullDocument

	// ResumeAfter specifies the logical starting point for the new change stream.
	ResumeAfter *bson.Raw

	// MaxAwaitTimeMS specifies the maximum amount of time for the server to wait
	// on new documents to satisfy a change stream query.
	MaxAwaitTimeMS time.Duration

	// BatchSize specifies the number of documents to return per batch.
	BatchSize int
}

type Collation

type Collation struct {
	// Locale defines the collation locale.
	Locale string `bson:"locale"`

	// CaseFirst may be set to "upper" or "lower" to define whether
	// to have uppercase or lowercase items first. Default is "off".
	CaseFirst string `bson:"caseFirst,omitempty"`

	// Strength defines the priority of comparison properties, as follows:
	//
	//   1 (primary)    - Strongest level, denote difference between base characters
	//   2 (secondary)  - Accents in characters are considered secondary differences
	//   3 (tertiary)   - Upper and lower case differences in characters are
	//                    distinguished at the tertiary level
	//   4 (quaternary) - When punctuation is ignored at level 1-3, an additional
	//                    level can be used to distinguish words with and without
	//                    punctuation. Should only be used if ignoring punctuation
	//                    is required or when processing Japanese text.
	//   5 (identical)  - When all other levels are equal, the identical level is
	//                    used as a tiebreaker. The Unicode code point values of
	//                    the NFD form of each string are compared at this level,
	//                    just in case there is no difference at levels 1-4
	//
	// Strength defaults to 3.
	Strength int `bson:"strength,omitempty"`

	// Alternate controls whether spaces and punctuation are considered base characters.
	// May be set to "non-ignorable" (spaces and punctuation considered base characters)
	// or "shifted" (spaces and punctuation not considered base characters, and only
	// distinguished at strength > 3). Defaults to "non-ignorable".
	Alternate string `bson:"alternate,omitempty"`

	// MaxVariable defines which characters are affected when the value for Alternate is
	// "shifted". It may be set to "punct" to affect punctuation or spaces, or "space" to
	// affect only spaces.
	MaxVariable string `bson:"maxVariable,omitempty"`

	// Normalization defines whether text is normalized into Unicode NFD.
	Normalization bool `bson:"normalization,omitempty"`

	// CaseLevel defines whether to turn case sensitivity on at strength 1 or 2.
	CaseLevel bool `bson:"caseLevel,omitempty"`

	// NumericOrdering defines whether to order numbers based on numerical
	// order and not collation order.
	NumericOrdering bool `bson:"numericOrdering,omitempty"`

	// Backwards defines whether to have secondary differences considered in reverse order,
	// as done in the French language.
	Backwards bool `bson:"backwards,omitempty"`
}

Collation allows users to specify language-specific rules for string comparison, such as rules for lettercase and accent marks.
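As a sketch, a case-insensitive index and a matching query might be built like this (the Index.Collation field and Query.Collation method are assumed from this driver's API; the locale and field names are illustrative):

```go
coll := &mgo.Collation{Locale: "en", Strength: 2} // strength 2 ignores case

err := c.EnsureIndex(mgo.Index{
	Key:       []string{"name"},
	Collation: coll,
})
if err != nil {
	return err
}

// The query must use the same collation to take advantage of the index.
var out []bson.M
err = c.Find(bson.M{"name": "alice"}).Collation(coll).All(&out)
```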

type Collection

type Collection struct {
	Database *Database
	Name     string // "collection"
	FullName string // "db.collection"
}

Collection stores documents.

Relevant documentation:

https://docs.mongodb.com/manual/core/databases-and-collections/#collections

func (*Collection) Bulk

func (c *Collection) Bulk() *Bulk

Bulk returns a value to prepare the execution of a bulk operation.

func (*Collection) Count

func (c *Collection) Count() (n int, err error)

Count returns the total number of documents in the collection.

func (*Collection) Create

func (c *Collection) Create(info *CollectionInfo) error

Create explicitly creates the c collection with details of info. MongoDB creates collections automatically on use, so this method is only necessary when creating a collection with non-default characteristics, such as capped collections.

Relevant documentation:

http://www.mongodb.org/display/DOCS/createCollection+Command
http://www.mongodb.org/display/DOCS/Capped+Collections

func (*Collection) DropAllIndexes

func (c *Collection) DropAllIndexes() error

DropAllIndexes drops all the indexes from the c collection.

func (*Collection) DropCollection

func (c *Collection) DropCollection() error

DropCollection removes the entire collection including all of its documents.

func (*Collection) DropIndex

func (c *Collection) DropIndex(key ...string) error

DropIndex drops the index with the provided key from the c collection.

See EnsureIndex for details on the accepted key variants.

For example:

err1 := collection.DropIndex("firstField", "-secondField")
err2 := collection.DropIndex("customIndexName")

func (*Collection) DropIndexName

func (c *Collection) DropIndexName(name string) error

DropIndexName removes the index with the provided index name.

For example:

err := collection.DropIndexName("customIndexName")

func (*Collection) EnsureIndex

func (c *Collection) EnsureIndex(index Index) error

EnsureIndex ensures an index with the given key exists, creating it with the provided parameters if necessary. EnsureIndex does not modify a previously existing index with a matching key; the old index must be dropped first instead.

Once EnsureIndex returns successfully, following requests for the same index will not contact the server unless Collection.DropIndex is used to drop the same index, or Session.ResetIndexCache is called.

For example:

index := Index{
    Key: []string{"lastname", "firstname"},
    Unique: true,
    DropDups: true,
    Background: true, // See notes.
    Sparse: true,
}
err := collection.EnsureIndex(index)

The Key value determines which fields compose the index. The index ordering will be ascending by default. To obtain an index with a descending order, the field name should be prefixed by a dash (e.g. []string{"-time"}). It can also be optionally prefixed by an index kind, as in "$text:summary" or "$2d:-point". The key string format is:

[$<kind>:][-]<field name>

If the Unique field is true, the index must necessarily contain only a single document per Key. With DropDups set to true, documents with the same key as a previously indexed one will be dropped rather than an error returned.

If Background is true, other connections will be allowed to proceed using the collection without the index while it's being built. Note that the session executing EnsureIndex will be blocked for as long as it takes for the index to be built.

If Sparse is true, only documents containing the provided Key fields will be included in the index. When using a sparse index for sorting, only indexed documents will be returned.

If ExpireAfter is non-zero, the server will periodically scan the collection and remove documents containing an indexed time.Time field with a value older than ExpireAfter. See the documentation for details:

http://docs.mongodb.org/manual/tutorial/expire-data
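A TTL index sketch, assuming documents carry a createdAt time.Time field:

```go
// Documents whose createdAt value is older than 24 hours become
// eligible for removal by the server's background TTL monitor,
// so expiry is approximate rather than immediate.
err := c.EnsureIndex(mgo.Index{
	Key:         []string{"createdAt"},
	ExpireAfter: 24 * time.Hour,
})
```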

Other kinds of indexes are also supported through that API. Here is an example:

index := Index{
    Key: []string{"$2d:loc"},
    Bits: 26,
}
err := collection.EnsureIndex(index)

The example above requests the creation of a "2d" index for the "loc" field.

The 2D index bounds may be changed using the Min and Max attributes of the Index value. The default bound setting of (-180, 180) is suitable for latitude/longitude pairs.

The Bits parameter sets the precision of the 2D geohash values. If not provided, 26 bits are used, which is roughly equivalent to 1 foot of precision for the default (-180, 180) index bounds.

Relevant documentation:

http://www.mongodb.org/display/DOCS/Indexes
http://www.mongodb.org/display/DOCS/Indexing+Advice+and+FAQ
http://www.mongodb.org/display/DOCS/Indexing+as+a+Background+Operation
http://www.mongodb.org/display/DOCS/Geospatial+Indexing
http://www.mongodb.org/display/DOCS/Multikeys

func (*Collection) EnsureIndexKey

func (c *Collection) EnsureIndexKey(key ...string) error

EnsureIndexKey ensures an index with the given key exists, creating it if necessary.

This example:

err := collection.EnsureIndexKey("a", "b")

Is equivalent to:

err := collection.EnsureIndex(mgo.Index{Key: []string{"a", "b"}})

See the EnsureIndex method for more details.

func (*Collection) Find

func (c *Collection) Find(query interface{}) *Query

Find prepares a query using the provided document. The document may be a map or a struct value capable of being marshalled with bson. The map may be a generic one using interface{} for its key and/or values, such as bson.M, or it may be a properly typed map. Providing nil as the document is equivalent to providing an empty document such as bson.M{}.

Further details of the query may be tweaked using the resulting Query value, and then executed to retrieve results using methods such as One, For, Iter, or Tail.

In case the resulting document includes a field named $err or errmsg, which are standard ways for MongoDB to return query errors, the returned err will be set to a *QueryError value including the Err message and the Code. In those cases, the received document is still unmarshalled into the result argument so that any other custom values may be obtained if desired.

Relevant documentation:

http://www.mongodb.org/display/DOCS/Querying
http://www.mongodb.org/display/DOCS/Advanced+Queries

func (*Collection) FindId

func (c *Collection) FindId(id interface{}) *Query

FindId is a convenience helper equivalent to:

query := collection.Find(bson.M{"_id": id})

See the Find method for more details.

func (*Collection) Indexes

func (c *Collection) Indexes() (indexes []Index, err error)

Indexes returns a list of all indexes for the collection.

See the EnsureIndex method for more details on indexes.

func (*Collection) Insert

func (c *Collection) Insert(docs ...interface{}) error

Insert inserts one or more documents in the respective collection. In case the session is in safe mode (see the SetSafe method) and an error happens while inserting the provided documents, the returned error will be of type *LastError.

func (*Collection) NewIter

func (c *Collection) NewIter(session *Session, firstBatch []