Commit 5852edc1 authored by nick@chromium.org

Initial commit of sync engine code to browser/sync.

The code is not built on any platform yet.  That will arrive
as a subsequent checkin.

This is an implementation of the interface exposed earlier
through syncapi.h.  It is the client side of a sync
protocol that lets users sync their browser data
(currently, just bookmarks) with their Google Account.

Table of contents:

browser/sync/
  protocol - The protocol definition, and
             other definitions necessary to connect to
             the service.
  syncable/ - defines a data model for syncable objects,
              and provides a sqlite-based backing store
              for this model.
  engine/ - includes the core sync logic: committing
            changes to the server, downloading changes from
            the server, resolving conflicts, and other parts
            of the sync algorithm.
  engine/net - parts of the sync engine focused on the
               business of talking to the server.  Some of
               this binds a generic "server connection"
               interface to a concrete implementation
               provided by Chromium.
  notifier - the part of the syncer focused on the business
             of sending and receiving xmpp notifications.
             Notifications are used instead of polling to
             achieve very low latency change propagation.
  util - not necessarily sync specific utility code.  Much
         of this is scaffolding which should either be
         replaced by, or merged with, the utility code
         in base/.

BUG=none
TEST=this code includes its own suite of unit tests.

Review URL: http://codereview.chromium.org/194065

git-svn-id: svn://svn.chromium.org/chrome/trunk/src@25850 0039d316-1c4b-4281-b951-d872f2087c98
parent f6059e37
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
//
// The all status object watches various sync engine components and aggregates
// the status of all of them into one place.
//
#ifndef CHROME_BROWSER_SYNC_ENGINE_ALL_STATUS_H_
#define CHROME_BROWSER_SYNC_ENGINE_ALL_STATUS_H_
#include <map>
#include "base/atomicops.h"
#include "base/scoped_ptr.h"
#include "chrome/browser/sync/engine/syncer_status.h"
#include "chrome/browser/sync/util/event_sys.h"
#include "chrome/browser/sync/util/pthread_helpers.h"
namespace browser_sync {
class AuthWatcher;
class GaiaAuthenticator;
class ScopedStatusLockWithNotify;
class ServerConnectionManager;
class Syncer;
class SyncerThread;
class TalkMediator;
struct AllStatusEvent;
struct AuthWatcherEvent;
struct GaiaAuthEvent;
struct ServerConnectionEvent;
struct SyncerEvent;
struct TalkMediatorEvent;
class AllStatus {
friend class ScopedStatusLockWithNotify;
public:
typedef EventChannel<AllStatusEvent, PThreadMutex> Channel;
// Status of the entire sync process distilled into a single enum.
enum SyncStatus {
// Can't connect to server, but there are no pending changes in
// our local database.
OFFLINE,
// Can't connect to server, and there are pending changes in our
// local cache.
OFFLINE_UNSYNCED,
// Connected and syncing.
SYNCING,
// Connected, no pending changes.
READY,
// Internal sync error.
CONFLICT,
// Can't connect to server, and we haven't completed the initial
// sync yet. So there's nothing we can do but wait for the server.
OFFLINE_UNUSABLE,
// For array sizing, etc.
ICON_STATUS_COUNT
};
struct Status {
SyncStatus icon;
int unsynced_count;
int conflicting_count;
bool syncing;
bool authenticated; // Successfully authenticated via gaia
// True if we have received at least one good reply from the server.
bool server_up;
bool server_reachable;
// True after a client has done a first sync.
bool initial_sync_ended;
// True if any syncer is stuck.
bool syncer_stuck;
// True if any syncer is stopped because of server issues.
bool server_broken;
// True only if the notification listener has subscribed.
bool notifications_enabled;
// Notification counters updated by the actions in syncapi.
int notifications_received;
int notifications_sent;
// The max number of consecutive errors from any component.
int max_consecutive_errors;
bool disk_full;
// Contains the current transfer item's meta handle.
int64 current_item_meta_handle;
// The next two values will be equal if all updates have been received.
// total updates available.
int64 updates_available;
// total updates received.
int64 updates_received;
};
// Maximum interval for exponential backoff.
static const int kMaxBackoffSeconds = 60 * 60 * 4; // 4 hours.
AllStatus();
~AllStatus();
void WatchConnectionManager(ServerConnectionManager* conn_mgr);
void HandleServerConnectionEvent(const ServerConnectionEvent& event);
// Both WatchAuthenticator/HandleGaiaAuthEvent and WatchAuthWatcher/
// HandleAuthWatcherEvent achieve the same goal; use only one of the two
// pairs. (The AuthWatcher is watched under Windows; the GaiaAuthenticator
// is watched under Mac/Linux.)
void WatchAuthenticator(GaiaAuthenticator* gaia);
void HandleGaiaAuthEvent(const GaiaAuthEvent& event);
void WatchAuthWatcher(AuthWatcher* auth_watcher);
void HandleAuthWatcherEvent(const AuthWatcherEvent& event);
void WatchSyncerThread(SyncerThread* syncer_thread);
void HandleSyncerEvent(const SyncerEvent& event);
void WatchTalkMediator(
const browser_sync::TalkMediator* talk_mediator);
void HandleTalkMediatorEvent(
const browser_sync::TalkMediatorEvent& event);
// Returns a string description of the SyncStatus (currently just the ASCII
// version of the enum). Will LOG(FATAL) if the status is out of range.
static const char* GetSyncStatusString(SyncStatus status);
Channel* channel() const { return channel_; }
Status status() const;
// DDoS avoidance function. The argument and return value are in seconds.
static int GetRecommendedDelaySeconds(int base_delay_seconds);
// This uses AllStatus' max_consecutive_errors as the error count.
int GetRecommendedDelay(int base_delay) const;
protected:
typedef PThreadScopedLock<PThreadMutex> MutexLock;
typedef std::map<Syncer*, EventListenerHookup*> Syncers;
// Examines syncer to calculate syncing and the unsynced count,
// and returns a Status with new values.
Status CalcSyncing() const;
Status CalcSyncing(const SyncerEvent& event) const;
Status CreateBlankStatus() const;
// Examines status to see what has changed, updates old_status in place.
int CalcStatusChanges(Status* old_status);
Status status_;
Channel* const channel_;
scoped_ptr<EventListenerHookup> conn_mgr_hookup_;
scoped_ptr<EventListenerHookup> gaia_hookup_;
scoped_ptr<EventListenerHookup> authwatcher_hookup_;
scoped_ptr<EventListenerHookup> syncer_thread_hookup_;
scoped_ptr<EventListenerHookup> diskfull_hookup_;
scoped_ptr<EventListenerHookup> talk_mediator_hookup_;
mutable PThreadMutex mutex_; // Protects all data members.
};
struct AllStatusEvent {
enum { // A bit mask of which members have changed.
SHUTDOWN = 0x0000,
ICON = 0x0001,
UNSYNCED_COUNT = 0x0002,
AUTHENTICATED = 0x0004,
SYNCING = 0x0008,
SERVER_UP = 0x0010,
NOTIFICATIONS_ENABLED = 0x0020,
INITIAL_SYNC_ENDED = 0x0080,
SERVER_REACHABLE = 0x0100,
DISK_FULL = 0x0200,
OVER_QUOTA = 0x0400,
NOTIFICATIONS_RECEIVED = 0x0800,
NOTIFICATIONS_SENT = 0x1000,
TRASH_WARNING = 0x40000,
};
int what_changed;
AllStatus::Status status;
typedef AllStatusEvent EventType;
static inline bool IsChannelShutdownEvent(const AllStatusEvent& e) {
return SHUTDOWN == e.what_changed;
}
};
enum StatusNotifyPlan {
NOTIFY_IF_STATUS_CHANGED,
// A small optimization: don't do the big compare when we know
// nothing has changed.
DONT_NOTIFY,
};
class ScopedStatusLockWithNotify {
public:
explicit ScopedStatusLockWithNotify(AllStatus* allstatus);
~ScopedStatusLockWithNotify();
// Defaults to true, but can be explicitly reset so we don't have to
// do the big compare in the destructor. Small optimization.
inline void set_notify_plan(StatusNotifyPlan plan) { plan_ = plan; }
void NotifyOverQuota();
protected:
AllStatusEvent event_;
AllStatus* const allstatus_;
StatusNotifyPlan plan_;
};
} // namespace browser_sync
#endif // CHROME_BROWSER_SYNC_ENGINE_ALL_STATUS_H_
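
Illustrative sketch (not part of this checkin): a consumer of AllStatusEvent
typically tests bits in |what_changed| before reading the corresponding fields
of |status|. The hookup of the handler to AllStatus::channel() is omitted, and
the handler name is hypothetical.

// Assumes base/logging.h and all_status.h are included.
void OnAllStatusEvent(const browser_sync::AllStatusEvent& event) {
  using browser_sync::AllStatus;
  using browser_sync::AllStatusEvent;
  if (event.what_changed & AllStatusEvent::ICON) {
    LOG(INFO) << "Sync status: "
              << AllStatus::GetSyncStatusString(event.status.icon);
  }
  if (event.what_changed & AllStatusEvent::UNSYNCED_COUNT) {
    LOG(INFO) << event.status.unsynced_count << " item(s) awaiting commit.";
  }
}
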
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "chrome/browser/sync/engine/all_status.h"
#include "testing/gtest/include/gtest/gtest.h"
namespace browser_sync {
TEST(AllStatus, GetRecommendedDelay) {
EXPECT_LE(0, AllStatus::GetRecommendedDelaySeconds(0));
EXPECT_LE(1, AllStatus::GetRecommendedDelaySeconds(1));
EXPECT_LE(50, AllStatus::GetRecommendedDelaySeconds(50));
EXPECT_LE(10, AllStatus::GetRecommendedDelaySeconds(10));
EXPECT_EQ(AllStatus::kMaxBackoffSeconds,
AllStatus::GetRecommendedDelaySeconds(
AllStatus::kMaxBackoffSeconds));
EXPECT_EQ(AllStatus::kMaxBackoffSeconds,
AllStatus::GetRecommendedDelaySeconds(
AllStatus::kMaxBackoffSeconds+1));
}
} // namespace browser_sync
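
The body of AllStatus::GetRecommendedDelaySeconds is not shown in this
excerpt, so the following is only a sketch of a backoff policy consistent with
the expectations above (the result is at least the input, and is capped at
kMaxBackoffSeconds), assuming doubling plus random jitter; the checked-in
implementation may differ.

#include <algorithm>
#include <cstdlib>
#include "chrome/browser/sync/engine/all_status.h"

// Sketch only; not the checked-in implementation.
int RecommendedDelaySketch(int base_delay_seconds) {
  using browser_sync::AllStatus;
  if (base_delay_seconds >= AllStatus::kMaxBackoffSeconds)
    return AllStatus::kMaxBackoffSeconds;
  // Double the delay and add jitter so clients back off at different rates
  // after a server outage (the "DDoS avoidance" mentioned in the header).
  int delay = std::max(1, base_delay_seconds * 2);
  delay += rand() % (base_delay_seconds + 1);
  return std::min(delay, AllStatus::kMaxBackoffSeconds);
}
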
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "chrome/browser/sync/engine/apply_updates_command.h"
#include "chrome/browser/sync/engine/syncer_session.h"
#include "chrome/browser/sync/engine/update_applicator.h"
#include "chrome/browser/sync/syncable/directory_manager.h"
#include "chrome/browser/sync/syncable/syncable.h"
#include "chrome/browser/sync/util/sync_types.h"
namespace browser_sync {
ApplyUpdatesCommand::ApplyUpdatesCommand() {}
ApplyUpdatesCommand::~ApplyUpdatesCommand() {}
void ApplyUpdatesCommand::ModelChangingExecuteImpl(SyncerSession *session) {
syncable::ScopedDirLookup dir(session->dirman(), session->account_name());
if (!dir.good()) {
LOG(ERROR) << "Scoped dir lookup failed!";
return;
}
syncable::WriteTransaction trans(dir, syncable::SYNCER, __FILE__, __LINE__);
syncable::Directory::UnappliedUpdateMetaHandles handles;
dir->GetUnappliedUpdateMetaHandles(&trans, &handles);
UpdateApplicator applicator(session, handles.begin(), handles.end());
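// Attempt each unapplied update in turn; the loop ends once the applicator
// reports it has no more work it can do this pass.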
while (applicator.AttemptOneApplication(&trans)) {
}
applicator.SaveProgressIntoSessionState();
}
} // namespace browser_sync
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef CHROME_BROWSER_SYNC_ENGINE_APPLY_UPDATES_COMMAND_H_
#define CHROME_BROWSER_SYNC_ENGINE_APPLY_UPDATES_COMMAND_H_
#include "chrome/browser/sync/engine/model_changing_syncer_command.h"
#include "chrome/browser/sync/engine/syncer_session.h"
#include "chrome/browser/sync/util/sync_types.h"
namespace syncable {
class WriteTransaction;
class MutableEntry;
class Id;
}
namespace browser_sync {
class ApplyUpdatesCommand : public ModelChangingSyncerCommand {
public:
ApplyUpdatesCommand();
virtual ~ApplyUpdatesCommand();
virtual void ModelChangingExecuteImpl(SyncerSession *session);
private:
DISALLOW_COPY_AND_ASSIGN(ApplyUpdatesCommand);
};
} // namespace browser_sync
#endif // CHROME_BROWSER_SYNC_ENGINE_APPLY_UPDATES_COMMAND_H_
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "chrome/browser/sync/engine/apply_updates_command.h"
#include "chrome/browser/sync/engine/sync_cycle_state.h"
#include "chrome/browser/sync/engine/sync_process_state.h"
#include "chrome/browser/sync/engine/syncer_session.h"
#include "chrome/browser/sync/syncable/directory_manager.h"
#include "chrome/browser/sync/syncable/syncable.h"
#include "chrome/browser/sync/syncable/syncable_id.h"
#include "chrome/browser/sync/util/character_set_converters.h"
#include "chrome/test/sync/engine/test_directory_setter_upper.h"
#include "testing/gtest/include/gtest/gtest.h"
using std::string;
using syncable::ScopedDirLookup;
using syncable::WriteTransaction;
using syncable::ReadTransaction;
using syncable::MutableEntry;
using syncable::Entry;
using syncable::Id;
using syncable::UNITTEST;
namespace browser_sync {
// A test fixture for tests exercising ApplyUpdatesCommand.
class ApplyUpdatesCommandTest : public testing::Test {
protected:
ApplyUpdatesCommandTest() : next_revision_(1) {}
virtual ~ApplyUpdatesCommandTest() {}
virtual void SetUp() {
syncdb_.SetUp();
}
virtual void TearDown() {
syncdb_.TearDown();
}
protected:
// Create a new unapplied update.
void CreateUnappliedNewItemWithParent(const string& item_id,
const string& parent_id) {
ScopedDirLookup dir(syncdb_.manager(), syncdb_.name());
ASSERT_TRUE(dir.good());
WriteTransaction trans(dir, UNITTEST, __FILE__, __LINE__);
MutableEntry entry(&trans, syncable::CREATE_NEW_UPDATE_ITEM,
Id::CreateFromServerId(item_id));
ASSERT_TRUE(entry.good());
PathString name;
AppendUTF8ToPathString(item_id, &name);
entry.Put(syncable::SERVER_VERSION, next_revision_++);
entry.Put(syncable::IS_UNAPPLIED_UPDATE, true);
entry.Put(syncable::SERVER_NAME, name);
entry.Put(syncable::SERVER_NON_UNIQUE_NAME, name);
entry.Put(syncable::SERVER_PARENT_ID, Id::CreateFromServerId(parent_id));
entry.Put(syncable::SERVER_IS_DIR, true);
}
TestDirectorySetterUpper syncdb_;
ApplyUpdatesCommand apply_updates_command_;
private:
int64 next_revision_;
DISALLOW_COPY_AND_ASSIGN(ApplyUpdatesCommandTest);
};
TEST_F(ApplyUpdatesCommandTest, Simple) {
string root_server_id = syncable::kNullId.GetServerId();
CreateUnappliedNewItemWithParent("parent", root_server_id);
CreateUnappliedNewItemWithParent("child", "parent");
SyncCycleState cycle_state;
SyncProcessState process_state(syncdb_.manager(), syncdb_.name(),
NULL, NULL, NULL, NULL);
SyncerSession session(&cycle_state, &process_state);
apply_updates_command_.ModelChangingExecuteImpl(&session);
EXPECT_EQ(2, cycle_state.AppliedUpdatesSize())
<< "All updates should have been attempted";
EXPECT_EQ(0, process_state.ConflictingItemsSize())
<< "Simple update shouldn't result in conflicts";
EXPECT_EQ(0, process_state.BlockedItemsSize())
<< "Blocked items shouldn't be possible under any circumstances";
EXPECT_EQ(2, cycle_state.SuccessfullyAppliedUpdateCount())
<< "All items should have been successfully applied";
}
TEST_F(ApplyUpdatesCommandTest, UpdateWithChildrenBeforeParents) {
// Set up a bunch of updates that are difficult to apply in the order
// they're received, due to dependencies on other unseen items.
string root_server_id = syncable::kNullId.GetServerId();
CreateUnappliedNewItemWithParent("a_child_created_first", "parent");
CreateUnappliedNewItemWithParent("x_child_created_first", "parent");
CreateUnappliedNewItemWithParent("parent", root_server_id);
CreateUnappliedNewItemWithParent("a_child_created_second", "parent");
CreateUnappliedNewItemWithParent("x_child_created_second", "parent");
SyncCycleState cycle_state;
SyncProcessState process_state(syncdb_.manager(), syncdb_.name(),
NULL, NULL, NULL, NULL);
SyncerSession session(&cycle_state, &process_state);
apply_updates_command_.ModelChangingExecuteImpl(&session);
EXPECT_EQ(5, cycle_state.AppliedUpdatesSize())
<< "All updates should have been attempted";
EXPECT_EQ(0, process_state.ConflictingItemsSize())
<< "Simple update shouldn't result in conflicts, even if out-of-order";
EXPECT_EQ(0, process_state.BlockedItemsSize())
<< "Blocked items shouldn't be possible under any circumstances";
EXPECT_EQ(5, cycle_state.SuccessfullyAppliedUpdateCount())
<< "All updates should have been successfully applied";
}
TEST_F(ApplyUpdatesCommandTest, NestedItemsWithUnknownParent) {
// We shouldn't be able to do anything with either of these items.
CreateUnappliedNewItemWithParent("some_item", "unknown_parent");
CreateUnappliedNewItemWithParent("some_other_item", "some_item");
SyncCycleState cycle_state;
SyncProcessState process_state(syncdb_.manager(), syncdb_.name(),
NULL, NULL, NULL, NULL);
SyncerSession session(&cycle_state, &process_state);
apply_updates_command_.ModelChangingExecuteImpl(&session);
EXPECT_EQ(2, cycle_state.AppliedUpdatesSize())
<< "All updates should have been attempted";
EXPECT_EQ(2, process_state.ConflictingItemsSize())
<< "All updates with an unknown ancestors should be in conflict";
EXPECT_EQ(0, process_state.BlockedItemsSize())
<< "Blocked items shouldn't be possible under any circumstances";
EXPECT_EQ(0, cycle_state.SuccessfullyAppliedUpdateCount())
<< "No item with an unknown ancestor should be applied";
}
TEST_F(ApplyUpdatesCommandTest, ItemsBothKnownAndUnknown) {
// See what happens when there's a mixture of good and bad updates.
string root_server_id = syncable::kNullId.GetServerId();
CreateUnappliedNewItemWithParent("first_unknown_item", "unknown_parent");
CreateUnappliedNewItemWithParent("first_known_item", root_server_id);
CreateUnappliedNewItemWithParent("second_unknown_item", "unknown_parent");
CreateUnappliedNewItemWithParent("second_known_item", "first_known_item");
CreateUnappliedNewItemWithParent("third_known_item", "fourth_known_item");
CreateUnappliedNewItemWithParent("fourth_known_item", root_server_id);
SyncCycleState cycle_state;
SyncProcessState process_state(syncdb_.manager(), syncdb_.name(),
NULL, NULL, NULL, NULL);
SyncerSession session(&cycle_state, &process_state);
apply_updates_command_.ModelChangingExecuteImpl(&session);
EXPECT_EQ(6, cycle_state.AppliedUpdatesSize())
<< "All updates should have been attempted";
EXPECT_EQ(2, process_state.ConflictingItemsSize())
<< "The updates with unknown ancestors should be in conflict";
EXPECT_EQ(0, process_state.BlockedItemsSize())
<< "Blocked items shouldn't be possible under any circumstances";
EXPECT_EQ(4, cycle_state.SuccessfullyAppliedUpdateCount())
<< "The updates with known ancestors should be successfully applied";
}
} // namespace browser_sync
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// AuthWatcher watches authentication events and user open and close
// events, and opens and closes shares accordingly.
#ifndef CHROME_BROWSER_SYNC_ENGINE_AUTH_WATCHER_H_
#define CHROME_BROWSER_SYNC_ENGINE_AUTH_WATCHER_H_
#include <map>
#include <string>
#include "base/atomicops.h"
#include "base/scoped_ptr.h"
#include "chrome/browser/sync/engine/net/gaia_authenticator.h"
#include "chrome/browser/sync/util/event_sys.h"
#include "chrome/browser/sync/util/pthread_helpers.h"
#include "chrome/browser/sync/util/sync_types.h"
namespace syncable {
struct DirectoryManagerEvent;
class DirectoryManager;
}
namespace browser_sync {
class AllStatus;
class AuthWatcher;
class ServerConnectionManager;
class TalkMediator;
class URLFactory;
class UserSettings;
struct ServerConnectionEvent;
struct AuthWatcherEvent {
enum WhatHappened {
AUTHENTICATION_ATTEMPT_START,
AUTHWATCHER_DESTROYED,
AUTH_SUCCEEDED,
GAIA_AUTH_FAILED,
SERVICE_USER_NOT_SIGNED_UP,
SERVICE_AUTH_FAILED,
SERVICE_CONNECTION_FAILED,
// Used in a safety check in AuthWatcher::AuthenticateWithToken()
ILLEGAL_VALUE,
};
WhatHappened what_happened;
const GaiaAuthenticator::AuthResults* auth_results;
// use AuthWatcherEvent as its own traits type in hookups.
typedef AuthWatcherEvent EventType;
static inline bool IsChannelShutdownEvent(const AuthWatcherEvent& event) {
return event.what_happened == AUTHWATCHER_DESTROYED;
}
// Used for AUTH_SUCCEEDED notification
std::string user_email;
// How was this auth attempt initiated?
enum AuthenticationTrigger {
USER_INITIATED = 0, // default value.
EXPIRED_CREDENTIALS,
};
AuthenticationTrigger trigger;
};
class AuthWatcher {
public:
// Normal progression is local -> gaia -> token
enum Status { LOCALLY_AUTHENTICATED, GAIA_AUTHENTICATED, NOT_AUTHENTICATED };
typedef syncable::DirectoryManagerEvent DirectoryManagerEvent;
typedef syncable::DirectoryManager DirectoryManager;
typedef TalkMediator TalkMediator;
AuthWatcher(DirectoryManager* dirman,
ServerConnectionManager* scm,
AllStatus* allstatus,
const std::string& user_agent,
const std::string& service_id,
const std::string& gaia_url,
UserSettings* user_settings,
GaiaAuthenticator* gaia_auth,
TalkMediator* talk_mediator);
~AuthWatcher();
// Returns true if the open share has gotten zero
// updates from the sync server (initial sync complete).
bool LoadDirectoryListAndOpen(const PathString& login);
typedef EventChannel<AuthWatcherEvent, PThreadMutex> Channel;
inline Channel* channel() const {
return channel_.get();
}
void Authenticate(const std::string& email, const std::string& password,
const std::string& captcha_token, const std::string& captcha_value,
bool persist_creds_to_disk);
void Authenticate(const std::string& email, const std::string& password,
bool persist_creds_to_disk) {
Authenticate(email, password, "", "", persist_creds_to_disk);
}
// Retrieves an auth token for a named service for which a long-lived token
// was obtained at login time. Returns true if a long-lived token can be
// found, false otherwise.
bool GetAuthTokenForService(const std::string& service_name,
std::string* service_token);
std::string email() const;
syncable::DirectoryManager* dirman() const { return dirman_; }
ServerConnectionManager* scm() const { return scm_; }
AllStatus* allstatus() const { return allstatus_; }
UserSettings* settings() const { return user_settings_; }
Status status() const { return (Status)status_; }
void Logout();
// For synchronizing other destructors.
void WaitForAuthThreadFinish();
protected:
void Reset();
void ClearAuthenticationData();
void NotifyAuthSucceeded(const std::string& email);
bool StartNewAuthAttempt(const std::string& email,
const std::string& password,
const std::string& auth_token, const std::string& captcha_token,
const std::string& captcha_value, bool persist_creds_to_disk,
AuthWatcherEvent::AuthenticationTrigger trigger);
void HandleServerConnectionEvent(const ServerConnectionEvent& event);
void SaveUserSettings(const std::string& username,
const std::string& auth_token,
const bool save_credentials);
// These two helpers should only be called from the auth function.
// Returns false iff we had problems and should try GAIA_AUTH again.
bool ProcessGaiaAuthSuccess();
void ProcessGaiaAuthFailure();
// Just checks that the user has at least one local share cache.
bool AuthenticateLocally(std::string email);
// Also checks the user's password against stored password hash.
bool AuthenticateLocally(std::string email, const std::string& password);
// Sets the trigger member of the event and sends the event on channel_.
void NotifyListeners(AuthWatcherEvent* event);
const std::string& sync_service_token() const { return sync_service_token_; }
public:
bool AuthenticateWithToken(const std::string& email,
const std::string& auth_token);
protected:
typedef PThreadScopedLock<PThreadMutex> MutexLock;
// Passed to newly created threads.
struct ThreadParams {
AuthWatcher* self;
std::string email;
std::string password;
std::string auth_token;
std::string captcha_token;
std::string captcha_value;
bool persist_creds_to_disk;
AuthWatcherEvent::AuthenticationTrigger trigger;
};
// Initial function passed to pthread_create.
static void* AuthenticationThreadStartRoutine(void* arg);
// Member function called by AuthenticationThreadStartRoutine.
void* AuthenticationThreadMain(struct ThreadParams* arg);
scoped_ptr<GaiaAuthenticator> const gaia_;
syncable::DirectoryManager* const dirman_;
ServerConnectionManager* const scm_;
scoped_ptr<EventListenerHookup> connmgr_hookup_;
AllStatus* const allstatus_;
// TODO(chron): It is incorrect to make assignments to AtomicWord.
volatile base::subtle::AtomicWord status_;
UserSettings* user_settings_;
TalkMediator* talk_mediator_; // Interface to the notifications engine.
scoped_ptr<Channel> channel_;
// We store our service token in memory as a workaround to the fact that we
// don't persist it when the user unchecks "remember me".
// We also include it on outgoing requests.
std::string sync_service_token_;
PThreadMutex mutex_;
// All members below are protected by the above mutex
pthread_t thread_;
bool thread_handle_valid_;
bool authenticating_now_;
AuthWatcherEvent::AuthenticationTrigger current_attempt_trigger_;
};
} // namespace browser_sync
#endif // CHROME_BROWSER_SYNC_ENGINE_AUTH_WATCHER_H_
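
Illustrative sketch (not part of this checkin): a listener on the AuthWatcher
channel dispatches on |what_happened|. The hookup of the handler to
AuthWatcher::channel() is omitted, and the handler name is hypothetical.

void OnAuthWatcherEvent(const browser_sync::AuthWatcherEvent& event) {
  using browser_sync::AuthWatcherEvent;
  switch (event.what_happened) {
    case AuthWatcherEvent::AUTH_SUCCEEDED:
      LOG(INFO) << "Signed in as " << event.user_email;
      break;
    case AuthWatcherEvent::GAIA_AUTH_FAILED:
    case AuthWatcherEvent::SERVICE_AUTH_FAILED:
      // A UI layer would surface a credentials error here.
      break;
    default:
      break;
  }
}
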
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "chrome/browser/sync/engine/authenticator.h"
#include "chrome/browser/sync/engine/net/gaia_authenticator.h"
#include "chrome/browser/sync/engine/net/server_connection_manager.h"
#include "chrome/browser/sync/engine/syncproto.h"
#include "chrome/browser/sync/protocol/sync.pb.h"
#include "chrome/browser/sync/util/event_sys-inl.h"
#include "chrome/browser/sync/util/user_settings.h"
namespace browser_sync {
using std::string;
Authenticator::Authenticator(ServerConnectionManager *manager,
UserSettings* settings)
: server_connection_manager_(manager), settings_(settings) {
}
Authenticator::Authenticator(ServerConnectionManager *manager)
: server_connection_manager_(manager), settings_(NULL) {
}
Authenticator::AuthenticationResult Authenticator::Authenticate() {
// TODO(sync): Pull and work with saved credentials.
return NO_SAVED_CREDENTIALS;
}
Authenticator::AuthenticationResult Authenticator::Authenticate(
string username, string password, bool save_credentials) {
// TODO(scrub): need to figure out if this routine is used anywhere other than
// the test code.
GaiaAuthenticator auth_service("ChromiumBrowser", "chromiumsync",
"https://www.google.com:443/accounts/ClientLogin");
const SignIn signin_type =
settings_->RecallSigninType(username, GMAIL_SIGNIN);
if (!auth_service.Authenticate(username, password, SAVE_IN_MEMORY_ONLY,
true, signin_type)) {
return UNSPECIFIC_ERROR_RETURN;
}
CHECK(!auth_service.auth_token().empty());
return AuthenticateToken(auth_service.auth_token());
}
COMPILE_ASSERT(sync_pb::ClientToServerResponse::ERROR_TYPE_MAX == 6,
client_to_server_response_errors_changed);
Authenticator::AuthenticationResult Authenticator::HandleSuccessfulTokenRequest(
const sync_pb::UserIdentification* user) {
display_email_ = user->has_email() ? user->email() : "";
display_name_ = user->has_display_name() ? user->display_name() : "";
obfuscated_id_ = user->has_obfuscated_id() ? user->obfuscated_id() : "";
return SUCCESS;
}
Authenticator::AuthenticationResult Authenticator::AuthenticateToken(
string auth_token) {
ClientToServerMessage client_to_server_message;
// Used to be required for all requests.
client_to_server_message.set_share("");
client_to_server_message.set_message_contents(
ClientToServerMessage::AUTHENTICATE);
string tx, rx;
client_to_server_message.SerializeToString(&tx);
HttpResponse http_response;
ServerConnectionManager::PostBufferParams params =
{ tx, &rx, &http_response };
if (!server_connection_manager_->PostBufferWithAuth(&params, auth_token)) {
LOG(WARNING) << "Error posting from authenticator:" << http_response;
return SERVICE_DOWN;
}
sync_pb::ClientToServerResponse response;
if (!response.ParseFromString(rx))
return CORRUPT_SERVER_RESPONSE;
switch (response.error_code()) {
case sync_pb::ClientToServerResponse::SUCCESS:
if (response.has_authenticate() && response.authenticate().has_user())
return HandleSuccessfulTokenRequest(&response.authenticate().user());
// TODO(sync): make this CORRUPT_SERVER_RESPONSE when all servers are
// returning user identification at login time.
return SUCCESS;
case sync_pb::ClientToServerResponse::USER_NOT_ACTIVATED:
return USER_NOT_ACTIVATED;
case sync_pb::ClientToServerResponse::AUTH_INVALID:
case sync_pb::ClientToServerResponse::AUTH_EXPIRED:
return BAD_AUTH_TOKEN;
// should never happen (no birthday in this request).
case sync_pb::ClientToServerResponse::NOT_MY_BIRTHDAY:
// should never happen (auth isn't throttled).
case sync_pb::ClientToServerResponse::THROTTLED:
// should never happen (only for stores).
case sync_pb::ClientToServerResponse::ACCESS_DENIED:
default:
LOG(ERROR) << "Corrupt Server packet received by auth, error code " <<
response.error_code();
return CORRUPT_SERVER_RESPONSE;
}
}
} // namespace browser_sync
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// The authenticator is a cross-platform class that handles authentication for
// the sync client.
//
// Current State:
// The authenticator is currently only used to authenticate tokens using the
// newer protocol buffer request.
#ifndef CHROME_BROWSER_SYNC_ENGINE_AUTHENTICATOR_H_
#define CHROME_BROWSER_SYNC_ENGINE_AUTHENTICATOR_H_
#include <string>
#include "base/basictypes.h"
#include "base/port.h"
namespace sync_pb {
class UserIdentification;
}
namespace browser_sync {
class ServerConnectionManager;
class UserSettings;
class Authenticator {
public:
// Single return enum.
enum AuthenticationResult {
SUCCESS = 0,
// We couldn't log on because we don't have saved credentials.
NO_SAVED_CREDENTIALS,
// We can't reach auth server (i.e. we're offline or server's down).
NOT_CONNECTED,
// Server's up, but we're down.
SERVICE_DOWN,
// We contacted the server, but the response didn't make sense.
CORRUPT_SERVER_RESPONSE,
// Bad username/password.
BAD_CREDENTIALS,
// Credentials are fine, but the user hasn't signed up.
USER_NOT_ACTIVATED,
// Return values for internal use.
// We will never return this to the user unless they call AuthenticateToken
// directly. Other auth functions retry and then return
// CORRUPT_SERVER_RESPONSE.
// TODO(sync): Implement retries.
BAD_AUTH_TOKEN,
// We should never return this, it's a placeholder during development.
// TODO(sync): Remove this
UNSPECIFIC_ERROR_RETURN,
};
// Constructor. This class will keep the connection authenticated.
// TODO(sync): Make it work as described.
// TODO(sync): Require a UI callback mechanism.
Authenticator(ServerConnectionManager* manager, UserSettings* settings);
// Constructor for a simple authenticator used for programmatic login from
// test programs.
explicit Authenticator(ServerConnectionManager* manager);
// This version of Authenticate tries to use saved credentials, if we have
// any.
AuthenticationResult Authenticate();
// If save_credentials is set we save the long-lived auth token to local disk.
// In all cases we save the username and password in memory (if given) so we
// can refresh the long-lived auth token if it expires.
// Also we save a 10-bit hash of the password to allow offline login.
// TODO(sync): Make it work as described.
// TODO(sync): Arguments for desired domain.
AuthenticationResult Authenticate(std::string username, std::string password,
bool save_credentials);
// A version of Authenticate that uses an auth token (the cookie portion of
// authentication). It uses the new proto-buffer-based call instead of the
// HTTP GET based one we currently use.
// Can return one of SUCCESS, SERVICE_DOWN, CORRUPT_SERVER_RESPONSE,
// USER_NOT_ACTIVATED or BAD_AUTH_TOKEN. See above for the meaning of these
// values.
// TODO(sync): Make this function private when we're done.
AuthenticationResult AuthenticateToken(std::string auth_token);
const char * display_email() const { return display_email_.c_str(); }
const char * display_name() const { return display_name_.c_str(); }
private:
// Stores the information in the UserIdentification returned from the server.
AuthenticationResult HandleSuccessfulTokenRequest(
const sync_pb::UserIdentification* user);
// The server connection manager that we're looking after.
ServerConnectionManager* server_connection_manager_;
// User identification, cached from the last successful token request.
std::string display_email_;
std::string display_name_;
std::string obfuscated_id_;
UserSettings* const settings_;
};
} // namespace browser_sync
#endif // CHROME_BROWSER_SYNC_ENGINE_AUTHENTICATOR_H_
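
Usage sketch (illustrative only), e.g. from test code that already holds a
ServerConnectionManager* |scm| and a Gaia auth token |gaia_token|; both names
are hypothetical.

browser_sync::Authenticator authenticator(scm);
browser_sync::Authenticator::AuthenticationResult result =
    authenticator.AuthenticateToken(gaia_token);
if (result == browser_sync::Authenticator::SUCCESS)
  LOG(INFO) << "Authenticated as " << authenticator.display_email();
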
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef CHROME_BROWSER_SYNC_ENGINE_BUILD_AND_PROCESS_CONFLICT_SETS_COMMAND_H_
#define CHROME_BROWSER_SYNC_ENGINE_BUILD_AND_PROCESS_CONFLICT_SETS_COMMAND_H_
#include <vector>
#include "base/basictypes.h"
#include "chrome/browser/sync/engine/model_changing_syncer_command.h"
namespace syncable {
class BaseTransaction;
class Entry;
class Id;
class MutableEntry;
class WriteTransaction;
} // namespace syncable
namespace browser_sync {
class ConflictResolutionView;
class SyncerSession;
class BuildAndProcessConflictSetsCommand : public ModelChangingSyncerCommand {
public:
BuildAndProcessConflictSetsCommand();
virtual ~BuildAndProcessConflictSetsCommand();
virtual void ModelChangingExecuteImpl(SyncerSession *session);
private:
bool BuildAndProcessConflictSets(SyncerSession *session);
bool ProcessSingleDirectionConflictSets(
syncable::WriteTransaction* trans, SyncerSession* const session);
bool ApplyUpdatesTransactionally(
syncable::WriteTransaction* trans,
const std::vector<syncable::Id>* const update_set,
SyncerSession* const session);
void BuildConflictSets(
syncable::BaseTransaction* trans,
ConflictResolutionView* view);
void MergeSetsForNameClash(syncable::BaseTransaction* trans,
syncable::Entry* entry,
ConflictResolutionView* view);
void MergeSetsForIntroducedLoops(syncable::BaseTransaction* trans,
syncable::Entry* entry,
ConflictResolutionView* view);
void MergeSetsForNonEmptyDirectories(syncable::BaseTransaction* trans,
syncable::Entry* entry,
ConflictResolutionView* view);
void MergeSetsForPositionUpdate(syncable::BaseTransaction* trans,
syncable::Entry* entry,
ConflictResolutionView* view);
DISALLOW_COPY_AND_ASSIGN(BuildAndProcessConflictSetsCommand);
};
} // namespace browser_sync
#endif // CHROME_BROWSER_SYNC_ENGINE_BUILD_AND_PROCESS_CONFLICT_SETS_COMMAND_H_
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "chrome/browser/sync/engine/build_commit_command.h"
#include <set>
#include <string>
#include <vector>
#include "chrome/browser/sync/engine/syncer_proto_util.h"
#include "chrome/browser/sync/engine/syncer_session.h"
#include "chrome/browser/sync/engine/syncer_util.h"
#include "chrome/browser/sync/engine/syncproto.h"
#include "chrome/browser/sync/syncable/syncable.h"
#include "chrome/browser/sync/syncable/syncable_changes_version.h"
#include "chrome/browser/sync/util/character_set_converters.h"
#include "chrome/browser/sync/util/sync_types.h"
using std::set;
using std::string;
using std::vector;
using syncable::ExtendedAttribute;
using syncable::Id;
using syncable::MutableEntry;
using syncable::Name;
namespace browser_sync {
BuildCommitCommand::BuildCommitCommand() {}
BuildCommitCommand::~BuildCommitCommand() {}
void BuildCommitCommand::ExecuteImpl(SyncerSession *session) {
ClientToServerMessage message;
message.set_share(ToUTF8(session->account_name()).get_string());
message.set_message_contents(ClientToServerMessage::COMMIT);
CommitMessage* commit_message = message.mutable_commit();
commit_message->set_cache_guid(
session->write_transaction()->directory()->cache_guid());
const vector<Id>& commit_ids = session->commit_ids();
for (size_t i = 0; i < commit_ids.size(); i++) {
Id id = commit_ids[i];
SyncEntity* sync_entry =
static_cast<SyncEntity*>(commit_message->add_entries());
sync_entry->set_id(id);
MutableEntry meta_entry(session->write_transaction(),
syncable::GET_BY_ID,
id);
CHECK(meta_entry.good());
// this is the only change we make to the entry in this function.
meta_entry.Put(syncable::SYNCING, true);
Name name = meta_entry.GetName();
CHECK(!name.value().empty()); // Make sure this isn't an update.
sync_entry->set_name(ToUTF8(name.value()).get_string());
// Set the non_unique_name if we have one. If we do, the server ignores
// the |name| value (using |non_unique_name| instead), and will return
// in the CommitResponse a unique name if one is generated. Even though
// we could get away with only sending |name|, we send both because it
// may aid in logging.
if (name.value() != name.non_unique_value()) {
sync_entry->set_non_unique_name(
ToUTF8(name.non_unique_value()).get_string());
}
// Deleted items with negative parent ids can be a problem, so we set the
// parent to 0. (TODO(sync): Still true in protocol?)
Id new_parent_id;
if (meta_entry.Get(syncable::IS_DEL) &&
!meta_entry.Get(syncable::PARENT_ID).ServerKnows()) {
new_parent_id = session->write_transaction()->root_id();
} else {
new_parent_id = meta_entry.Get(syncable::PARENT_ID);
}
sync_entry->set_parent_id(new_parent_id);
// TODO(sync): Investigate all places that think transactional commits
// actually exist.
//
// This is the only logic we'll need when transactional commits are
// moved to the server.
// If our parent has changes, send up the old one so the server can
// correctly deal with multiple parents.
if (new_parent_id != meta_entry.Get(syncable::SERVER_PARENT_ID) &&
0 != meta_entry.Get(syncable::BASE_VERSION) &&
syncable::CHANGES_VERSION != meta_entry.Get(syncable::BASE_VERSION)) {
sync_entry->set_old_parent_id(meta_entry.Get(syncable::SERVER_PARENT_ID));
}
int64 version = meta_entry.Get(syncable::BASE_VERSION);
if (syncable::CHANGES_VERSION == version || 0 == version) {
// If this CHECK triggers during unit testing, check that we haven't
// altered an item that's an unapplied update.
CHECK(!id.ServerKnows()) << meta_entry;
sync_entry->set_version(0);
} else {
CHECK(id.ServerKnows()) << meta_entry;
sync_entry->set_version(meta_entry.Get(syncable::BASE_VERSION));
}
sync_entry->set_ctime(ClientTimeToServerTime(
meta_entry.Get(syncable::CTIME)));
sync_entry->set_mtime(ClientTimeToServerTime(
meta_entry.Get(syncable::MTIME)));
set<ExtendedAttribute> extended_attributes;
meta_entry.GetAllExtendedAttributes(
session->write_transaction(), &extended_attributes);
set<ExtendedAttribute>::iterator iter;
sync_pb::ExtendedAttributes* mutable_extended_attributes =
sync_entry->mutable_extended_attributes();
for (iter = extended_attributes.begin(); iter != extended_attributes.end();
++iter) {
sync_pb::ExtendedAttributes_ExtendedAttribute *extended_attribute =
mutable_extended_attributes->add_extendedattribute();
extended_attribute->set_key(ToUTF8(iter->key()).get_string());
SyncerProtoUtil::CopyBlobIntoProtoBytes(iter->value(),
extended_attribute->mutable_value());
}
// Deletion is final on the server, let's move things and then delete them.
if (meta_entry.Get(syncable::IS_DEL)) {
sync_entry->set_deleted(true);
} else if (meta_entry.Get(syncable::IS_BOOKMARK_OBJECT)) {
sync_pb::SyncEntity_BookmarkData* bookmark =
sync_entry->mutable_bookmarkdata();
bookmark->set_bookmark_folder(meta_entry.Get(syncable::IS_DIR));
const Id& prev_id = meta_entry.Get(syncable::PREV_ID);
string prev_string = prev_id.IsRoot() ? string() : prev_id.GetServerId();
sync_entry->set_insert_after_item_id(prev_string);
if (!meta_entry.Get(syncable::IS_DIR)) {
string bookmark_url = ToUTF8(meta_entry.Get(syncable::BOOKMARK_URL));
bookmark->set_bookmark_url(bookmark_url);
SyncerProtoUtil::CopyBlobIntoProtoBytes(
meta_entry.Get(syncable::BOOKMARK_FAVICON),
bookmark->mutable_bookmark_favicon());
}
}
}
session->set_commit_message(message);
}
} // namespace browser_sync
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef CHROME_BROWSER_SYNC_ENGINE_BUILD_COMMIT_COMMAND_H_
#define CHROME_BROWSER_SYNC_ENGINE_BUILD_COMMIT_COMMAND_H_
#include "base/basictypes.h"
#include "chrome/browser/sync/engine/syncer_command.h"
#include "chrome/browser/sync/engine/syncer_session.h"
namespace browser_sync {
class BuildCommitCommand : public SyncerCommand {
public:
BuildCommitCommand();
virtual ~BuildCommitCommand();
virtual void ExecuteImpl(SyncerSession *session);
private:
DISALLOW_COPY_AND_ASSIGN(BuildCommitCommand);
};
} // namespace browser_sync
#endif // CHROME_BROWSER_SYNC_ENGINE_BUILD_COMMIT_COMMAND_H_
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "chrome/browser/sync/engine/change_reorder_buffer.h"
#include <limits>
#include <queue>
#include <set>
#include <utility> // for pair<>
#include <vector>
#include "chrome/browser/sync/syncable/syncable.h"
using std::numeric_limits;
using std::pair;
using std::queue;
using std::set;
using std::vector;
namespace sync_api {
// Traversal provides a way to collect a set of nodes from the syncable
// directory structure and then traverse them, along with any intermediate
// nodes, in a top-down fashion, starting from a single common ancestor. A
// Traversal starts out empty and is grown by means of the ExpandToInclude
// method. Once constructed, the top(), begin_children(), and end_children()
// methods can be used to explore the nodes in root-to-leaf order.
class ChangeReorderBuffer::Traversal {
public:
typedef pair<int64, int64> ParentChildLink;
typedef set<ParentChildLink> LinkSet;
Traversal() : top_(kInvalidId) { }
// Expand the traversal so that it includes the node indicated by
// |child_handle|.
void ExpandToInclude(syncable::BaseTransaction* trans,
int64 child_handle) {
// If |top_| is invalid, this is the first insertion -- easy.
if (top_ == kInvalidId) {
top_ = child_handle;
return;
}
int64 node_to_include = child_handle;
while (node_to_include != kInvalidId && node_to_include != top_) {
int64 node_parent = 0;
syncable::Entry node(trans, syncable::GET_BY_HANDLE, node_to_include);
CHECK(node.good());
if (node.Get(syncable::ID).IsRoot()) {
// If we've hit the root, and the root isn't already in the tree
// (it would have to be |top_| if it were), start a new expansion
// upwards from |top_| to unite the original traversal with the
// path we just added that goes from |child_handle| to the root.
node_to_include = top_;
top_ = node.Get(syncable::META_HANDLE);
} else {
// Otherwise, get the parent ID so that we can add a ParentChildLink.
syncable::Entry parent(trans, syncable::GET_BY_ID,
node.Get(syncable::PARENT_ID));
CHECK(parent.good());
node_parent = parent.Get(syncable::META_HANDLE);
ParentChildLink link(node_parent, node_to_include);
// If the link exists in the LinkSet |links_|, we don't need to search
// any higher; we are done.
if (links_.find(link) != links_.end())
return;
// Otherwise, extend |links_|, and repeat on the parent.
links_.insert(link);
node_to_include = node_parent;
}
}
}
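// For example: if the traversal currently consists only of |top_| == A, then
// ExpandToInclude for a node D whose ancestry is A -> B -> D walks upward
// from D, adding the links (B, D) and then (A, B), and stops when the walk
// reaches A. (A, B and D are hypothetical meta handles, for illustration.)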
// Return the top node of the traversal. Use this as a starting point
// for walking the tree.
int64 top() const { return top_; }
// Return an iterator corresponding to the first child (in the traversal)
// of the node specified by |parent_id|. Iterate this return value until
// it is equal to the value returned by end_children(parent_id). The
// enumeration thus provided is unordered.
LinkSet::const_iterator begin_children(int64 parent_id) const {
return links_.upper_bound(
ParentChildLink(parent_id, numeric_limits<int64>::min()));
}
// Return an iterator corresponding to one past the last child (in the
// traversal) of the node specified by |parent_id|.
LinkSet::const_iterator end_children(int64 parent_id) const {
return begin_children(parent_id + 1);
}
private:
// The topmost point in the directory hierarchy that is in the traversal,
// and thus the first node to be traversed. If the traversal is empty,
// this is kInvalidId. If the traversal contains exactly one member, |top_|
// will be the solitary member, and |links_| will be empty.
int64 top_;
// A set of single-level links that compose the traversal below |top_|. The
// (parent, child) ordering of values enables efficient lookup of children
// given the parent handle, which is used for top-down traversal. |links_|
// is expected to be connected -- every node that appears as a parent in a
// link must either appear as a child of another link, or else be the
// topmost node, |top_|.
LinkSet links_;
DISALLOW_COPY_AND_ASSIGN(Traversal);
};
void ChangeReorderBuffer::GetAllChangesInTreeOrder(
const BaseTransaction* sync_trans,
vector<ChangeRecord>* changelist) {
syncable::BaseTransaction* trans = sync_trans->GetWrappedTrans();
// Step 1: Iterate through the operations, doing three things:
// (a) Push deleted items straight into the |changelist|.
// (b) Construct a traversal spanning all non-deleted items.
// (c) Construct a set of all parent nodes of any position changes.
set<int64> parents_of_position_changes;
Traversal traversal;
OperationMap::const_iterator i;
for (i = operations_.begin(); i != operations_.end(); ++i) {
if (i->second == OP_DELETE) {
ChangeRecord record;
record.id = i->first;
record.action = ChangeRecord::ACTION_DELETE;
changelist->push_back(record);
} else {
traversal.ExpandToInclude(trans, i->first);
if (i->second == OP_ADD ||
i->second == OP_UPDATE_POSITION_AND_PROPERTIES) {
ReadNode node(sync_trans);
CHECK(node.InitByIdLookup(i->first));
parents_of_position_changes.insert(node.GetParentId());
}
}
}
// Step 2: Breadth-first expansion of the traversal, enumerating children in
// the syncable sibling order if there were any position updates.
queue<int64> to_visit;
to_visit.push(traversal.top());
while (!to_visit.empty()) {
int64 next = to_visit.front();
to_visit.pop();
// If the node has an associated action, output a change record.
i = operations_.find(next);
if (i != operations_.end()) {
ChangeRecord record;
record.id = next;
if (i->second == OP_ADD)
record.action = ChangeRecord::ACTION_ADD;
else
record.action = ChangeRecord::ACTION_UPDATE;
changelist->push_back(record);
}
// Now add the children of |next| to |to_visit|.
if (parents_of_position_changes.find(next) ==
parents_of_position_changes.end()) {
// No order changes on this parent -- traverse only the nodes listed
// in the traversal (and not in sibling order).
Traversal::LinkSet::const_iterator j = traversal.begin_children(next);
Traversal::LinkSet::const_iterator end = traversal.end_children(next);
for (; j != end; ++j) {
CHECK(j->first == next);
to_visit.push(j->second);
}
} else {
// There were ordering changes on the children of this parent, so
// enumerate all the children in the sibling order.
syncable::Entry parent(trans, syncable::GET_BY_HANDLE, next);
syncable::Id id = trans->directory()->
GetFirstChildId(trans, parent.Get(syncable::ID));
while (!id.IsRoot()) {
syncable::Entry child(trans, syncable::GET_BY_ID, id);
CHECK(child.good());
int64 handle = child.Get(syncable::META_HANDLE);
to_visit.push(handle);
// If there is no operation on this child node, record it as an
// update, so that the listener gets notified of all nodes in the new
// ordering.
if (operations_.find(handle) == operations_.end())
operations_[handle] = OP_UPDATE_POSITION_AND_PROPERTIES;
id = child.Get(syncable::NEXT_ID);
}
}
}
}
} // namespace sync_api
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
//
// Defines ChangeReorderBuffer, which can be used to sort a list of item
// actions to achieve the ordering constraint required by the SyncObserver
// interface of the SyncAPI.
#ifndef CHROME_BROWSER_SYNC_ENGINE_CHANGE_REORDER_BUFFER_H_
#define CHROME_BROWSER_SYNC_ENGINE_CHANGE_REORDER_BUFFER_H_
#include <map>
#include <vector>
#include "chrome/browser/sync/engine/syncapi.h"
namespace sync_api {
// ChangeReorderBuffer is a utility type which accepts an unordered set
// of changes (via its Push methods), and yields a vector of ChangeRecords
// (via the GetAllChangesInTreeOrder method) that are in the order that
// the SyncObserver expects them to be. A buffer is initially empty.
//
// The ordering produced by ChangeReorderBuffer is as follows:
// (a) All Deleted items appear first.
// (b) For Updated and/or Added items, parents appear before their children.
// (c) When there are changes to the sibling order (this means Added items,
// or Updated items with the |position_changed| parameter set to true),
// all siblings under a parent will appear in the output, even if they
// are not explicitly pushed. The sibling order will be preserved in
// the output list -- items will appear before their sibling-order
// successors.
// (d) When there are no changes to the sibling order under a parent node,
// the sibling order is not necessarily preserved in the output for
// its children.
class ChangeReorderBuffer {
public:
typedef SyncManager::ChangeRecord ChangeRecord;
ChangeReorderBuffer() { }
// Insert an item, identified by the metahandle |id|, into the reorder
// buffer. This item will appear in the output list as an ACTION_ADD
// ChangeRecord.
void PushAddedItem(int64 id) {
operations_[id] = OP_ADD;
}
// Insert an item, identified by the metahandle |id|, into the reorder
// buffer. This item will appear in the output list as an ACTION_DELETE
// ChangeRecord.
void PushDeletedItem(int64 id) {
operations_[id] = OP_DELETE;
}
// Insert an item, identified by the metahandle |id|, into the reorder
// buffer. This item will appear in the output list as an ACTION_UPDATE
// ChangeRecord. Also, if |position_changed| is true, all siblings of this
// item will appear in the output list as well; if it wasn't explicitly
// pushed, the siblings will have an ACTION_UPDATE ChangeRecord.
void PushUpdatedItem(int64 id, bool position_changed) {
operations_[id] = position_changed ? OP_UPDATE_POSITION_AND_PROPERTIES :
OP_UPDATE_PROPERTIES_ONLY;
}
// Reset the buffer, forgetting any pushed items, so that it can be used
// again to reorder a new set of changes.
void Clear() {
operations_.clear();
}
bool IsEmpty() const {
return operations_.empty();
}
// Output a reordered list of changes to |changelist| using the items
// that were pushed into the reorder buffer. |sync_trans| is used
// to determine the ordering.
void GetAllChangesInTreeOrder(const BaseTransaction* sync_trans,
std::vector<ChangeRecord>* changelist);
private:
class Traversal;
enum Operation {
OP_ADD, // AddedItem.
OP_DELETE, // DeletedItem.
OP_UPDATE_PROPERTIES_ONLY, // UpdatedItem with position_changed=0.
OP_UPDATE_POSITION_AND_PROPERTIES, // UpdatedItem with position_changed=1.
};
typedef std::map<int64, Operation> OperationMap;
// Stores the items that have been pushed into the buffer, and the
// type of operation that was associated with them.
OperationMap operations_;
DISALLOW_COPY_AND_ASSIGN(ChangeReorderBuffer);
};
} // namespace sync_api
#endif // CHROME_BROWSER_SYNC_ENGINE_CHANGE_REORDER_BUFFER_H_
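
A usage sketch (illustrative only): callers push the changes they observe,
then ask for the reordered list inside a transaction. |trans| and the
metahandle values below are hypothetical.

sync_api::ChangeReorderBuffer buffer;
buffer.PushDeletedItem(10);        // Deletions are emitted first.
buffer.PushAddedItem(42);          // Parents are emitted before children.
buffer.PushUpdatedItem(7, false);  // Properties-only update.
std::vector<sync_api::ChangeReorderBuffer::ChangeRecord> ordered;
if (!buffer.IsEmpty())
  buffer.GetAllChangesInTreeOrder(trans, &ordered);
buffer.Clear();                    // Ready to collect the next batch.
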
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef CHROME_BROWSER_SYNC_ENGINE_CLIENT_COMMAND_CHANNEL_H_
#define CHROME_BROWSER_SYNC_ENGINE_CLIENT_COMMAND_CHANNEL_H_
#include "chrome/browser/sync/protocol/sync.pb.h"
#include "chrome/browser/sync/util/event_sys.h"
namespace browser_sync {
// Commands for the client come back in sync responses, which is kind
// of inconvenient because some services (like the bandwidth throttler)
// want to know about them. So to avoid explicit dependencies on this
// protocol behavior, the syncer dumps all client commands onto a shared
// client command channel.
struct ClientCommandChannelTraits {
typedef const sync_pb::ClientCommand* EventType;
static inline bool IsChannelShutdownEvent(const EventType &event) {
return 0 == event;
}
};
typedef EventChannel<ClientCommandChannelTraits, PThreadMutex>
ClientCommandChannel;
} // namespace browser_sync
#endif // CHROME_BROWSER_SYNC_ENGINE_CLIENT_COMMAND_CHANNEL_H_
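
Illustrative sketch (not part of this checkin): a consumer of the shared
channel receives the raw protobuf pointer and treats NULL as the shutdown
sentinel, per the traits above. No particular ClientCommand fields are
assumed; the handler name and its hookup are hypothetical.

void OnClientCommand(
    browser_sync::ClientCommandChannelTraits::EventType command) {
  if (browser_sync::ClientCommandChannelTraits::IsChannelShutdownEvent(
          command))
    return;  // NULL means the channel is shutting down.
  // A consumer such as a bandwidth throttler would inspect |command| here.
}
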
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
//
// THIS CLASS PROVIDES NO SYNCHRONIZATION GUARANTEES.
#include "chrome/browser/sync/engine/conflict_resolution_view.h"
#include <map>
#include <set>
#include "chrome/browser/sync/engine/sync_process_state.h"
#include "chrome/browser/sync/engine/syncer_session.h"
using std::map;
using std::set;
namespace browser_sync {
ConflictResolutionView::ConflictResolutionView(SyncerSession* session)
: process_state_(session->sync_process_state_) {}
int ConflictResolutionView::conflicting_updates() const {
return process_state_->conflicting_updates();
}
int ConflictResolutionView::conflicting_commits() const {
return process_state_->conflicting_commits();
}
void ConflictResolutionView::set_conflicting_commits(const int val) {
process_state_->set_conflicting_commits(val);
}
int ConflictResolutionView::num_sync_cycles() const {
return process_state_->num_sync_cycles_;
}
void ConflictResolutionView::increment_num_sync_cycles() {
++(process_state_->num_sync_cycles_);
}
void ConflictResolutionView::zero_num_sync_cycles() {
process_state_->num_sync_cycles_ = 0;
}
int64 ConflictResolutionView::current_sync_timestamp() const {
return process_state_->current_sync_timestamp();
}
int64 ConflictResolutionView::servers_latest_timestamp() const {
return process_state_->servers_latest_timestamp();
}
// True iff we're stuck. User should contact support.
bool ConflictResolutionView::syncer_stuck() const {
return process_state_->syncer_stuck();
}
void ConflictResolutionView::set_syncer_stuck(const bool val) {
process_state_->set_syncer_stuck(val);
}
IdToConflictSetMap::const_iterator ConflictResolutionView::IdToConflictSetFind(
const syncable::Id& the_id) const {
return process_state_->IdToConflictSetFind(the_id);
}
IdToConflictSetMap::const_iterator
ConflictResolutionView::IdToConflictSetBegin() const {
return process_state_->IdToConflictSetBegin();
}
IdToConflictSetMap::const_iterator
ConflictResolutionView::IdToConflictSetEnd() const {
return process_state_->IdToConflictSetEnd();
}
IdToConflictSetMap::size_type
ConflictResolutionView::IdToConflictSetSize() const {
return process_state_->IdToConflictSetSize();
}
const ConflictSet*
ConflictResolutionView::IdToConflictSetGet(const syncable::Id& the_id) {
return process_state_->IdToConflictSetGet(the_id);
}
set<ConflictSet*>::const_iterator
ConflictResolutionView::ConflictSetsBegin() const {
return process_state_->ConflictSetsBegin();
}
set<ConflictSet*>::const_iterator
ConflictResolutionView::ConflictSetsEnd() const {
return process_state_->ConflictSetsEnd();
}
set<ConflictSet*>::size_type
ConflictResolutionView::ConflictSetsSize() const {
return process_state_->ConflictSetsSize();
}
void ConflictResolutionView::MergeSets(const syncable::Id& set1,
const syncable::Id& set2) {
process_state_->MergeSets(set1, set2);
}
void ConflictResolutionView::CleanupSets() {
process_state_->CleanupSets();
}
bool ConflictResolutionView::HasCommitConflicts() const {
return process_state_->HasConflictingItems();
}
bool ConflictResolutionView::HasBlockedItems() const {
return process_state_->HasBlockedItems();
}
int ConflictResolutionView::CommitConflictsSize() const {
return process_state_->ConflictingItemsSize();
}
int ConflictResolutionView::BlockedItemsSize() const {
return process_state_->BlockedItemsSize();
}
void ConflictResolutionView::AddCommitConflict(const syncable::Id& the_id) {
process_state_->AddConflictingItem(the_id);
}
void ConflictResolutionView::AddBlockedItem(const syncable::Id& the_id) {
process_state_->AddBlockedItem(the_id);
}
void ConflictResolutionView::EraseCommitConflict(
set<syncable::Id>::iterator it) {
process_state_->EraseConflictingItem(it);
}
void ConflictResolutionView::EraseBlockedItem(
set<syncable::Id>::iterator it) {
process_state_->EraseBlockedItem(it);
}
set<syncable::Id>::iterator
ConflictResolutionView::CommitConflictsBegin() const {
return process_state_->ConflictingItemsBegin();
}
set<syncable::Id>::iterator
ConflictResolutionView::BlockedItemsBegin() const {
return process_state_->BlockedItemsBegin();
}
set<syncable::Id>::iterator
ConflictResolutionView::CommitConflictsEnd() const {
return process_state_->ConflictingItemsEnd();
}
set<syncable::Id>::iterator
ConflictResolutionView::BlockedItemsEnd() const {
return process_state_->BlockedItemsEnd();
}
} // namespace browser_sync
// Copyright (c) 2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
//
// ConflictResolutionView is intended to provide a restricted
// view of the sync cycle state for the conflict resolver. Since the
// resolver doesn't get to see all of the sync process state, we can allow
// it to operate on a subsection of the data.
#ifndef CHROME_BROWSER_SYNC_ENGINE_CONFLICT_RESOLUTION_VIEW_H_
#define CHROME_BROWSER_SYNC_ENGINE_CONFLICT_RESOLUTION_VIEW_H_
#include <map>
#include <set>
#include <vector>
#include "base/basictypes.h"
#include "chrome/browser/sync/engine/syncer_types.h"
namespace syncable {
class Id;
}
namespace browser_sync {
class SyncCycleState;
class SyncProcessState;
class SyncerSession;
class ConflictResolutionView {
// THIS CLASS PROVIDES NO SYNCHRONIZATION GUARANTEES.
public:
explicit ConflictResolutionView(SyncProcessState* state)
: process_state_(state) {
}
explicit ConflictResolutionView(SyncerSession* session);
~ConflictResolutionView() {}
int conflicting_updates() const;
// TODO(sync) can successful commit go in session?
int successful_commits() const;
void increment_successful_commits();
void zero_successful_commits();
int conflicting_commits() const;
void set_conflicting_commits(const int val);
int num_sync_cycles() const;
void increment_num_sync_cycles();
void zero_num_sync_cycles();
// True iff we're stuck. Something has gone wrong with the syncer.
bool syncer_stuck() const;
void set_syncer_stuck(const bool val);
int64 current_sync_timestamp() const;
int64 servers_latest_timestamp() const;
IdToConflictSetMap::const_iterator IdToConflictSetFind(
const syncable::Id& the_id) const;
IdToConflictSetMap::const_iterator IdToConflictSetBegin() const;
IdToConflictSetMap::const_iterator IdToConflictSetEnd() const;
IdToConflictSetMap::size_type IdToConflictSetSize() const;
const ConflictSet* IdToConflictSetGet(const syncable::Id& the_id);
std::set<ConflictSet*>::const_iterator ConflictSetsBegin() const;
std::set<ConflictSet*>::const_iterator ConflictSetsEnd() const;
std::set<ConflictSet*>::size_type ConflictSetsSize() const;
void MergeSets(const syncable::Id& set1, const syncable::Id& set2);
void CleanupSets();
bool HasCommitConflicts() const;
bool HasBlockedItems() const;
int CommitConflictsSize() const;
int BlockedItemsSize() const;
void AddCommitConflict(const syncable::Id& the_id);
void AddBlockedItem(const syncable::Id& the_id);
void EraseCommitConflict(std::set<syncable::Id>::iterator it);
void EraseBlockedItem(std::set<syncable::Id>::iterator it);
std::set<syncable::Id>::iterator CommitConflictsBegin() const;
std::set<syncable::Id>::iterator BlockedItemsBegin() const;
std::set<syncable::Id>::iterator CommitConflictsEnd() const;
std::set<syncable::Id>::iterator BlockedItemsEnd() const;
private:
SyncProcessState* process_state_;
DISALLOW_COPY_AND_ASSIGN(ConflictResolutionView);
};
} // namespace browser_sync
#endif // CHROME_BROWSER_SYNC_ENGINE_CONFLICT_RESOLUTION_VIEW_H_
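// A minimal, hypothetical sketch of the restricted-view pattern used above:
// a thin view class forwards only a chosen subset of accessors to a larger
// shared process-state object, so the component holding the view cannot
// touch anything else. The Toy* types below are illustrative stand-ins, not
// the real SyncProcessState/ConflictResolutionView.
#include <set>
#include <string>

class ToyProcessState {
 public:
  void AddConflictingItem(const std::string& id) { conflicting_.insert(id); }
  bool HasConflictingItems() const { return !conflicting_.empty(); }
  void set_syncer_stuck(bool stuck) { syncer_stuck_ = stuck; }
  bool syncer_stuck() const { return syncer_stuck_; }
 private:
  std::set<std::string> conflicting_;
  bool syncer_stuck_ = false;
};

// The view owns no state of its own; every call is a straight delegation,
// exactly like the ConflictResolutionView methods in the .cc file above.
class ToyConflictView {
 public:
  explicit ToyConflictView(ToyProcessState* state) : state_(state) {}
  void AddCommitConflict(const std::string& id) {
    state_->AddConflictingItem(id);
  }
  bool HasCommitConflicts() const { return state_->HasConflictingItems(); }
  bool syncer_stuck() const { return state_->syncer_stuck(); }
 private:
  ToyProcessState* state_;  // Not owned.
};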
This diff is collapsed.
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
//
// A class that watches the syncer and attempts to resolve any conflicts that
// occur.
#ifndef CHROME_BROWSER_SYNC_ENGINE_CONFLICT_RESOLVER_H_
#define CHROME_BROWSER_SYNC_ENGINE_CONFLICT_RESOLVER_H_
#include <list>
#include <vector>
#include "base/basictypes.h"
#include "chrome/browser/sync/engine/conflict_resolution_view.h"
#include "chrome/browser/sync/engine/syncer_session.h"
#include "chrome/browser/sync/engine/syncer_status.h"
#include "chrome/browser/sync/engine/syncer_types.h"
#include "chrome/browser/sync/util/event_sys.h"
#include "testing/gtest/include/gtest/gtest_prod.h" // For FRIEND_TEST
namespace syncable {
class BaseTransaction;
class Id;
class MutableEntry;
class ScopedDirLookup;
class WriteTransaction;
} // namespace syncable
namespace browser_sync {
class ConflictResolver {
friend class SyncerTest;
FRIEND_TEST(SyncerTest, ConflictResolverMergeOverwritesLocalEntry);
public:
ConflictResolver();
~ConflictResolver();
// Called by the syncer at the end of a update/commit cycle.
// Returns true if the syncer should try to apply its updates again.
bool ResolveConflicts(const syncable::ScopedDirLookup& dir,
ConflictResolutionView* view,
SyncerSession *session);
// Called by ProcessServerClientNameClash. Returns true if it has merged the
// items, false otherwise. Does not re-check preconditions covered in
// ProcessServerClientNameClash (i.e. it assumes a name clash).
bool AttemptItemMerge(syncable::WriteTransaction* trans,
syncable::MutableEntry* local_entry,
syncable::MutableEntry* server_entry);
private:
// We keep a map to record how often we've seen each conflict set. We use this
// to screen out false positives caused by transient server or client states,
// and to allow us to try to make smaller changes to fix situations before
// moving on to more drastic solutions.
typedef std::string ConflictSetCountMapKey;
typedef std::map<ConflictSetCountMapKey, int> ConflictSetCountMap;
typedef std::map<syncable::Id, int> SimpleConflictCountMap;
enum ProcessSimpleConflictResult {
NO_SYNC_PROGRESS, // No changes to advance syncing made.
SYNC_PROGRESS, // Progress made.
};
enum ServerClientNameClashReturn {
NO_CLASH,
SOLUTION_DEFERRED,
SOLVED,
BOGUS_SET,
};
// Get a key for the given set. NB: May reorder set contents.
// The key is currently not very efficient, but will ease debugging.
ConflictSetCountMapKey GetSetKey(ConflictSet* conflict_set);
void IgnoreLocalChanges(syncable::MutableEntry * entry);
void OverwriteServerChanges(syncable::WriteTransaction* trans,
syncable::MutableEntry* entry);
ProcessSimpleConflictResult ProcessSimpleConflict(
syncable::WriteTransaction* trans,
syncable::Id id,
SyncerSession* session);
bool ResolveSimpleConflicts(const syncable::ScopedDirLookup& dir,
ConflictResolutionView* view,
SyncerSession* session);
bool ProcessConflictSet(syncable::WriteTransaction* trans,
ConflictSet* conflict_set,
int conflict_count,
SyncerSession* session);
// Gives any unsynced entries in the given set new names if possible.
bool RenameUnsyncedEntries(syncable::WriteTransaction* trans,
ConflictSet* conflict_set);
ServerClientNameClashReturn ProcessServerClientNameClash(
syncable::WriteTransaction* trans,
syncable::MutableEntry* locally_named,
syncable::MutableEntry* server_named,
SyncerSession* session);
ServerClientNameClashReturn ProcessNameClashesInSet(
syncable::WriteTransaction* trans,
ConflictSet* conflict_set,
SyncerSession* session);
// Returns true if we're stuck.
template <typename InputIt>
bool LogAndSignalIfConflictStuck(syncable::BaseTransaction* trans,
int attempt_count,
InputIt start, InputIt end,
ConflictResolutionView* view);
ConflictSetCountMap conflict_set_count_map_;
SimpleConflictCountMap simple_conflict_count_map_;
// Contains the ids of uncommitted items that are children of entries merged
// in the previous cycle. This is used to speed up the merge resolution of
// deep trees. Used to happen in store refresh.
// TODO(chron): Can we get rid of this optimization?
std::set<syncable::Id> children_of_merged_dirs_;
DISALLOW_COPY_AND_ASSIGN(ConflictResolver);
};
} // namespace browser_sync
#endif // CHROME_BROWSER_SYNC_ENGINE_CONFLICT_RESOLVER_H_
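// A minimal sketch of the "count how often we've seen this conflict set"
// idea behind conflict_set_count_map_: transient conflicts disappear before
// the count grows, while persistent sets eventually cross a threshold and
// justify more drastic resolution. The key format and the threshold are
// illustrative assumptions, not taken from the class above.
#include <algorithm>
#include <map>
#include <string>
#include <vector>

typedef std::map<std::string, int> ToyConflictSetCountMap;

// Builds a stable key by concatenating the sorted member ids of a set.
std::string ToyGetSetKey(std::vector<std::string> ids) {
  std::sort(ids.begin(), ids.end());
  std::string key;
  for (size_t i = 0; i < ids.size(); ++i)
    key += ids[i] + ",";
  return key;
}

// Returns true once the same conflict set has been seen |stuck_threshold|
// times; sets caused by transient server/client states never get that far.
bool ToyShouldEscalate(ToyConflictSetCountMap* counts,
                       const std::vector<std::string>& set_ids,
                       int stuck_threshold) {
  int seen = ++(*counts)[ToyGetSetKey(set_ids)];
  return seen >= stuck_threshold;
}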
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "chrome/browser/sync/engine/download_updates_command.h"
#include <string>
#include "chrome/browser/sync/engine/syncer.h"
#include "chrome/browser/sync/engine/syncer_proto_util.h"
#include "chrome/browser/sync/engine/syncproto.h"
#include "chrome/browser/sync/syncable/directory_manager.h"
#include "chrome/browser/sync/util/character_set_converters.h"
#include "chrome/browser/sync/util/sync_types.h"
using syncable::ScopedDirLookup;
namespace browser_sync {
using std::string;
DownloadUpdatesCommand::DownloadUpdatesCommand() {}
DownloadUpdatesCommand::~DownloadUpdatesCommand() {}
void DownloadUpdatesCommand::ExecuteImpl(SyncerSession *session) {
ClientToServerMessage client_to_server_message;
ClientToServerResponse update_response;
client_to_server_message.set_share(
static_cast<const string&>(ToUTF8(session->account_name())));
client_to_server_message.set_message_contents(
ClientToServerMessage::GET_UPDATES);
GetUpdatesMessage* get_updates =
client_to_server_message.mutable_get_updates();
ScopedDirLookup dir(session->dirman(), session->account_name());
if (!dir.good()) {
LOG(ERROR) << "Scoped dir lookup failed!";
return;
}
LOG(INFO) << "Getting updates from ts " << dir->last_sync_timestamp();
get_updates->set_from_timestamp(dir->last_sync_timestamp());
// Set GetUpdatesMessage.GetUpdatesCallerInfo information.
get_updates->mutable_caller_info()->set_source(session->TestAndSetSource());
get_updates->mutable_caller_info()->set_notifications_enabled(
session->notifications_enabled());
bool ok = SyncerProtoUtil::PostClientToServerMessage(
&client_to_server_message,
&update_response,
session);
if (!ok) {
SyncerStatus status(session);
status.increment_consecutive_problem_get_updates();
status.increment_consecutive_errors();
LOG(ERROR) << "PostClientToServerMessage() failed";
return;
}
session->set_update_response(update_response);
}
} // namespace browser_sync
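// A simplified, hypothetical stand-in for the GetUpdates request assembled
// by ExecuteImpl() above. The struct mirrors the fields set on
// ClientToServerMessage/GetUpdatesMessage, but is plain C++ rather than the
// generated protobuf code, and the field types are assumptions.
#include <string>

struct ToyGetUpdatesRequest {
  std::string share;           // Account whose data is being synced.
  long long from_timestamp;    // Only changes newer than this are wanted.
  int source;                  // Why this sync cycle ran (caller info).
  bool notifications_enabled;  // Whether xmpp notifications are active.
};

// The download is incremental: the client always asks for "everything after
// the last timestamp I stored", so a failed cycle can simply be retried.
ToyGetUpdatesRequest ToyBuildRequest(const std::string& account,
                                     long long last_sync_timestamp,
                                     int source,
                                     bool notifications_enabled) {
  ToyGetUpdatesRequest request;
  request.share = account;
  request.from_timestamp = last_sync_timestamp;
  request.source = source;
  request.notifications_enabled = notifications_enabled;
  return request;
}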
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef CHROME_BROWSER_SYNC_ENGINE_DOWNLOAD_UPDATES_COMMAND_H_
#define CHROME_BROWSER_SYNC_ENGINE_DOWNLOAD_UPDATES_COMMAND_H_
#include "base/basictypes.h"
#include "chrome/browser/sync/engine/syncer_command.h"
#include "chrome/browser/sync/engine/syncer_session.h"
namespace browser_sync {
// Downloads updates from the server and places them in the SyncerSession.
class DownloadUpdatesCommand : public SyncerCommand {
public:
DownloadUpdatesCommand();
virtual ~DownloadUpdatesCommand();
virtual void ExecuteImpl(SyncerSession *session);
private:
DISALLOW_COPY_AND_ASSIGN(DownloadUpdatesCommand);
};
} // namespace browser_sync
#endif // CHROME_BROWSER_SYNC_ENGINE_DOWNLOAD_UPDATES_COMMAND_H_
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "chrome/browser/sync/engine/get_commit_ids_command.h"
#include <set>
#include <utility>
#include <vector>
#include "chrome/browser/sync/engine/syncer_util.h"
#include "chrome/browser/sync/engine/syncer_session.h"
#include "chrome/browser/sync/syncable/syncable.h"
#include "chrome/browser/sync/util/sync_types.h"
using std::set;
using std::vector;
namespace browser_sync {
GetCommitIdsCommand::GetCommitIdsCommand(int commit_batch_size)
: requested_commit_batch_size_(commit_batch_size) {}
GetCommitIdsCommand::~GetCommitIdsCommand() {}
void GetCommitIdsCommand::ExecuteImpl(SyncerSession *session) {
// Gather the full set of unsynced items and store it in the session.
// They are not in the correct order for commit.
syncable::Directory::UnsyncedMetaHandles all_unsynced_handles;
SyncerUtil::GetUnsyncedEntries(session->write_transaction(),
&all_unsynced_handles);
session->set_unsynced_handles(all_unsynced_handles);
BuildCommitIds(session);
const vector<syncable::Id>& verified_commit_ids =
ordered_commit_set_.GetCommitIds();
for (size_t i = 0; i < verified_commit_ids.size(); i++)
LOG(INFO) << "Debug commit batch result:" << verified_commit_ids[i];
session->set_commit_ids(verified_commit_ids);
}
void GetCommitIdsCommand::AddUncommittedParentsAndTheirPredecessors(
syncable::BaseTransaction* trans,
syncable::Id parent_id) {
using namespace syncable;
OrderedCommitSet item_dependencies;
// Climb the tree adding entries leaf -> root.
while (!parent_id.ServerKnows()) {
Entry parent(trans, GET_BY_ID, parent_id);
CHECK(parent.good()) << "Bad user-only parent in item path.";
int64 handle = parent.Get(META_HANDLE);
if (ordered_commit_set_.HaveCommitItem(handle) ||
item_dependencies.HaveCommitItem(handle)) {
break;
}
if (!AddItemThenPredecessors(trans, &parent, IS_UNSYNCED,
&item_dependencies)) {
break; // Parent was already present in the set.
}
parent_id = parent.Get(PARENT_ID);
}
// Reverse what we added to get the correct order.
ordered_commit_set_.AppendReverse(item_dependencies);
}
bool GetCommitIdsCommand::AddItem(syncable::Entry* item,
OrderedCommitSet* result) {
int64 item_handle = item->Get(syncable::META_HANDLE);
if (result->HaveCommitItem(item_handle) ||
ordered_commit_set_.HaveCommitItem(item_handle)) {
return false;
}
result->AddCommitItem(item_handle, item->Get(syncable::ID));
return true;
}
bool GetCommitIdsCommand::AddItemThenPredecessors(
syncable::BaseTransaction* trans,
syncable::Entry* item,
syncable::IndexedBitField inclusion_filter,
OrderedCommitSet* result) {
if (!AddItem(item, result))
return false;
if (item->Get(syncable::IS_DEL))
return true; // Deleted items have no predecessors.
syncable::Id prev_id = item->Get(syncable::PREV_ID);
while (!prev_id.IsRoot()) {
syncable::Entry prev(trans, syncable::GET_BY_ID, prev_id);
CHECK(prev.good()) << "Bad id when walking predecessors.";
if (!prev.Get(inclusion_filter))
break;
if (!AddItem(&prev, result))
break;
prev_id = prev.Get(syncable::PREV_ID);
}
return true;
}
void GetCommitIdsCommand::AddPredecessorsThenItem(
syncable::BaseTransaction* trans,
syncable::Entry* item,
syncable::IndexedBitField inclusion_filter) {
OrderedCommitSet item_dependencies;
AddItemThenPredecessors(trans, item, inclusion_filter, &item_dependencies);
// Reverse what we added to get the correct order.
ordered_commit_set_.AppendReverse(item_dependencies);
}
bool GetCommitIdsCommand::IsCommitBatchFull() {
return ordered_commit_set_.Size() >= requested_commit_batch_size_;
}
void GetCommitIdsCommand::AddCreatesAndMoves(SyncerSession *session) {
// Add moves and creates, and prepend their uncommitted parents.
for (CommitMetahandleIterator iterator(session, &ordered_commit_set_);
!IsCommitBatchFull() && iterator.Valid();
iterator.Increment()) {
int64 metahandle = iterator.Current();
syncable::Entry entry(session->write_transaction(),
syncable::GET_BY_HANDLE,
metahandle);
if (!entry.Get(syncable::IS_DEL)) {
AddUncommittedParentsAndTheirPredecessors(
session->write_transaction(), entry.Get(syncable::PARENT_ID));
AddPredecessorsThenItem(session->write_transaction(), &entry,
syncable::IS_UNSYNCED);
}
}
// It's possible that we overcommitted while trying to expand dependent
// items. If so, truncate the set down to the allowed size.
ordered_commit_set_.Truncate(requested_commit_batch_size_);
}
void GetCommitIdsCommand::AddDeletes(SyncerSession *session) {
set<syncable::Id> legal_delete_parents;
for (CommitMetahandleIterator iterator(session, &ordered_commit_set_);
!IsCommitBatchFull() && iterator.Valid();
iterator.Increment()) {
int64 metahandle = iterator.Current();
syncable::Entry entry(session->write_transaction(),
syncable::GET_BY_HANDLE,
metahandle);
if (entry.Get(syncable::IS_DEL)) {
syncable::Entry parent(session->write_transaction(),
syncable::GET_BY_ID,
entry.Get(syncable::PARENT_ID));
// If the parent is deleted and unsynced, then any children of that
// parent don't need to be added to the delete queue.
//
// Note: the parent could be synced if there was an update deleting a
// folder when we had already deleted all of the items in it.
// We may get more updates, or we may want to delete the entry.
if (parent.good() &&
parent.Get(syncable::IS_DEL) &&
parent.Get(syncable::IS_UNSYNCED)) {
// However, if an entry is moved, these rules can apply differently.
//
// If the entry was moved, then the destination parent was deleted,
// then we'll miss it in the roll up. We have to add it in manually.
// TODO(chron): Unit test for move / delete cases:
// Case 1: Locally moved, then parent deleted
// Case 2: Server moved, then locally issue recursive delete.
if (entry.Get(syncable::ID).ServerKnows() &&
entry.Get(syncable::PARENT_ID) !=
entry.Get(syncable::SERVER_PARENT_ID)) {
LOG(INFO) << "Inserting moved and deleted entry, will be missed by"
" delete roll." << entry.Get(syncable::ID);
ordered_commit_set_.AddCommitItem(metahandle,
entry.Get(syncable::ID));
}
// Skip this entry since it's a child of a parent that will be
// deleted. The server will unroll the delete and delete the
// child as well.
continue;
}
legal_delete_parents.insert(entry.Get(syncable::PARENT_ID));
}
}
// We could store all the potential entries with a particular parent during
// the above scan, but instead we rescan here. This is less efficient, but
// we're dropping memory alloc/dealloc in favor of linear scans of recently
// examined entries.
//
// Scan through the UnsyncedMetaHandles again. If we have a deleted
// entry, then check if the parent is in legal_delete_parents.
//
// Parent being in legal_delete_parents means, for the child:
//  - a recursive delete is not currently happening (no recent deletes in
//    the same folder);
//  - the parent did expect at least one old deleted child;
//  - the parent itself was not deleted.
for (CommitMetahandleIterator iterator(session, &ordered_commit_set_);
!IsCommitBatchFull() && iterator.Valid();
iterator.Increment()) {
int64 metahandle = iterator.Current();
syncable::MutableEntry entry(session->write_transaction(),
syncable::GET_BY_HANDLE,
metahandle);
if (entry.Get(syncable::IS_DEL)) {
syncable::Id parent_id = entry.Get(syncable::PARENT_ID);
if (legal_delete_parents.count(parent_id)) {
ordered_commit_set_.AddCommitItem(metahandle, entry.Get(syncable::ID));
}
}
}
}
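// An illustrative walk-through of the moved-then-deleted corner case handled
// in AddDeletes() above (the names are hypothetical): bookmark B lives in
// folder F1 on the server, so its SERVER_PARENT_ID is F1. Locally, B is moved
// into folder F2, and F2 is then deleted along with its contents. B's local
// parent (F2) is deleted and unsynced, so B would normally be skipped and
// left to the server's recursive delete roll-up -- but the server still
// thinks B is in F1, so rolling up F2 would miss it. Because B's PARENT_ID
// differs from its SERVER_PARENT_ID, the code commits B's delete explicitly.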
void GetCommitIdsCommand::BuildCommitIds(SyncerSession *session) {
// Commits follow these rules:
// 1. Moves or creates are preceded by needed folder creates, from
// root to leaf. For folders whose contents are ordered, moves
// and creates appear in order.
// 2. Moves/Creates before deletes.
// 3. Deletes, collapsed.
// We commit deleted moves under deleted items as moves when collapsing
// delete trees.
// Add moves and creates, and prepend their uncommitted parents.
AddCreatesAndMoves(session);
// Add all deletes.
AddDeletes(session);
}
} // namespace browser_sync
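// A worked example of the BuildCommitIds() ordering rules, using a
// hypothetical local change set: the user creates folder F, creates
// bookmarks A and B inside it (in that order), moves existing bookmark C
// into F, and deletes an unrelated bookmark D. Rule 1 orders the
// creates/moves root -> leaf with sibling order preserved: F, A, B, C.
// Rule 2 puts all of those ahead of any deletes. Rule 3 appends the
// collapsed deletes, so the final batch is F, A, B, C, D. If D were instead
// a folder deleted together with its children, only D itself would be
// committed and the server's delete roll-up would remove the children.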
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef CHROME_BROWSER_SYNC_ENGINE_GET_COMMIT_IDS_COMMAND_H_
#define CHROME_BROWSER_SYNC_ENGINE_GET_COMMIT_IDS_COMMAND_H_
#include <vector>
#include <utility>
#include "chrome/browser/sync/engine/syncer_command.h"
#include "chrome/browser/sync/engine/syncer_util.h"
#include "chrome/browser/sync/engine/syncer_session.h"
#include "chrome/browser/sync/util/sync_types.h"
using std::pair;
using std::vector;
namespace browser_sync {
class GetCommitIdsCommand : public SyncerCommand {
friend class SyncerTest;
public:
explicit GetCommitIdsCommand(int commit_batch_size);
virtual ~GetCommitIdsCommand();
virtual void ExecuteImpl(SyncerSession *session);
// Returns a vector of IDs that should be committed.
void BuildCommitIds(SyncerSession *session);
// These classes are public for testing.
// TODO(ncarter): This code is more generic than just Commit and can
// be reused elsewhere (e.g. PositionalRunBuilder, ChangeReorderBuffer
// do similar things). Merge all these implementations.
class OrderedCommitSet {
public:
// TODO(chron): Reserve space according to batch size?
OrderedCommitSet() {}
~OrderedCommitSet() {}
bool HaveCommitItem(const int64 metahandle) const {
return inserted_metahandles_.count(metahandle) > 0;
}
void AddCommitItem(const int64 metahandle, const syncable::Id& commit_id) {
if (!HaveCommitItem(metahandle)) {
inserted_metahandles_.insert(metahandle);
metahandle_order_.push_back(metahandle);
commit_ids_.push_back(commit_id);
}
}
const vector<syncable::Id>& GetCommitIds() const {
return commit_ids_;
}
pair<int64, syncable::Id> GetCommitItemAt(const int position) const {
DCHECK(position < Size());
return pair<int64, syncable::Id> (
metahandle_order_[position], commit_ids_[position]);
}
int Size() const {
return commit_ids_.size();
}
void AppendReverse(const OrderedCommitSet& other) {
for (int i = other.Size() - 1; i >= 0; i--) {
pair<int64, syncable::Id> item = other.GetCommitItemAt(i);
AddCommitItem(item.first, item.second);
}
}
void Truncate(size_t max_size) {
if (max_size < metahandle_order_.size()) {
for (size_t i = max_size; i < metahandle_order_.size(); ++i) {
inserted_metahandles_.erase(metahandle_order_[i]);
}
commit_ids_.resize(max_size);
metahandle_order_.resize(max_size);
}
}
private:
// These three containers are different views of the same data; i.e., they
// are isomorphic.
syncable::MetahandleSet inserted_metahandles_;
vector<syncable::Id> commit_ids_;
vector<int64> metahandle_order_;
DISALLOW_COPY_AND_ASSIGN(OrderedCommitSet);
};
// TODO(chron): Remove writes from this iterator. As a warning, this
// iterator causes writes to entries and so isn't a pure iterator.
// It will do Put(IS_UNSYNCED) as well as add things to the blocked
// session list. Refactor this out later.
class CommitMetahandleIterator {
public:
// TODO(chron): Cache ValidateCommitEntry responses across iterators to save
// UTF8 conversion and filename checking
CommitMetahandleIterator(SyncerSession* session,
OrderedCommitSet* commit_set)
: session_(session),
commit_set_(commit_set) {
handle_iterator_ = session->unsynced_handles().begin();
// TODO(chron): Remove writes from this iterator.
DCHECK(session->has_open_write_transaction());
if (Valid() && !ValidateMetahandleForCommit(*handle_iterator_))
Increment();
}
~CommitMetahandleIterator() {}
int64 Current() const {
DCHECK(Valid());
return *handle_iterator_;
}
bool Increment() {
if (!Valid())
return false;
for (++handle_iterator_;
handle_iterator_ != session_->unsynced_handles().end();
++handle_iterator_) {
if (ValidateMetahandleForCommit(*handle_iterator_))
return true;
}
return false;
}
bool Valid() const {
return !(handle_iterator_ == session_->unsynced_handles().end());
}
private:
bool ValidateMetahandleForCommit(int64 metahandle) {
if (commit_set_->HaveCommitItem(metahandle))
return false;
// We should really not WRITE in this iterator, but we can fix that
// later. ValidateCommitEntry writes to the DB, and we add the
// blocked items. We should move that somewhere else later.
syncable::MutableEntry entry(session_->write_transaction(),
syncable::GET_BY_HANDLE, metahandle);
VerifyCommitResult verify_result =
SyncerUtil::ValidateCommitEntry(&entry);
if (verify_result == VERIFY_BLOCKED) {
session_->AddBlockedItem(entry.Get(syncable::ID)); // TODO(chron): Ew.
} else if (verify_result == VERIFY_UNSYNCABLE) {
// drop unsyncable entries.
entry.Put(syncable::IS_UNSYNCED, false);
}
return verify_result == VERIFY_OK;
}
SyncerSession* session_;
vector<int64>::const_iterator handle_iterator_;
OrderedCommitSet* commit_set_;
DISALLOW_COPY_AND_ASSIGN(CommitMetahandleIterator);
};
private:
void AddUncommittedParentsAndTheirPredecessors(
syncable::BaseTransaction* trans,
syncable::Id parent_id);
// OrderedCommitSet helpers for adding predecessors in order.
// TODO(ncarter): Refactor these so that the |result| parameter goes
// away, and AddItem doesn't need to consider two OrderedCommitSets.
bool AddItem(syncable::Entry* item, OrderedCommitSet* result);
bool AddItemThenPredecessors(syncable::BaseTransaction* trans,
syncable::Entry* item,
syncable::IndexedBitField inclusion_filter,
OrderedCommitSet* result);
void AddPredecessorsThenItem(syncable::BaseTransaction* trans,
syncable::Entry* item,
syncable::IndexedBitField inclusion_filter);
bool IsCommitBatchFull();
void AddCreatesAndMoves(SyncerSession *session);
void AddDeletes(SyncerSession *session);
OrderedCommitSet ordered_commit_set_;
int requested_commit_batch_size_;
DISALLOW_COPY_AND_ASSIGN(GetCommitIdsCommand);
};
} // namespace browser_sync
#endif // CHROME_BROWSER_SYNC_ENGINE_GET_COMMIT_IDS_COMMAND_H_
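// A minimal, self-contained sketch of the AppendReverse() pattern used by
// GetCommitIdsCommand: parents and predecessors are collected leaf -> root
// while climbing, then appended in reverse so the final commit order runs
// root -> leaf. ToyCommitSet is an illustrative stand-in for
// OrderedCommitSet, using plain int handles instead of metahandles and ids.
#include <set>
#include <vector>

class ToyCommitSet {
 public:
  bool Have(int handle) const { return inserted_.count(handle) > 0; }
  void Add(int handle) {
    if (!Have(handle)) {
      inserted_.insert(handle);
      order_.push_back(handle);
    }
  }
  void AppendReverse(const ToyCommitSet& other) {
    for (int i = static_cast<int>(other.order_.size()) - 1; i >= 0; --i)
      Add(other.order_[i]);
  }
  const std::vector<int>& order() const { return order_; }
 private:
  std::set<int> inserted_;
  std::vector<int> order_;
};

// Climbing child(3) -> parent(2) -> grandparent(1) collects {3, 2, 1};
// AppendReverse yields commit order {1, 2, 3}, i.e. parents before children.
inline std::vector<int> ToyCommitOrderExample() {
  ToyCommitSet climbed;  // Filled leaf -> root while walking PARENT_ID.
  climbed.Add(3);
  climbed.Add(2);
  climbed.Add(1);
  ToyCommitSet batch;
  batch.AppendReverse(climbed);
  return batch.order();  // {1, 2, 3}
}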
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "chrome/browser/sync/engine/model_changing_syncer_command.h"
#include "chrome/browser/sync/engine/model_safe_worker.h"
#include "chrome/browser/sync/engine/syncer_session.h"
#include "chrome/browser/sync/util/closure.h"
namespace browser_sync {
void ModelChangingSyncerCommand::ExecuteImpl(SyncerSession *session) {
work_session_ = session;
session->model_safe_worker()->DoWorkAndWaitUntilDone(
NewCallback(this, &ModelChangingSyncerCommand::StartChangingModel));
}
} // namespace browser_sync
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef CHROME_BROWSER_SYNC_ENGINE_MODEL_CHANGING_SYNCER_COMMAND_H_
#define CHROME_BROWSER_SYNC_ENGINE_MODEL_CHANGING_SYNCER_COMMAND_H_
#include "chrome/browser/sync/engine/syncer_command.h"
namespace browser_sync {
// An abstract SyncerCommand which dispatches its Execute step to the
// model-safe worker thread. Classes derived from ModelChangingSyncerCommand
// instead of SyncerCommand must implement ModelChangingExecuteImpl instead of
// ExecuteImpl, but otherwise, the contract is the same.
//
// A command should derive from ModelChangingSyncerCommand instead of
// SyncerCommand whenever the operation might change any client-visible
// fields on any syncable::Entry. If the operation involves creating a
// WriteTransaction, this is a sign that ModelChangingSyncerCommand is likely
// necessary.
class ModelChangingSyncerCommand : public SyncerCommand {
public:
ModelChangingSyncerCommand() : work_session_(NULL) { }
virtual ~ModelChangingSyncerCommand() { }
// SyncerCommand implementation. Sets work_session to session.
virtual void ExecuteImpl(SyncerSession* session);
// Wrapper so implementations don't need to worry about storing |work_session_|.
void StartChangingModel() {
ModelChangingExecuteImpl(work_session_);
}
// Abstract method to be implemented by subclasses.
virtual void ModelChangingExecuteImpl(SyncerSession* session) = 0;
private:
// ExecuteImpl is run by SyncerCommand to set |work_session_|, and
// StartChangingModel is then called to start this command running.
// Subclasses implement ModelChangingExecuteImpl and don't need to store or
// set the session themselves; it is handed to them as |work_session_|.
SyncerSession* work_session_;
DISALLOW_COPY_AND_ASSIGN(ModelChangingSyncerCommand);
};
} // namespace browser_sync
#endif // CHROME_BROWSER_SYNC_ENGINE_MODEL_CHANGING_SYNCER_COMMAND_H_
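// A hypothetical sketch of the dispatch pattern described above: Execute()
// hands a pre-bound callback to a worker, which runs it on the model-safe
// thread, and subclasses only override the *ExecuteImpl step. The Toy*
// types stand in for SyncerCommand/SyncerSession/ModelSafeWorker and use
// std::function instead of the Closure/NewCallback machinery.
#include <functional>

struct ToySession {
  int entries_changed = 0;  // The real session exposes far more state.
};

class ToyWorker {
 public:
  virtual ~ToyWorker() {}
  // Default behavior: run the work inline on the calling thread.
  virtual void DoWorkAndWaitUntilDone(const std::function<void()>& work) {
    work();
  }
};

class ToyModelChangingCommand {
 public:
  virtual ~ToyModelChangingCommand() {}
  // Mirrors ExecuteImpl(): stash the session, then let the worker decide
  // which thread actually runs the model-changing step.
  void Execute(ToySession* session, ToyWorker* worker) {
    work_session_ = session;
    worker->DoWorkAndWaitUntilDone(
        [this]() { ModelChangingExecuteImpl(work_session_); });
  }
 protected:
  virtual void ModelChangingExecuteImpl(ToySession* session) = 0;
 private:
  ToySession* work_session_ = nullptr;
};

// Example subclass: the only code that touches the model runs inside
// ModelChangingExecuteImpl, i.e. on whatever thread the worker chooses.
class ToyApplyUpdatesCommand : public ToyModelChangingCommand {
 protected:
  void ModelChangingExecuteImpl(ToySession* session) override {
    ++session->entries_changed;
  }
};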
// Copyright (c) 2006-2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef CHROME_BROWSER_SYNC_ENGINE_MODEL_SAFE_WORKER_H_
#define CHROME_BROWSER_SYNC_ENGINE_MODEL_SAFE_WORKER_H_
#include "chrome/browser/sync/util/closure.h"
#include "chrome/browser/sync/util/sync_types.h"
namespace browser_sync {
// The Syncer uses a ModelSafeWorker for all tasks that could potentially
// modify syncable entries (e.g., under a WriteTransaction). The ModelSafeWorker
// only knows how to do one thing, and that is take some work (in a fully
// pre-bound callback) and have it performed (as in Run()) from a thread which
// is guaranteed to be "model-safe", where "safe" refers to not allowing us to
// cause an embedding application model to fall out of sync with the
// syncable::Directory due to a race.
class ModelSafeWorker {
public:
ModelSafeWorker() { }
virtual ~ModelSafeWorker() { }
// Any time the Syncer performs model modifications (e.g., employing a
// WriteTransaction), it should be done by this method to ensure it is done
// from a model-safe thread.
//
// TODO(timsteele): For now this is non-reentrant, meaning the work being
// done should be at a high enough level in the stack that
// DoWorkAndWaitUntilDone won't be called again by invoking Run() on |work|.
// This is not strictly necessary; it may be best to call
// DoWorkAndWaitUntilDone at lower levels, such as within ApplyUpdates, but
// this is sufficient to simplify and test out our dispatching approach.
virtual void DoWorkAndWaitUntilDone(Closure* work) {
work->Run(); // By default, do the work on the current thread.
}
private:
DISALLOW_COPY_AND_ASSIGN(ModelSafeWorker);
};
} // namespace browser_sync
#endif // CHROME_BROWSER_SYNC_ENGINE_MODEL_SAFE_WORKER_H_
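// A hypothetical sketch of a non-default ModelSafeWorker: it forwards each
// unit of work to a single dedicated "model" thread and blocks the caller
// until that work has run. It is non-reentrant, matching the TODO above.
// This uses the C++ standard library for brevity; the real implementation
// would use Chromium's own threading primitives and Closure type.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class ToyModelThreadWorker {
 public:
  ToyModelThreadWorker() : thread_([this]() { Loop(); }) {}
  ~ToyModelThreadWorker() {
    {
      std::lock_guard<std::mutex> lock(mutex_);
      stop_ = true;
    }
    cv_.notify_all();
    thread_.join();
  }

  // Runs |work| on the model thread and waits for it to finish, mirroring
  // DoWorkAndWaitUntilDone() above. Must not be called from the model
  // thread itself, or it will deadlock.
  void DoWorkAndWaitUntilDone(const std::function<void()>& work) {
    std::unique_lock<std::mutex> lock(mutex_);
    bool done = false;
    queue_.push([&]() {
      work();
      {
        std::lock_guard<std::mutex> inner(mutex_);
        done = true;
      }
      cv_.notify_all();
    });
    cv_.notify_all();
    cv_.wait(lock, [&]() { return done; });
  }

 private:
  // Model thread: drain queued work items until asked to stop.
  void Loop() {
    std::unique_lock<std::mutex> lock(mutex_);
    while (true) {
      cv_.wait(lock, [&]() { return stop_ || !queue_.empty(); });
      if (stop_ && queue_.empty())
        return;
      std::function<void()> task = queue_.front();
      queue_.pop();
      lock.unlock();
      task();  // Run outside the lock; the task re-locks to signal done.
      lock.lock();
    }
  }

  std::mutex mutex_;
  std::condition_variable cv_;
  std::queue<std::function<void()>> queue_;
  bool stop_ = false;
  std::thread thread_;  // Declared last so the members above outlive it.
};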
This diff is collapsed.
// Copyright (c) 2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "chrome/browser/sync/engine/net/gaia_authenticator.h"
#include <string>
#include "chrome/browser/sync/engine/net/http_return.h"
#include "chrome/browser/sync/util/sync_types.h"
#include "googleurl/src/gurl.h"
#include "testing/gtest/include/gtest/gtest.h"
using std::string;
namespace browser_sync {
class GaiaAuthenticatorTest : public testing::Test { };
class GaiaAuthMock : public GaiaAuthenticator {
public:
GaiaAuthMock() : GaiaAuthenticator("useragent",
"serviceid",
"http://gaia_url") {}
~GaiaAuthMock() {}
protected:
bool Post(const GURL& url, const string& post_body,
unsigned long* response_code, string* response_body) {
*response_code = RC_REQUEST_OK;
response_body->assign("body\n");
return true;
}
};
TEST(GaiaAuthenticatorTest, TestNewlineAtEndOfAuthTokenRemoved) {
GaiaAuthMock mock_auth;
GaiaAuthenticator::AuthResults results;
EXPECT_TRUE(mock_auth.IssueAuthToken(&results, "sid", true));
EXPECT_EQ(0, results.auth_token.compare("body"));
}
} // namespace browser_sync
This diff is collapsed.
// Copyright (c) 2009 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
//
// Time functions
#include "chrome/browser/sync/notifier/base/posix/time_posix.cc"
This diff is collapsed.