430914310143111431214313143141431514316143171431814319143201432114322143231432414325143261432714328143291433014331143321433314334143351433614337143381433914340143411434214343143441434514346143471434814349143501435114352143531435414355143561435714358143591436014361143621436314364143651436614367143681436914370143711437214373143741437514376143771437814379143801438114382143831438414385143861438714388143891439014391143921439314394143951439614397143981439914400144011440214403144041440514406144071440814409144101441114412144131441414415144161441714418144191442014421144221442314424144251442614427144281442914430144311443214433144341443514436144371443814439144401444114442144431444414445144461444714448144491445014451144521445314454144551445614457144581445914460144611446214463144641446514466144671446814469144701447114472144731447414475144761447714478144791448014481144821448314484144851448614487144881448914490144911449214493144941449514496144971449814499145001450114502145031450414505145061450714508145091451014511145121451314514145151451614517145181451914520145211452214523145241452514526145271452814529145301453114532145331453414535145361453714538145391454014541145421454314544145451454614547145481454914550145511455214553145541455514556145571455814559145601456114562145631456414565145661456714568145691457014571145721457314574145751457614577145781457914580145811458214583145841458514586145871458814589145901459114592145931459414595145961459714598145991460014601146021460314604146051460614607146081460914610146111461214613146141461514616146171461814619146201462114622146231462414625146261462714628146291463014631146321463314634146351463614637146381463914640146411464214643146441464514646146471464814649146501465114652146531465414655146561465714658146591466014661146621466314664146651466614667146681466914670146711467214673146741467514676146771467814679146801468114682146831468414685146861468714688146891469014691146921469314694146951469614697146981469914700147011470214703147041470514706147071470814709147101471114712147131471414715147161471714718147191472014721147221472314724147251472614727147281472914730147311473214733147341473514736147371473814739147401474114742147431474414745147461474714748147491475014751147521475314754147551475614757147581475914760147611476214763147641476514766147671476814769147701477114772147731477414775147761477714778147791478014781147821478314784147851478614787147881478914790147911479214793147941479514796147971479814799148001480114802148031480414805148061480714808148091481014811148121481314814148151481614817148181481914820148211482214823148241482514826148271482814829148301483114832148331483414835148361483714838148391484014841148421484314844148451484614847148481484914850148511485214853148541485514856148571485814859148601486114862148631486414865148661486714868148691487014871148721487314874148751487614877148781487914880148811488214883148841488514886148871488814889148901489114892148931489414895148961489714898148991490014901149021490314904149051490614907149081490914910149111491214913149141491514916149171491814919149201492114922149231492414925149261492714928149291493014931149321493314934149351493614937149381493914940149411494214943149441494514946149471494814949149501495114952149531495414955149561495714958149591496014961149621496314964149651496614967149681496914970149711497214973149741497514976149771497814979149801498114982149831498414985149861498714988149891499014991149921499314994149951499614997149981499915000150011500215003150041500515006150071500815009150101501115012150131501415015150161501715018150191
502015021150221502315024150251502615027150281502915030150311503215033150341503515036150371503815039150401504115042150431504415045150461504715048150491505015051150521505315054150551505615057150581505915060150611506215063150641506515066150671506815069150701507115072150731507415075150761507715078150791508015081150821508315084150851508615087150881508915090150911509215093150941509515096150971509815099151001510115102151031510415105151061510715108151091511015111151121511315114151151511615117151181511915120151211512215123151241512515126151271512815129151301513115132151331513415135151361513715138151391514015141151421514315144151451514615147151481514915150151511515215153151541515515156151571515815159151601516115162151631516415165151661516715168151691517015171151721517315174151751517615177151781517915180151811518215183151841518515186151871518815189151901519115192151931519415195151961519715198151991520015201152021520315204152051520615207152081520915210152111521215213152141521515216152171521815219152201522115222152231522415225152261522715228152291523015231152321523315234152351523615237152381523915240152411524215243152441524515246152471524815249152501525115252152531525415255152561525715258152591526015261152621526315264152651526615267152681526915270152711527215273152741527515276152771527815279152801528115282152831528415285152861528715288152891529015291152921529315294152951529615297152981529915300153011530215303153041530515306153071530815309153101531115312153131531415315153161531715318153191532015321153221532315324153251532615327153281532915330153311533215333153341533515336153371533815339153401534115342153431534415345153461534715348153491535015351153521535315354153551535615357153581535915360153611536215363153641536515366153671536815369153701537115372153731537415375153761537715378153791538015381153821538315384153851538615387153881538915390153911539215393153941539515396153971539815399154001540115402154031540415405154061540715408154091541015411154121541315414154151541615417154181541915420154211542215423154241542515426154271542815429154301543115432154331543415435154361543715438154391544015441154421544315444154451544615447154481544915450154511545215453154541545515456154571545815459154601546115462154631546415465154661546715468154691547015471154721547315474154751547615477154781547915480154811548215483154841548515486154871548815489154901549115492154931549415495154961549715498154991550015501155021550315504155051550615507155081550915510155111551215513155141551515516155171551815519155201552115522155231552415525155261552715528155291553015531155321553315534155351553615537155381553915540155411554215543155441554515546155471554815549155501555115552155531555415555155561555715558155591556015561155621556315564155651556615567155681556915570155711557215573155741557515576155771557815579155801558115582155831558415585155861558715588155891559015591155921559315594155951559615597155981559915600156011560215603156041560515606156071560815609156101561115612156131561415615156161561715618156191562015621156221562315624156251562615627156281562915630156311563215633156341563515636156371563815639156401564115642156431564415645156461564715648156491565015651156521565315654156551565615657156581565915660156611566215663156641566515666156671566815669156701567115672156731567415675156761567715678156791568015681156821568315684156851568615687156881568915690156911569215693156941569515696156971569815699157001570115702157031570415705157061570715708157091571015711157121571315714157151571615717157181571915720157211572215723157241572515726157271572815729157301
573115732157331573415735157361573715738157391574015741157421574315744157451574615747157481574915750157511575215753157541575515756157571575815759157601576115762157631576415765157661576715768157691577015771157721577315774157751577615777157781577915780157811578215783157841578515786157871578815789157901579115792157931579415795157961579715798157991580015801158021580315804158051580615807158081580915810158111581215813158141581515816158171581815819158201582115822158231582415825158261582715828158291583015831158321583315834158351583615837158381583915840158411584215843158441584515846158471584815849158501585115852158531585415855158561585715858158591586015861158621586315864158651586615867158681586915870158711587215873158741587515876158771587815879158801588115882158831588415885158861588715888158891589015891158921589315894158951589615897158981589915900159011590215903159041590515906159071590815909159101591115912159131591415915159161591715918159191592015921159221592315924159251592615927159281592915930159311593215933159341593515936159371593815939159401594115942159431594415945159461594715948159491595015951159521595315954159551595615957159581595915960159611596215963159641596515966159671596815969159701597115972159731597415975159761597715978159791598015981159821598315984159851598615987159881598915990159911599215993159941599515996159971599815999160001600116002160031600416005160061600716008160091601016011160121601316014160151601616017160181601916020160211602216023160241602516026160271602816029160301603116032160331603416035160361603716038160391604016041160421604316044160451604616047160481604916050160511605216053160541605516056160571605816059160601606116062160631606416065160661606716068160691607016071160721607316074160751607616077160781607916080160811608216083160841608516086160871608816089160901609116092160931609416095160961609716098160991610016101161021610316104161051610616107161081610916110161111611216113161141611516116161171611816119161201612116122161231612416125161261612716128161291613016131161321613316134161351613616137161381613916140161411614216143161441614516146161471614816149161501615116152161531615416155 |
- rclone(1) User Manual
- Nick Craig-Wood
- Oct 15, 2018
- RCLONE
- [Logo]
- Rclone is a command line program to sync files and directories to and
- from:
- - Amazon Drive (See note)
- - Amazon S3
- - Backblaze B2
- - Box
- - Ceph
- - DigitalOcean Spaces
- - Dreamhost
- - Dropbox
- - FTP
- - Google Cloud Storage
- - Google Drive
- - HTTP
- - Hubic
- - Jottacloud
- - IBM COS S3
- - Memset Memstore
- - Mega
- - Microsoft Azure Blob Storage
- - Microsoft OneDrive
- - Minio
- - Nextcloud
- - OVH
- - OpenDrive
- - Openstack Swift
- - Oracle Cloud Storage
- - ownCloud
- - pCloud
- - put.io
- - QingStor
- - Rackspace Cloud Files
- - SFTP
- - Wasabi
- - WebDAV
- - Yandex Disk
- - The local filesystem
- Features
- - MD5/SHA1 hashes checked at all times for file integrity
- - Timestamps preserved on files
- - Partial syncs supported on a whole file basis
- - Copy mode to just copy new/changed files
- - Sync (one way) mode to make a directory identical
- - Check mode to check for file hash equality
- - Can sync to and from network, eg two different cloud accounts
- - Encryption backend
- - Cache backend
- - Union backend
- - Optional FUSE mount (rclone mount)
- Links
- - Home page
- - GitHub project page for source and bug tracker
- - Rclone Forum
- - Google+ page
- - Downloads
- INSTALL
- Rclone is a Go program and comes as a single binary file.
- Quickstart
- - Download the relevant binary.
- - Extract the rclone or rclone.exe binary from the archive
- - Run rclone config to setup. See rclone config docs for more details.
- See below for some expanded Linux / macOS instructions.
- See the Usage section of the docs for how to use rclone, or run
- rclone -h.
- Script installation
- To install rclone on Linux/macOS/BSD systems, run:
- curl https://rclone.org/install.sh | sudo bash
- For beta installation, run:
- curl https://rclone.org/install.sh | sudo bash -s beta
- Note that this script checks the version of rclone installed first and
- won't re-download if not needed.
- Linux installation from precompiled binary
- Fetch and unpack
- curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
- unzip rclone-current-linux-amd64.zip
- cd rclone-*-linux-amd64
- Copy binary file
- sudo cp rclone /usr/bin/
- sudo chown root:root /usr/bin/rclone
- sudo chmod 755 /usr/bin/rclone
- Install manpage
- sudo mkdir -p /usr/local/share/man/man1
- sudo cp rclone.1 /usr/local/share/man/man1/
- sudo mandb
- Run rclone config to setup. See rclone config docs for more details.
- rclone config
- macOS installation from precompiled binary
- Download the latest version of rclone.
- cd && curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip
- Unzip the download and cd to the extracted folder.
- unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64
- Move rclone to your $PATH. You will be prompted for your password.
- sudo mkdir -p /usr/local/bin
- sudo mv rclone /usr/local/bin/
- (the mkdir command is safe to run, even if the directory already
- exists).
- Remove the leftover files.
- cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip
- Run rclone config to setup. See rclone config docs for more details.
- rclone config
- Install from source
- Make sure you have at least Go 1.7 installed. Download go if necessary.
- The latest release is recommended. Then
- git clone https://github.com/ncw/rclone.git
- cd rclone
- go build
- ./rclone version
- You can also build and install rclone in the GOPATH (which defaults to
- ~/go) with:
- go get -u -v github.com/ncw/rclone
- and this will build the binary in $GOPATH/bin (~/go/bin/rclone by
- default) after downloading the source to
- $GOPATH/src/github.com/ncw/rclone (~/go/src/github.com/ncw/rclone by
- default).
- Installation with Ansible
- This can be done with Stefan Weichinger's ansible role.
- Instructions
- 1. git clone https://github.com/stefangweichinger/ansible-rclone.git
- into your local roles-directory
- 2. add the role to the hosts you want rclone installed to:
-
- - hosts: rclone-hosts
-   roles:
-     - rclone
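- With the role in place, a minimal run might look like this (the
- inventory and playbook names are illustrative, not part of the role):
- ansible-playbook -i inventory site.yml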
- Configure
- First, you'll need to configure rclone. As the object storage systems
- have quite complicated authentication these are kept in a config file.
- (See the --config entry for how to find the config file and choose its
- location.)
- The easiest way to make the config is to run rclone with the config
- option:
- rclone config
- See the following for detailed instructions for
- - Alias
- - Amazon Drive
- - Amazon S3
- - Backblaze B2
- - Box
- - Cache
- - Crypt - to encrypt other remotes
- - DigitalOcean Spaces
- - Dropbox
- - FTP
- - Google Cloud Storage
- - Google Drive
- - HTTP
- - Hubic
- - Jottacloud
- - Mega
- - Microsoft Azure Blob Storage
- - Microsoft OneDrive
- - Openstack Swift / Rackspace Cloudfiles / Memset Memstore
- - OpenDrive
- - Pcloud
- - QingStor
- - SFTP
- - Union
- - WebDAV
- - Yandex Disk
- - The local filesystem
- Usage
- Rclone syncs a directory tree from one storage system to another.
- Its syntax is like this
- Syntax: [options] subcommand <parameters> <parameters...>
- Source and destination paths are specified by the name you gave the
- storage system in the config file then the sub path, eg "drive:myfolder"
- to look at "myfolder" in Google drive.
- You can define as many storage paths as you like in the config file.
- Subcommands
- rclone uses a system of subcommands. For example
- rclone ls remote:path # lists a remote
- rclone copy /local/path remote:path # copies /local/path to the remote
- rclone sync /local/path remote:path # syncs /local/path to the remote
- rclone config
- Enter an interactive configuration session.
- Synopsis
- Enter an interactive configuration session where you can setup new
- remotes and manage existing ones. You may also set or remove a password
- to protect your configuration.
- rclone config [flags]
- Options
- -h, --help help for config
- rclone copy
- Copy files from source to dest, skipping already copied
- Synopsis
- Copy the source to the destination. Doesn't transfer unchanged files,
- testing by size and modification time or MD5SUM. Doesn't delete files
- from the destination.
- Note that it is always the contents of the directory that is synced, not
- the directory so when source:path is a directory, it's the contents of
- source:path that are copied, not the directory name and contents.
- If dest:path doesn't exist, it is created and the source:path contents
- go there.
- For example
- rclone copy source:sourcepath dest:destpath
- Let's say there are two files in sourcepath
- sourcepath/one.txt
- sourcepath/two.txt
- This copies them to
- destpath/one.txt
- destpath/two.txt
- Not to
- destpath/sourcepath/one.txt
- destpath/sourcepath/two.txt
- If you are familiar with rsync, rclone always works as if you had
- written a trailing / - meaning "copy the contents of this directory".
- This applies to all commands and whether you are talking about the
- source or destination.
- rclone copy source:path dest:path [flags]
- Options
- -h, --help help for copy
- rclone sync
- Make source and dest identical, modifying destination only.
- Synopsis
- Sync the source to the destination, changing the destination only.
- Doesn't transfer unchanged files, testing by size and modification time
- or MD5SUM. Destination is updated to match source, including deleting
- files if necessary.
- IMPORTANT: Since this can cause data loss, test first with the --dry-run
- flag to see exactly what would be copied and deleted.
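- For example, a cautious workflow might look like this (paths
- illustrative):
- rclone --dry-run sync /local/path remote:path # preview what would change
- rclone sync /local/path remote:path # then run for real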
- Note that files in the destination won't be deleted if there were any
- errors at any point.
- It is always the contents of the directory that is synced, not the
- directory so when source:path is a directory, it's the contents of
- source:path that are copied, not the directory name and contents. See
- extended explanation in the copy command above if unsure.
- If dest:path doesn't exist, it is created and the source:path contents
- go there.
- rclone sync source:path dest:path [flags]
- Options
- -h, --help help for sync
- rclone move
- Move files from source to dest.
- Synopsis
- Moves the contents of the source directory to the destination directory.
- Rclone will error if the source and destination overlap and the remote
- does not support a server side directory move operation.
- If no filters are in use and if possible this will server side move
- source:path into dest:path. After this source:path will no longer
- exist.
- Otherwise for each file in source:path selected by the filters (if any)
- this will move it into dest:path. If possible a server side move will be
- used, otherwise it will copy it (server side if possible) into dest:path
- then delete the original (if no errors on copy) in source:path.
- If you want to delete empty source directories after move, use the
- --delete-empty-src-dirs flag.
- IMPORTANT: Since this can cause data loss, test first with the --dry-run
- flag.
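- For example, to move files and prune the now-empty source directories
- (paths illustrative), preview first and then run:
- rclone --dry-run move --delete-empty-src-dirs /local/src remote:archive
- rclone move --delete-empty-src-dirs /local/src remote:archive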
- rclone move source:path dest:path [flags]
- Options
- --delete-empty-src-dirs Delete empty source dirs after move
- -h, --help help for move
- rclone delete
- Remove the contents of path.
- Synopsis
- Remove the contents of path. Unlike purge it obeys include/exclude
- filters so can be used to selectively delete files.
- Eg delete all files bigger than 100MBytes
- Check what would be deleted first (use either)
- rclone --min-size 100M lsl remote:path
- rclone --dry-run --min-size 100M delete remote:path
- Then delete
- rclone --min-size 100M delete remote:path
- That reads "delete everything with a minimum size of 100 MB", hence
- delete all files bigger than 100MBytes.
- rclone delete remote:path [flags]
- Options
- -h, --help help for delete
- rclone purge
- Remove the path and all of its contents.
- Synopsis
- Remove the path and all of its contents. Note that this does not obey
- include/exclude filters - everything will be removed. Use delete if you
- want to selectively delete files.
- rclone purge remote:path [flags]
- Options
- -h, --help help for purge
- rclone mkdir
- Make the path if it doesn't already exist.
- Synopsis
- Make the path if it doesn't already exist.
- rclone mkdir remote:path [flags]
- Options
- -h, --help help for mkdir
- rclone rmdir
- Remove the path if empty.
- Synopsis
- Remove the path. Note that you can't remove a path with objects in it,
- use purge for that.
- rclone rmdir remote:path [flags]
- Options
- -h, --help help for rmdir
- rclone check
- Checks the files in the source and destination match.
- Synopsis
- Checks the files in the source and destination match. It compares sizes
- and hashes (MD5 or SHA1) and logs a report of files which don't match.
- It doesn't alter the source or destination.
- If you supply the --size-only flag, it will only compare the sizes not
- the hashes as well. Use this for a quick check.
- If you supply the --download flag, it will download the data from both
- remotes and check them against each other on the fly. This can be useful
- for remotes that don't support hashes or if you really want to check all
- the data.
- If you supply the --one-way flag, it will only check that files in
- source match the files in destination, not the other way around. Meaning
- extra files in destination that are not in the source will not trigger
- an error.
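- For example (paths illustrative):
- rclone check --size-only source:path dest:path # quick size-only check
- rclone check --one-way source:path dest:path # ignore extra files in dest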
- rclone check source:path dest:path [flags]
- Options
- --download Check by downloading rather than with hash.
- -h, --help help for check
- --one-way Check one way only, source files must exist on remote
- rclone ls
- List the objects in the path with size and path.
- Synopsis
- Lists the objects in the source path to standard output in a human
- readable format with size and path. Recurses by default.
- Eg
- $ rclone ls swift:bucket
- 60295 bevajer5jef
- 90613 canole
- 94467 diwogej7
- 37600 fubuwic
- Any of the filtering options can be applied to this command.
- There are several related list commands
- - ls to list size and path of objects only
- - lsl to list modification time, size and path of objects only
- - lsd to list directories only
- - lsf to list objects and directories in easy to parse format
- - lsjson to list objects and directories in JSON format
- ls,lsl,lsd are designed to be human readable. lsf is designed to be
- human and machine readable. lsjson is designed to be machine readable.
- Note that ls and lsl recurse by default - use "--max-depth 1" to stop
- the recursion.
- The other list commands lsd,lsf,lsjson do not recurse by default - use
- "-R" to make them recurse.
- Listing a non existent directory will produce an error except for
- remotes which can't have empty directories (eg s3, swift, gcs, etc - the
- bucket based remotes).
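- For instance, to list just the top level of a remote without
- recursing (path illustrative):
- rclone ls --max-depth 1 remote:path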
- rclone ls remote:path [flags]
- Options
- -h, --help help for ls
- rclone lsd
- List all directories/containers/buckets in the path.
- Synopsis
- Lists the directories in the source path to standard output. Does not
- recurse by default. Use the -R flag to recurse.
- This command lists the total size of the directory (if known, -1 if
- not), the modification time (if known, the current time if not), the
- number of objects in the directory (if known, -1 if not) and the name of
- the directory, Eg
- $ rclone lsd swift:
- 494000 2018-04-26 08:43:20 10000 10000files
- 65 2018-04-26 08:43:20 1 1File
- Or
- $ rclone lsd drive:test
- -1 2016-10-17 17:41:53 -1 1000files
- -1 2017-01-03 14:40:54 -1 2500files
- -1 2017-07-08 14:39:28 -1 4000files
- If you just want the directory names use "rclone lsf --dirs-only".
- Any of the filtering options can be applied to this command.
- There are several related list commands
- - ls to list size and path of objects only
- - lsl to list modification time, size and path of objects only
- - lsd to list directories only
- - lsf to list objects and directories in easy to parse format
- - lsjson to list objects and directories in JSON format
- ls,lsl,lsd are designed to be human readable. lsf is designed to be
- human and machine readable. lsjson is designed to be machine readable.
- Note that ls and lsl recurse by default - use "--max-depth 1" to stop
- the recursion.
- The other list commands lsd,lsf,lsjson do not recurse by default - use
- "-R" to make them recurse.
- Listing a non existent directory will produce an error except for
- remotes which can't have empty directories (eg s3, swift, gcs, etc - the
- bucket based remotes).
- rclone lsd remote:path [flags]
- Options
- -h, --help help for lsd
- -R, --recursive Recurse into the listing.
- rclone lsl
- List the objects in path with modification time, size and path.
- Synopsis
- Lists the objects in the source path to standard output in a human
- readable format with modification time, size and path. Recurses by
- default.
- Eg
- $ rclone lsl swift:bucket
- 60295 2016-06-25 18:55:41.062626927 bevajer5jef
- 90613 2016-06-25 18:55:43.302607074 canole
- 94467 2016-06-25 18:55:43.046609333 diwogej7
- 37600 2016-06-25 18:55:40.814629136 fubuwic
- Any of the filtering options can be applied to this command.
- There are several related list commands
- - ls to list size and path of objects only
- - lsl to list modification time, size and path of objects only
- - lsd to list directories only
- - lsf to list objects and directories in easy to parse format
- - lsjson to list objects and directories in JSON format
- ls,lsl,lsd are designed to be human readable. lsf is designed to be
- human and machine readable. lsjson is designed to be machine readable.
- Note that ls and lsl recurse by default - use "--max-depth 1" to stop
- the recursion.
- The other list commands lsd,lsf,lsjson do not recurse by default - use
- "-R" to make them recurse.
- Listing a non existent directory will produce an error except for
- remotes which can't have empty directories (eg s3, swift, gcs, etc - the
- bucket based remotes).
- rclone lsl remote:path [flags]
- Options
- -h, --help help for lsl
- rclone md5sum
- Produces an md5sum file for all the objects in the path.
- Synopsis
- Produces an md5sum file for all the objects in the path. This is in the
- same format as the standard md5sum tool produces.
- rclone md5sum remote:path [flags]
- Options
- -h, --help help for md5sum
- rclone sha1sum
- Produces an sha1sum file for all the objects in the path.
- Synopsis
- Produces an sha1sum file for all the objects in the path. This is in the
- same format as the standard sha1sum tool produces.
- rclone sha1sum remote:path [flags]
- Options
- -h, --help help for sha1sum
- rclone size
- Prints the total size and number of objects in remote:path.
- Synopsis
- Prints the total size and number of objects in remote:path.
- rclone size remote:path [flags]
- Options
- -h, --help help for size
- --json format output as JSON
- rclone version
- Show the version number.
- Synopsis
- Show the version number, the go version and the architecture.
- Eg
- $ rclone version
- rclone v1.41
- - os/arch: linux/amd64
- - go version: go1.10
- If you supply the --check flag, then it will do an online check to
- compare your version with the latest release and the latest beta.
- $ rclone version --check
- yours: 1.42.0.6
- latest: 1.42 (released 2018-06-16)
- beta: 1.42.0.5 (released 2018-06-17)
- Or
- $ rclone version --check
- yours: 1.41
- latest: 1.42 (released 2018-06-16)
- upgrade: https://downloads.rclone.org/v1.42
- beta: 1.42.0.5 (released 2018-06-17)
- upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
- rclone version [flags]
- Options
- --check Check for new version.
- -h, --help help for version
- rclone cleanup
- Clean up the remote if possible
- Synopsis
- Clean up the remote if possible. Empty the trash or delete old file
- versions. Not supported by all remotes.
- rclone cleanup remote:path [flags]
- Options
- -h, --help help for cleanup
- rclone dedupe
- Interactively find duplicate files and delete/rename them.
- Synopsis
- By default dedupe interactively finds duplicate files and offers to
- delete all but one or rename them to be different. Only useful with
- Google Drive which can have duplicate file names.
- In the first pass it will merge directories with the same name. It will
- do this iteratively until all the identical directories have been
- merged.
- The dedupe command will delete all but one of any identical (same
- md5sum) files it finds without confirmation. This means that for most
- duplicated files the dedupe command will not be interactive. You can use
- --dry-run to see what would happen without doing anything.
- Here is an example run.
- Before - with duplicates
- $ rclone lsl drive:dupes
- 6048320 2016-03-05 16:23:16.798000000 one.txt
- 6048320 2016-03-05 16:23:11.775000000 one.txt
- 564374 2016-03-05 16:23:06.731000000 one.txt
- 6048320 2016-03-05 16:18:26.092000000 one.txt
- 6048320 2016-03-05 16:22:46.185000000 two.txt
- 1744073 2016-03-05 16:22:38.104000000 two.txt
- 564374 2016-03-05 16:22:52.118000000 two.txt
- Now the dedupe session
- $ rclone dedupe drive:dupes
- 2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
- one.txt: Found 4 duplicates - deleting identical copies
- one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
- one.txt: 2 duplicates remain
- 1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
- 2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
- s) Skip and do nothing
- k) Keep just one (choose which in next step)
- r) Rename all to be different (by changing file.jpg to file-1.jpg)
- s/k/r> k
- Enter the number of the file to keep> 1
- one.txt: Deleted 1 extra copies
- two.txt: Found 3 duplicates - deleting identical copies
- two.txt: 3 duplicates remain
- 1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
- 2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
- 3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
- s) Skip and do nothing
- k) Keep just one (choose which in next step)
- r) Rename all to be different (by changing file.jpg to file-1.jpg)
- s/k/r> r
- two-1.txt: renamed from: two.txt
- two-2.txt: renamed from: two.txt
- two-3.txt: renamed from: two.txt
- The result being
- $ rclone lsl drive:dupes
- 6048320 2016-03-05 16:23:16.798000000 one.txt
- 564374 2016-03-05 16:22:52.118000000 two-1.txt
- 6048320 2016-03-05 16:22:46.185000000 two-2.txt
- 1744073 2016-03-05 16:22:38.104000000 two-3.txt
- Dedupe can be run non interactively using the --dedupe-mode flag or by
- using an extra parameter with the same value
- - --dedupe-mode interactive - interactive as above.
- - --dedupe-mode skip - removes identical files then skips anything
- left.
- - --dedupe-mode first - removes identical files then keeps the first
- one.
- - --dedupe-mode newest - removes identical files then keeps the newest
- one.
- - --dedupe-mode oldest - removes identical files then keeps the oldest
- one.
- - --dedupe-mode largest - removes identical files then keeps the
- largest one.
- - --dedupe-mode rename - removes identical files then renames the rest
- to be different.
- For example to rename all the identically named photos in your Google
- Photos directory, do
- rclone dedupe --dedupe-mode rename "drive:Google Photos"
- Or
- rclone dedupe rename "drive:Google Photos"
- rclone dedupe [mode] remote:path [flags]
- Options
- --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
- -h, --help help for dedupe
- rclone about
- Get quota information from the remote.
- Synopsis
- Get quota information from the remote, like bytes used/free/quota and
- bytes used in the trash. Not supported by all remotes.
- This will print to stdout something like this:
- Total: 17G
- Used: 7.444G
- Free: 1.315G
- Trashed: 100.000M
- Other: 8.241G
- Where the fields are:
- - Total: total size available.
- - Used: total size used
- - Free: total amount this user could upload.
- - Trashed: total amount in the trash
- - Other: total amount in other storage (eg Gmail, Google Photos)
- - Objects: total number of objects in the storage
- Note that not all the backends provide all the fields - they will be
- missing if they are not known for that backend. Where it is known that
- the value is unlimited the value will also be omitted.
- Use the --full flag to see the numbers written out in full, eg
- Total: 18253611008
- Used: 7993453766
- Free: 1411001220
- Trashed: 104857602
- Other: 8849156022
- Use the --json flag for a computer readable output, eg
- {
- "total": 18253611008,
- "used": 7993453766,
- "trashed": 104857602,
- "other": 8849156022,
- "free": 1411001220
- }
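- Since the output is valid JSON it can be piped into other tools; a
- sketch using jq (jq itself is an assumption, not an rclone
- dependency):
- rclone about remote: --json | jq .used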
- rclone about remote: [flags]
- Options
- --full Full numbers instead of SI units
- -h, --help help for about
- --json Format output as JSON
- rclone authorize
- Remote authorization.
- Synopsis
- Remote authorization. Used to authorize a remote or headless rclone from
- a machine with a browser - use as instructed by rclone config.
- rclone authorize [flags]
- Options
- -h, --help help for authorize
- rclone cachestats
- Print cache stats for a remote
- Synopsis
- Print cache stats for a remote in JSON format
- rclone cachestats source: [flags]
- Options
- -h, --help help for cachestats
- rclone cat
- Concatenates any files and sends them to stdout.
- Synopsis
- rclone cat sends any files to standard output.
- You can use it like this to output a single file
- rclone cat remote:path/to/file
- Or like this to output any file in dir or subdirectories.
- rclone cat remote:path/to/dir
- Or like this to output any .txt files in dir or subdirectories.
- rclone --include "*.txt" cat remote:path/to/dir
- Use the --head flag to print characters only at the start, --tail for
- the end and --offset and --count to print a section in the middle. Note
- that if offset is negative it will count from the end, so --offset -1
- --count 1 is equivalent to --tail 1.
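- For example (path illustrative):
- rclone cat --head 100 remote:path/to/file # first 100 characters
- rclone cat --offset -5 --count 5 remote:path/to/file # same as --tail 5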
- rclone cat remote:path [flags]
- Options
- --count int Only print N characters. (default -1)
- --discard Discard the output instead of printing.
- --head int Only print the first N characters.
- -h, --help help for cat
- --offset int Start printing at offset N (or from end if -ve).
- --tail int Only print the last N characters.
- rclone config create
- Create a new remote with name, type and options.
- Synopsis
- Create a new remote of <name> with <type> and options. The options
- should be passed in pairs of <key> <value>.
- For example to make a swift remote of name myremote using auto config
- you would do:
- rclone config create myremote swift env_auth true
- rclone config create <name> <type> [<key> <value>]* [flags]
- Options
- -h, --help help for create
- rclone config delete
- Delete an existing remote <name>.
- Synopsis
- Delete an existing remote <name>.
- rclone config delete <name> [flags]
- Options
- -h, --help help for delete
- rclone config dump
- Dump the config file as JSON.
- Synopsis
- Dump the config file as JSON.
- rclone config dump [flags]
- Options
- -h, --help help for dump
- rclone config edit
- Enter an interactive configuration session.
- Synopsis
- Enter an interactive configuration session where you can setup new
- remotes and manage existing ones. You may also set or remove a password
- to protect your configuration.
- rclone config edit [flags]
- Options
- -h, --help help for edit
- rclone config file
- Show path of configuration file in use.
- Synopsis
- Show path of configuration file in use.
- rclone config file [flags]
- Options
- -h, --help help for file
- rclone config password
- Update password in an existing remote.
- Synopsis
- Update an existing remote's password. The password should be passed
- in pairs of <key> <password>.
- For example to set the password of a remote named myremote you would do:
- rclone config password myremote fieldname mypassword
- rclone config password <name> [<key> <value>]+ [flags]
- Options
- -h, --help help for password
- rclone config providers
- List in JSON format all the providers and options.
- Synopsis
- List in JSON format all the providers and options.
- rclone config providers [flags]
- Options
- -h, --help help for providers
- rclone config show
- Print (decrypted) config file, or the config for a single remote.
- Synopsis
- Print (decrypted) config file, or the config for a single remote.
- rclone config show [<remote>] [flags]
- Options
- -h, --help help for show
- rclone config update
- Update options in an existing remote.
- Synopsis
- Update an existing remote's options. The options should be passed in
- pairs of <key> <value>.
- For example to update the env_auth field of a remote named myremote
- you would do:
- rclone config update myremote swift env_auth true
- rclone config update <name> [<key> <value>]+ [flags]
- Options
- -h, --help help for update
- rclone copyto
- Copy files from source to dest, skipping already copied
- Synopsis
- If source:path is a file or directory then it copies it to a file or
- directory named dest:path.
- This can be used to upload single files to other than their current
- name. If the source is a directory then it acts exactly like the copy
- command.
- So
- rclone copyto src dst
- where src and dst are rclone paths, either remote:path or /path/to/local
- or C:\windows\path\if\on\windows.
- This will:
- if src is file
- copy it to dst, overwriting an existing file if it exists
- if src is directory
- copy it to dst, overwriting existing files if they exist
- see copy command for full details
- This doesn't transfer unchanged files, testing by size and modification
- time or MD5SUM. It doesn't delete files from the destination.
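- For example, to upload a single file under a different name (names
- illustrative):
- rclone copyto /local/report.txt remote:backup/report-latest.txt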
- rclone copyto source:path dest:path [flags]
- Options
- -h, --help help for copyto
- rclone copyurl
- Copy url content to dest.
- Synopsis
- Download a URL's content and copy it to the destination without
- saving it in temporary storage.
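- For example (URL and destination path illustrative):
- rclone copyurl https://example.com/file.zip remote:downloads/file.zip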
- rclone copyurl https://example.com dest:path [flags]
- Options
- -h, --help help for copyurl
- rclone cryptcheck
- Cryptcheck checks the integrity of a crypted remote.
- Synopsis
- rclone cryptcheck checks a remote against a crypted remote. This is the
- equivalent of running rclone check, but able to check the checksums of
- the crypted remote.
- For it to work the underlying remote of the cryptedremote must support
- some kind of checksum.
- It works by reading the nonce from each file on the cryptedremote: and
- using that to encrypt each file on the remote:. It then checks the
- checksum of the underlying file on the cryptedremote: against the
- checksum of the file it has just encrypted.
- Use it like this
- rclone cryptcheck /path/to/files encryptedremote:path
- You can use it like this also, but that will involve downloading all the
- files in remote:path.
- rclone cryptcheck remote:path encryptedremote:path
- After it has run it will log the status of the encryptedremote:.
- If you supply the --one-way flag, it will only check that files in
- source match the files in destination, not the other way around. Meaning
- extra files in destination that are not in the source will not trigger
- an error.
- rclone cryptcheck remote:path cryptedremote:path [flags]
- Options
- -h, --help help for cryptcheck
- --one-way Check one way only, source files must exist on destination
- rclone cryptdecode
- Cryptdecode returns unencrypted file names.
- Synopsis
- rclone cryptdecode returns unencrypted file names when provided with a
- list of encrypted file names. List limit is 10 items.
- If you supply the --reverse flag, it will return encrypted file names.
- use it like this
- rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
- rclone cryptdecode --reverse encryptedremote: filename1 filename2
- rclone cryptdecode encryptedremote: encryptedfilename [flags]
- Options
- -h, --help help for cryptdecode
- --reverse Reverse cryptdecode, encrypts filenames
- rclone dbhashsum
- Produces a Dropbox hash file for all the objects in the path.
- Synopsis
- Produces a Dropbox hash file for all the objects in the path. The hashes
- are calculated according to Dropbox content hash rules. The output is in
- the same format as md5sum and sha1sum.
- rclone dbhashsum remote:path [flags]
- Options
- -h, --help help for dbhashsum
- rclone deletefile
- Remove a single file from remote.
- Synopsis
- Remove a single file from remote. Unlike delete it cannot be used to
- remove a directory and it doesn't obey include/exclude filters - if the
- specified file exists, it will always be removed.
- rclone deletefile remote:path [flags]
- Options
- -h, --help help for deletefile
- rclone genautocomplete
- Output completion script for a given shell.
- Synopsis
- Generates a shell completion script for rclone. Run with --help to list
- the supported shells.
- Options
- -h, --help help for genautocomplete
- rclone genautocomplete bash
- Output bash completion script for rclone.
- Synopsis
- Generates a bash shell autocompletion script for rclone.
- This writes to /etc/bash_completion.d/rclone by default so will probably
- need to be run with sudo or as root, eg
- sudo rclone genautocomplete bash
- Logout and login again to use the autocompletion scripts, or source them
- directly
- . /etc/bash_completion
- If you supply a command line argument the script will be written there.
- rclone genautocomplete bash [output_file] [flags]
- Options
- -h, --help help for bash
- rclone genautocomplete zsh
- Output zsh completion script for rclone.
- Synopsis
- Generates a zsh autocompletion script for rclone.
- This writes to /usr/share/zsh/vendor-completions/_rclone by default so
- will probably need to be run with sudo or as root, eg
- sudo rclone genautocomplete zsh
- Logout and login again to use the autocompletion scripts, or source them
- directly
- autoload -U compinit && compinit
- If you supply a command line argument the script will be written there.
- rclone genautocomplete zsh [output_file] [flags]
- Options
- -h, --help help for zsh
- rclone gendocs
- Output markdown docs for rclone to the directory supplied.
- Synopsis
- This produces markdown docs for the rclone commands to the directory
- supplied. These are in a format suitable for hugo to render into the
- rclone.org website.
- rclone gendocs output_directory [flags]
- Options
- -h, --help help for gendocs
- rclone hashsum
- Produces a hashsum file for all the objects in the path.
- Synopsis
- Produces a hash file for all the objects in the path using the hash
- named. The output is in the same format as the standard md5sum/sha1sum
- tool.
- Run without a hash to see the list of supported hashes, eg
- $ rclone hashsum
- Supported hashes are:
- * MD5
- * SHA-1
- * DropboxHash
- * QuickXorHash
- Then
- $ rclone hashsum MD5 remote:path
- rclone hashsum <hash> remote:path [flags]
- Options
- -h, --help help for hashsum
- rclone link
- Generate public link to file/folder.
- Synopsis
- rclone link will create or retrieve a public link to the given file or
- folder.
- rclone link remote:path/to/file
- rclone link remote:path/to/folder/
- If successful, the last line of the output will contain the link. Exact
- capabilities depend on the remote, but the link will always be created
- with the least constraints – e.g. no expiry, no password protection,
- accessible without account.
- rclone link remote:path [flags]
- Options
- -h, --help help for link
- rclone listremotes
- List all the remotes in the config file.
- Synopsis
- rclone listremotes lists all the available remotes from the config file.
- When used with the -l flag it lists the types too.
- rclone listremotes [flags]
- Options
- -h, --help help for listremotes
- -l, --long Show the type as well as names.
- rclone lsf
- List directories and objects in remote:path formatted for parsing
- Synopsis
- List the contents of the source path (directories and objects) to
- standard output in a form which is easy to parse by scripts. By default
- this will just be the names of the objects and directories, one per
- line. The directories will have a / suffix.
- Eg
- $ rclone lsf swift:bucket
- bevajer5jef
- canole
- diwogej7
- ferejej3gux/
- fubuwic
- Use the --format option to control what gets listed. By default this is
- just the path, but you can use these parameters to control the output:
- p - path
- s - size
- t - modification time
- h - hash
- i - ID of object if known
- m - MimeType of object if known
- So if you wanted the path, size and modification time, you would use
- --format "pst", or maybe --format "tsp" to put the path last.
- Eg
- $ rclone lsf --format "tsp" swift:bucket
- 2016-06-25 18:55:41;60295;bevajer5jef
- 2016-06-25 18:55:43;90613;canole
- 2016-06-25 18:55:43;94467;diwogej7
- 2018-04-26 08:50:45;0;ferejej3gux/
- 2016-06-25 18:55:40;37600;fubuwic
- If you specify "h" in the format you will get the MD5 hash by default,
- use the "--hash" flag to change which hash you want. Note that this can
- be returned as an empty string if it isn't available on the object (and
- for directories), "ERROR" if there was an error reading it from the
- object and "UNSUPPORTED" if that object does not support that hash type.
- For example to emulate the md5sum command you can use
- rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
- Eg
- $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket
- 7908e352297f0f530b84a756f188baa3 bevajer5jef
- cd65ac234e6fea5925974a51cdd865cc canole
- 03b5341b4f234b9d984d03ad076bae91 diwogej7
- 8fd37c3810dd660778137ac3a66cc06d fubuwic
- 99713e14a4c4ff553acaf1930fad985b gixacuh7ku
- (Though "rclone md5sum ." is an easier way of typing this.)
- By default the separator is ";". This can be changed with the
- --separator flag. Note that separators aren't escaped in the path so
- putting it last is a good strategy.
- Eg
- $ rclone lsf --separator "," --format "tshp" swift:bucket
- 2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
- 2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
- 2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7
- 2018-04-26 08:52:53,0,,ferejej3gux/
- 2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic
- You can output in CSV standard format. This will escape things in " if
- they contain ,
- Eg
- $ rclone lsf --csv --files-only --format ps remote:path
- test.log,22355
- test.sh,449
- "this file contains a comma, in the file name.txt",6
- Note that the --absolute parameter is useful for making lists of files
- to pass to an rclone copy with the --files-from flag.
- For example to find all the files modified within one day and copy those
- only (without traversing the whole directory structure):
- rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
- rclone copy --files-from new_files /path/to/local remote:path
- Any of the filtering options can be applied to this command.
- There are several related list commands
- - ls to list size and path of objects only
- - lsl to list modification time, size and path of objects only
- - lsd to list directories only
- - lsf to list objects and directories in easy to parse format
- - lsjson to list objects and directories in JSON format
- ls,lsl,lsd are designed to be human readable. lsf is designed to be
- human and machine readable. lsjson is designed to be machine readable.
- Note that ls and lsl recurse by default - use "--max-depth 1" to stop
- the recursion.
- The other list commands lsd,lsf,lsjson do not recurse by default - use
- "-R" to make them recurse.
- Listing a non existent directory will produce an error except for
- remotes which can't have empty directories (eg s3, swift, gcs, etc - the
- bucket based remotes).
- rclone lsf remote:path [flags]
- Options
- --absolute Put a leading / in front of path names.
- --csv Output in CSV format.
- -d, --dir-slash Append a slash to directory names. (default true)
- --dirs-only Only list directories.
- --files-only Only list files.
- -F, --format string Output format - see help for details (default "p")
- --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "MD5")
- -h, --help help for lsf
- -R, --recursive Recurse into the listing.
- -s, --separator string Separator for the items in the format. (default ";")
- rclone lsjson
- List directories and objects in the path in JSON format.
- Synopsis
- List directories and objects in the path in JSON format.
- The output is an array of Items, where each Item looks like this
- { "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
- "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" :
- "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" },
- "ID": "y2djkhiujf83u33", "OrigID": "UYOJVTUW00Q1RzTDA", "IsDir" : false,
- "MimeType" : "application/octet-stream", "ModTime" :
- "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Encrypted"
- : "v0qpsdq8anpci8n929v3uu9338", "Path" : "full/path/goes/here/file.txt",
- "Size" : 6 }
- If --hash is not specified the Hashes property won't be emitted.
- If --no-modtime is specified then ModTime will be blank.
- If --encrypted is not specified the Encrypted property won't be emitted.
- The Path field will only show folders below the remote path being
- listed. If "remote:path" contains the file "subfolder/file.txt", the
- Path for "file.txt" will be "subfolder/file.txt", not
- "remote:path/subfolder/file.txt". When used without --recursive the Path
- will always be the same as Name.
- The time is in RFC3339 format with nanosecond precision.
- The whole output can be processed as a JSON blob, or alternatively it
- can be processed line by line as each item is written one to a line.
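- Because each item is written one to a line, the output also pipes
- well into JSON tools; a sketch using jq (jq itself is an assumption,
- not an rclone dependency):
- rclone lsjson -R remote:path | jq -r '.[] | select(.IsDir == false) | .Path'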
- Any of the filtering options can be applied to this command.
- There are several related list commands
- - ls to list size and path of objects only
- - lsl to list modification time, size and path of objects only
- - lsd to list directories only
- - lsf to list objects and directories in easy to parse format
- - lsjson to list objects and directories in JSON format
- ls,lsl,lsd are designed to be human readable. lsf is designed to be
- human and machine readable. lsjson is designed to be machine readable.
- Note that ls and lsl recurse by default - use "--max-depth 1" to stop
- the recursion.
- The other list commands lsd,lsf,lsjson do not recurse by default - use
- "-R" to make them recurse.
- Listing a non existent directory will produce an error except for
- remotes which can't have empty directories (eg s3, swift, gcs, etc - the
- bucket based remotes).
- rclone lsjson remote:path [flags]
- Options
- -M, --encrypted Show the encrypted names.
- --hash Include hashes in the output (may take longer).
- -h, --help help for lsjson
- --no-modtime Don't read the modification time (can speed things up).
- --original Show the ID of the underlying Object.
- -R, --recursive Recurse into the listing.
- rclone mount
- Mount the remote as file system on a mountpoint.
- Synopsis
- rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of
- Rclone's cloud storage systems as a file system with FUSE.
- First set up your remote using rclone config. Check it works with
- rclone ls etc.
- Start the mount like this
- rclone mount remote:path/to/files /path/to/local/mount
- Or on Windows like this where X: is an unused drive letter
- rclone mount remote:path/to/files X:
- When the program ends, either via Ctrl+C or receiving a SIGINT or
- SIGTERM signal, the mount is automatically stopped.
- The umount operation can fail, for example when the mountpoint is busy.
- When that happens, it is the user's responsibility to stop the mount
- manually with
- # Linux
- fusermount -u /path/to/local/mount
- # OS X
- umount /path/to/local/mount
- Installing on Windows
- To run rclone mount on Windows, you will need to download and install
- WinFsp.
- WinFsp is an open source Windows File System Proxy which makes it easy
- to write user space file systems for Windows. It provides a FUSE
- emulation layer which rclone uses in combination with cgofuse. Both of
- these packages are by Bill Zissimopoulos who was very helpful during the
- implementation of rclone mount for Windows.
- Windows caveats
- Note that drives created as Administrator are not visible by other
- accounts (including the account that was elevated as Administrator). So
- if you start a Windows drive from an Administrative Command Prompt and
- then try to access the same drive from Explorer (which does not run as
- Administrator), you will not be able to see the new drive.
- The easiest way around this is to start the drive from a normal command
- prompt. It is also possible to start a drive from the SYSTEM account
- (using the WinFsp.Launcher infrastructure) which creates drives
- accessible for everyone on the system or alternatively using the nssm
- service manager.
- Limitations
- Without the use of "--vfs-cache-mode" this can only write files
- sequentially, it can only seek when reading. This means that many
- applications won't work with their files on an rclone mount without
- "--vfs-cache-mode writes" or "--vfs-cache-mode full". See the File
- Caching section for more info.
- The bucket based remotes (eg Swift, S3, Google Cloud Storage, B2,
- Hubic) won't work from the root - you will need to specify a bucket, or
- a path within the bucket. So swift: won't work whereas swift:bucket will
- as will swift:bucket/path. None of these support the concept of
- directories, so empty directories will have a tendency to disappear once
- they fall out of the directory cache.
- Only supported on Linux, FreeBSD, OS X and Windows at the moment.
- rclone mount vs rclone sync/copy
- File systems expect things to be 100% reliable, whereas cloud storage
- systems are a long way from 100% reliable. The rclone sync/copy commands
- cope with this with lots of retries. However rclone mount can't use
- retries in the same way without making local copies of the uploads. Look
- at the file caching section for solutions to make mount more reliable.
- Attribute caching
- You can use the flag --attr-timeout to set the time the kernel caches
- the attributes (size, modification time etc) for directory entries.
- The default is "1s" which caches files just long enough to avoid too
- many callbacks to rclone from the kernel.
- In theory 0s should be the correct value for filesystems which can
- change outside the control of the kernel. However this causes quite a
- few problems such as rclone using too much memory, rclone not serving
- files to samba and excessive time listing directories.
- The kernel can cache the info about a file for the time given by
- "--attr-timeout". You may see corruption if the remote file changes
- length during this window. It will show up as either a truncated file or
- a file with garbage on the end. With "--attr-timeout 1s" this is very
- unlikely but not impossible. The higher you set "--attr-timeout" the
- more likely it is. The default setting of "1s" is the lowest setting
- which mitigates the problems above.
- If you set it higher ('10s' or '1m' say) then the kernel will call back
- to rclone less often making it more efficient, however there is more
- chance of the corruption issue above.
- If files don't change on the remote outside of the control of rclone
- then there is no chance of corruption.
- This is the same as setting the attr_timeout option in mount.fuse.
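- For example, to trade consistency for fewer kernel callbacks (the
- remote name and mountpoint are illustrative):
- rclone mount remote: /path/to/local/mount --attr-timeout 10s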
- Filters
- Note that all the rclone filters can be used to select a subset of the
- files to be visible in the mount.
- systemd
- When running rclone mount as a systemd service, it is possible to use
- Type=notify. In this case the service will enter the started state after
- the mountpoint has been successfully set up. Units having the rclone
- mount service specified as a requirement will see all files and folders
- immediately in this mode.
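- A minimal sketch of such a unit file (the remote name, mountpoint and
- rclone binary path are illustrative, not prescriptive):
- [Unit]
- Description=rclone mount of remote:
- After=network-online.target
- [Service]
- Type=notify
- ExecStart=/usr/bin/rclone mount remote: /mnt/remote
- ExecStop=/bin/fusermount -u /mnt/remote
- [Install]
- WantedBy=default.target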
- chunked reading
- --vfs-read-chunk-size will enable reading the source objects in parts.
- This can reduce the used download quota for some remotes by requesting
- only chunks from the remote that are actually read at the cost of an
- increased number of requests.
- When --vfs-read-chunk-size-limit is also specified and greater than
- --vfs-read-chunk-size, the chunk size for each open file will get
- doubled for each chunk read, until the specified value is reached. A
- value of -1 will disable the limit and the chunk size will grow
- indefinitely.
- With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the
- following parts will be downloaded: 0-100M, 100M-200M, 200M-300M,
- 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified,
- the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M,
- 1200M-1700M and so on.
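- For example, to start with 100M requests that double up to a 500M cap,
- matching the second sequence above (the remote name and mountpoint are
- illustrative):
- rclone mount remote: /path/to/local/mount --vfs-read-chunk-size 100M --vfs-read-chunk-size-limit 500M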
- Chunked reading will only work with --vfs-cache-mode < full, as the file
- will always be copied to the vfs cache before opening with
- --vfs-cache-mode full.
- Directory Cache
- Using the --dir-cache-time flag, you can set how long a directory should
- be considered up to date and not refreshed from the backend. Changes
- made locally in the mount may appear immediately or invalidate the
- cache. However, changes done on the remote will only be picked up once
- the cache expires.
- Alternatively, you can send a SIGHUP signal to rclone for it to flush
- all directory caches, regardless of how old they are. Assuming only one
- rclone instance is running, you can reset the cache like this:
- kill -SIGHUP $(pidof rclone)
- If you configure rclone with a remote control then you can use rclone rc
- to flush the whole directory cache:
- rclone rc vfs/forget
- Or individual files or directories:
- rclone rc vfs/forget file=path/to/file dir=path/to/dir
- File Buffering
- The --buffer-size flag determines the amount of memory that will be
- used to buffer data in advance.
- Each open file descriptor will try to keep the specified amount of data
- in memory at all times. The buffered data is bound to one file
- descriptor and won't be shared between multiple open file descriptors of
- the same file.
- This flag is an upper limit for the memory used per file descriptor. The
- buffer will only use memory for data that is downloaded but not yet
- read. If the buffer is empty, only a small amount of memory will be
- used. The maximum memory used by rclone for buffering can be up to
- --buffer-size * open files.
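- For example, with --buffer-size 16M and 10 files open at once, rclone
- could use up to 16M * 10 = 160M of memory for buffering.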
- File Caching
- These flags control the VFS file caching options. The VFS layer is used
- by rclone mount to make a cloud storage system work more like a normal
- file system.
- You'll need to enable VFS caching if you want, for example, to read and
- write simultaneously to a file. See below for more details.
- Note that the VFS cache works in addition to the cache backend and you
- may find that you need one or the other or both.
- --cache-dir string Directory rclone will use for caching.
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- If run with -vv rclone will print the location of the file cache. The
- files are stored in the user cache file area which is OS dependent but
- can be controlled with --cache-dir or setting the appropriate
- environment variable.
- The cache has 4 different modes selected by --vfs-cache-mode. The higher
- the cache mode the more compatible rclone becomes at the cost of using
- disk space.
- Note that files are written back to the remote only when they are closed
- so if rclone is quit or dies with open files then these won't get
- written back to the remote. However they will still be in the on disk
- cache.
- --vfs-cache-mode off
- In this mode the cache will read directly from the remote and write
- directly to the remote without caching anything on disk.
- This will mean some operations are not possible
- - Files can't be opened for both read AND write
- - Files opened for write can't be seeked
- - Existing files opened for write must have O_TRUNC set
- - Files open for read with O_TRUNC will be opened write only
- - Files open for write only will behave as if O_TRUNC was supplied
- - Open modes O_APPEND, O_TRUNC are ignored
- - If an upload fails it can't be retried
- --vfs-cache-mode minimal
- This is very similar to "off" except that files opened for read AND
- write will be buffered to disk. This means that files opened for write
- will be a lot more compatible, but uses minimal disk space.
- These operations are not possible
- - Files opened for write only can't be seeked
- - Existing files opened for write must have O_TRUNC set
- - Files opened for write only will ignore O_APPEND, O_TRUNC
- - If an upload fails it can't be retried
- --vfs-cache-mode writes
- In this mode files opened for read only are still read directly from the
- remote, write only and read/write files are buffered to disk first.
- This mode should support all normal file system operations.
- If an upload fails it will be retried up to --low-level-retries times.
- --vfs-cache-mode full
- In this mode all reads and writes are buffered to and from disk. When a
- file is opened for read it will be downloaded in its entirety first.
- This may be appropriate for your needs, or you may prefer to look at the
- cache backend which does a much more sophisticated job of caching,
- including caching directory hierarchies and chunks of files.
- In this mode, unlike the others, when a file is written to the disk, it
- will be kept on the disk after it is written to the remote. It will be
- purged on a schedule according to --vfs-cache-max-age.
- This mode should support all normal file system operations.
- If an upload or download fails it will be retried up to
- --low-level-retries times.
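- For example, to mount with the mode that most applications need (the
- remote name and mountpoint are illustrative):
- rclone mount remote:path /path/to/local/mount --vfs-cache-mode writes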
- rclone mount remote:path /path/to/mountpoint [flags]
- Options
- --allow-non-empty Allow mounting over a non-empty directory.
- --allow-other Allow access to other users.
- --allow-root Allow access to root user.
- --attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
- --daemon Run mount as a daemon (background mode).
- --daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes).
- --debug-fuse Debug the FUSE internals - needs -v.
- --default-permissions Makes kernel enforce access control based on the file mode.
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
- --gid uint32 Override the gid field set by the filesystem. (default 502)
- -h, --help help for mount
- --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
- --uid uint32 Override the uid field set by the filesystem. (default 502)
- --umask int Override the permission bits set by the filesystem.
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- --vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
- --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- --volname string Set the volume name (not supported by all OSes).
- --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
- rclone moveto
- Move file or directory from source to dest.
- Synopsis
- If source:path is a file or directory then it moves it to a file or
- directory named dest:path.
- This can be used to rename files or upload single files to other than
- their existing name. If the source is a directory then it acts exactly
- like the move command.
- So
- rclone moveto src dst
- where src and dst are rclone paths, either remote:path or /path/to/local
- or C:\windows\path\if\on\windows.
- This will:
- if src is file
- move it to dst, overwriting an existing file if it exists
- if src is directory
- move it to dst, overwriting existing files if they exist
- see move command for full details
- This doesn't transfer unchanged files, testing by size and modification
- time or MD5SUM. src will be deleted on successful transfer.
- IMPORTANT: Since this can cause data loss, test first with the --dry-run
- flag.
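- For example, to rename a single file (the paths are illustrative),
- checking the result with --dry-run first:
- rclone moveto --dry-run remote:path/old-name.txt remote:path/new-name.txt
- rclone moveto remote:path/old-name.txt remote:path/new-name.txt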
- rclone moveto source:path dest:path [flags]
- Options
- -h, --help help for moveto
- rclone ncdu
- Explore a remote with a text based user interface.
- Synopsis
- This displays a text based user interface allowing the navigation of a
- remote. It is most useful for answering the question - "What is using
- all my disk space?".
- To make the user interface it first scans the entire remote given and
- builds an in memory representation. rclone ncdu can be used during this
- scanning phase and you will see it building up the directory structure
- as it goes along.
- Here are the keys - press '?' to toggle the help on and off
- ↑,↓ or k,j to Move
- →,l to enter
- ←,h to return
- c toggle counts
- g toggle graph
- n,s,C sort by name,size,count
- ^L refresh screen
- ? to toggle help on and off
- q/ESC/c-C to quit
- This is an homage to the ncdu tool but for rclone remotes. It is missing
- lots of features at the moment, most importantly deleting files, but is
- useful as it stands.
- rclone ncdu remote:path [flags]
- Options
- -h, --help help for ncdu
- rclone obscure
- Obscure password for use in the rclone.conf
- Synopsis
- Obscure password for use in the rclone.conf
- rclone obscure password [flags]
- Options
- -h, --help help for obscure
- rclone rc
- Run a command against a running rclone.
- Synopsis
- This runs a command against a running rclone. By default it will use
- the address specified by the --rc-addr flag.
- Arguments should be passed in as parameter=value.
- The result will be returned as a JSON object by default.
- Use "rclone rc" to see a list of all possible commands.
- rclone rc commands parameter [flags]
- Options
- -h, --help help for rc
- --no-output If set don't output the JSON result.
- --url string URL to connect to rclone remote control. (default "http://localhost:5572/")
- rclone rcat
- Copies standard input to file on remote.
- Synopsis
- rclone rcat reads from standard input (stdin) and copies it to a single
- remote file.
- echo "hello world" | rclone rcat remote:path/to/file
- ffmpeg - | rclone rcat remote:path/to/file
- If the remote file already exists, it will be overwritten.
- rcat will try to upload small files in a single request, which is
- usually more efficient than the streaming/chunked upload endpoints,
- which use multiple requests. Exact behaviour depends on the remote. What
- is considered a small file may be set through --streaming-upload-cutoff.
- Uploading only starts after the cutoff is reached or if the file ends
- before that. The data must fit into RAM. The cutoff needs to be small
- enough to adhere to the limits of your remote (see its documentation). Generally
- speaking, setting this cutoff too high will decrease your performance.
- Note that the upload cannot be retried either, because the data is not
- kept around until the upload succeeds. If you need to transfer a lot of
- data, you're better off caching it locally and then using rclone move to
- send it to the destination.
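- For example (the generating command and paths are illustrative), cache
- the data locally first and then move it:
- some_command > /tmp/output
- rclone move /tmp/output remote:path/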
- rclone rcat remote:path [flags]
- Options
- -h, --help help for rcat
- rclone rmdirs
- Remove empty directories under the path.
- Synopsis
- This removes any empty directories (or directories that only contain
- empty directories) under the path that it finds, including the path if
- it has nothing in it.
- If you supply the --leave-root flag, it will not remove the root
- directory.
- This is useful for tidying up remotes that rclone has left a lot of
- empty directories in.
- rclone rmdirs remote:path [flags]
- Options
- -h, --help help for rmdirs
- --leave-root Do not remove root directory if empty
- rclone serve
- Serve a remote over a protocol.
- Synopsis
- rclone serve is used to serve a remote over a given protocol. This
- command requires the use of a subcommand to specify the protocol, eg
- rclone serve http remote:
- Each subcommand has its own options which you can see in their help.
- rclone serve <protocol> [opts] <remote> [flags]
- Options
- -h, --help help for serve
- rclone serve ftp
- Serve remote:path over FTP.
- Synopsis
- rclone serve ftp implements a basic FTP server to serve the remote over
- the FTP protocol. This can be viewed with an FTP client or you can make a
- remote of type ftp to read and write it.
- Server options
- Use --addr to specify which IP address and port the server should listen
- on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
- default it only listens on localhost. You can use port :0 to let the OS
- choose an available port.
- If you set --addr to listen on a public or LAN accessible IP address
- then using Authentication is advised - see the next section for info.
- Authentication
- By default this will serve files without needing a login.
- You can set a single username and password with the --user and --pass
- flags.
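- For example, to require a login (the username and password here are
- illustrative):
- rclone serve ftp remote:path --user myuser --pass mypassword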
- Directory Cache
- Using the --dir-cache-time flag, you can set how long a directory should
- be considered up to date and not refreshed from the backend. Changes
- made locally in the mount may appear immediately or invalidate the
- cache. However, changes done on the remote will only be picked up once
- the cache expires.
- Alternatively, you can send a SIGHUP signal to rclone for it to flush
- all directory caches, regardless of how old they are. Assuming only one
- rclone instance is running, you can reset the cache like this:
- kill -SIGHUP $(pidof rclone)
- If you configure rclone with a remote control then you can use rclone rc
- to flush the whole directory cache:
- rclone rc vfs/forget
- Or individual files or directories:
- rclone rc vfs/forget file=path/to/file dir=path/to/dir
- File Buffering
- The --buffer-size flag determines the amount of memory that will be
- used to buffer data in advance.
- Each open file descriptor will try to keep the specified amount of data
- in memory at all times. The buffered data is bound to one file
- descriptor and won't be shared between multiple open file descriptors of
- the same file.
- This flag is an upper limit for the memory used per file descriptor. The
- buffer will only use memory for data that is downloaded but not yet
- read. If the buffer is empty, only a small amount of memory will be
- used. The maximum memory used by rclone for buffering can be up to
- --buffer-size * open files.
- File Caching
- These flags control the VFS file caching options. The VFS layer is used
- by rclone mount to make a cloud storage system work more like a normal
- file system.
- You'll need to enable VFS caching if you want, for example, to read and
- write simultaneously to a file. See below for more details.
- Note that the VFS cache works in addition to the cache backend and you
- may find that you need one or the other or both.
- --cache-dir string Directory rclone will use for caching.
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- If run with -vv rclone will print the location of the file cache. The
- files are stored in the user cache file area which is OS dependent but
- can be controlled with --cache-dir or setting the appropriate
- environment variable.
- The cache has 4 different modes selected by --vfs-cache-mode. The higher
- the cache mode the more compatible rclone becomes at the cost of using
- disk space.
- Note that files are written back to the remote only when they are closed
- so if rclone is quit or dies with open files then these won't get
- written back to the remote. However they will still be in the on disk
- cache.
- --vfs-cache-mode off
- In this mode the cache will read directly from the remote and write
- directly to the remote without caching anything on disk.
- This will mean some operations are not possible
- - Files can't be opened for both read AND write
- - Files opened for write can't be seeked
- - Existing files opened for write must have O_TRUNC set
- - Files open for read with O_TRUNC will be opened write only
- - Files open for write only will behave as if O_TRUNC was supplied
- - Open modes O_APPEND, O_TRUNC are ignored
- - If an upload fails it can't be retried
- --vfs-cache-mode minimal
- This is very similar to "off" except that files opened for read AND
- write will be buffered to disk. This means that files opened for write
- will be a lot more compatible, but uses minimal disk space.
- These operations are not possible
- - Files opened for write only can't be seeked
- - Existing files opened for write must have O_TRUNC set
- - Files opened for write only will ignore O_APPEND, O_TRUNC
- - If an upload fails it can't be retried
- --vfs-cache-mode writes
- In this mode files opened for read only are still read directly from the
- remote, write only and read/write files are buffered to disk first.
- This mode should support all normal file system operations.
- If an upload fails it will be retried up to --low-level-retries times.
- --vfs-cache-mode full
- In this mode all reads and writes are buffered to and from disk. When a
- file is opened for read it will be downloaded in its entirety first.
- This may be appropriate for your needs, or you may prefer to look at the
- cache backend which does a much more sophisticated job of caching,
- including caching directory hierarchies and chunks of files.
- In this mode, unlike the others, when a file is written to the disk, it
- will be kept on the disk after it is written to the remote. It will be
- purged on a schedule according to --vfs-cache-max-age.
- This mode should support all normal file system operations.
- If an upload or download fails it will be retried up to
- --low-level-retries times.
- rclone serve ftp remote:path [flags]
- Options
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --gid uint32 Override the gid field set by the filesystem. (default 502)
- -h, --help help for ftp
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- --pass string Password for authentication. (empty value allows every password)
- --passive-port string Passive port range to use. (default "30000-32000")
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
- --uid uint32 Override the uid field set by the filesystem. (default 502)
- --umask int Override the permission bits set by the filesystem. (default 2)
- --user string User name for authentication. (default "anonymous")
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- --vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
- --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- rclone serve http
- Serve the remote over HTTP.
- Synopsis
- rclone serve http implements a basic web server to serve the remote over
- HTTP. This can be viewed in a web browser or you can make a remote of
- type http read from it.
- You can use the filter flags (eg --include, --exclude) to control what
- is served.
- The server will log errors. Use -v to see access logs.
- --bwlimit will be respected for file transfers. Use --stats to control
- the stats printing.
- Server options
- Use --addr to specify which IP address and port the server should listen
- on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
- default it only listens on localhost. You can use port :0 to let the OS
- choose an available port.
- If you set --addr to listen on a public or LAN accessible IP address
- then using Authentication is advised - see the next section for info.
- --server-read-timeout and --server-write-timeout can be used to control
- the timeouts on the server. Note that this is the total time for a
- transfer.
- --max-header-bytes controls the maximum number of bytes the server will
- accept in the HTTP header.
- Authentication
- By default this will serve files without needing a login.
- You can either use an htpasswd file which can take lots of users, or set
- a single username and password with the --user and --pass flags.
- Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in
- standard apache format and supports MD5, SHA1 and BCrypt for basic
- authentication. Bcrypt is recommended.
- To create an htpasswd file:
- touch htpasswd
- htpasswd -B htpasswd user
- htpasswd -B htpasswd anotherUser
- The password file can be updated while rclone is running.
- Use --realm to set the authentication realm.
- SSL/TLS
- By default this will serve over http. If you want you can serve over
- https. You will need to supply the --cert and --key flags. If you wish
- to do client side certificate validation then you will need to supply
- --client-ca also.
- --cert should be either a PEM encoded certificate or a concatenation
- of that with the CA certificate. --key should be the PEM encoded private
- key and --client-ca should be the PEM encoded client certificate
- authority certificate.
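- For example, to serve over https (the certificate and key file names
- are illustrative):
- rclone serve http remote:path --cert server.crt --key server.key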
- Directory Cache
- Using the --dir-cache-time flag, you can set how long a directory should
- be considered up to date and not refreshed from the backend. Changes
- made locally in the mount may appear immediately or invalidate the
- cache. However, changes done on the remote will only be picked up once
- the cache expires.
- Alternatively, you can send a SIGHUP signal to rclone for it to flush
- all directory caches, regardless of how old they are. Assuming only one
- rclone instance is running, you can reset the cache like this:
- kill -SIGHUP $(pidof rclone)
- If you configure rclone with a remote control then you can use rclone rc
- to flush the whole directory cache:
- rclone rc vfs/forget
- Or individual files or directories:
- rclone rc vfs/forget file=path/to/file dir=path/to/dir
- File Buffering
- The --buffer-size flag determines the amount of memory that will be
- used to buffer data in advance.
- Each open file descriptor will try to keep the specified amount of data
- in memory at all times. The buffered data is bound to one file
- descriptor and won't be shared between multiple open file descriptors of
- the same file.
- This flag is an upper limit for the memory used per file descriptor. The
- buffer will only use memory for data that is downloaded but not yet
- read. If the buffer is empty, only a small amount of memory will be
- used. The maximum memory used by rclone for buffering can be up to
- --buffer-size * open files.
- File Caching
- These flags control the VFS file caching options. The VFS layer is used
- by rclone mount to make a cloud storage system work more like a normal
- file system.
- You'll need to enable VFS caching if you want, for example, to read and
- write simultaneously to a file. See below for more details.
- Note that the VFS cache works in addition to the cache backend and you
- may find that you need one or the other or both.
- --cache-dir string Directory rclone will use for caching.
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- If run with -vv rclone will print the location of the file cache. The
- files are stored in the user cache file area which is OS dependent but
- can be controlled with --cache-dir or setting the appropriate
- environment variable.
- The cache has 4 different modes selected by --vfs-cache-mode. The higher
- the cache mode the more compatible rclone becomes at the cost of using
- disk space.
- Note that files are written back to the remote only when they are closed
- so if rclone is quit or dies with open files then these won't get
- written back to the remote. However they will still be in the on disk
- cache.
- --vfs-cache-mode off
- In this mode the cache will read directly from the remote and write
- directly to the remote without caching anything on disk.
- This will mean some operations are not possible
- - Files can't be opened for both read AND write
- - Files opened for write can't be seeked
- - Existing files opened for write must have O_TRUNC set
- - Files open for read with O_TRUNC will be opened write only
- - Files open for write only will behave as if O_TRUNC was supplied
- - Open modes O_APPEND, O_TRUNC are ignored
- - If an upload fails it can't be retried
- --vfs-cache-mode minimal
- This is very similar to "off" except that files opened for read AND
- write will be buffered to disk. This means that files opened for write
- will be a lot more compatible, but uses minimal disk space.
- These operations are not possible
- - Files opened for write only can't be seeked
- - Existing files opened for write must have O_TRUNC set
- - Files opened for write only will ignore O_APPEND, O_TRUNC
- - If an upload fails it can't be retried
- --vfs-cache-mode writes
- In this mode files opened for read only are still read directly from the
- remote, write only and read/write files are buffered to disk first.
- This mode should support all normal file system operations.
- If an upload fails it will be retried up to --low-level-retries times.
- --vfs-cache-mode full
- In this mode all reads and writes are buffered to and from disk. When a
- file is opened for read it will be downloaded in its entirety first.
- This may be appropriate for your needs, or you may prefer to look at the
- cache backend which does a much more sophisticated job of caching,
- including caching directory hierarchies and chunks of files.
- In this mode, unlike the others, when a file is written to the disk, it
- will be kept on the disk after it is written to the remote. It will be
- purged on a schedule according to --vfs-cache-max-age.
- This mode should support all normal file system operations.
- If an upload or download fails it will be retried up to
- --low-level-retries times.
- rclone serve http remote:path [flags]
- Options
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
- --cert string SSL PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --gid uint32 Override the gid field set by the filesystem. (default 502)
- -h, --help help for http
- --htpasswd string htpasswd file - if not provided no authentication is done
- --key string SSL PEM Private key
- --max-header-bytes int Maximum size of request header (default 4096)
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- --pass string Password for authentication.
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
- --realm string realm for authentication (default "rclone")
- --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --uid uint32 Override the uid field set by the filesystem. (default 502)
- --umask int Override the permission bits set by the filesystem. (default 2)
- --user string User name for authentication.
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- --vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
- --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- rclone serve restic
- Serve the remote for restic's REST API.
- Synopsis
- rclone serve restic implements restic's REST backend API over HTTP. This
- allows restic to use rclone as a data storage mechanism for cloud
- providers that restic does not support directly.
- Restic is a command line program for doing backups.
- The server will log errors. Use -v to see access logs.
- --bwlimit will be respected for file transfers. Use --stats to control
- the stats printing.
- Setting up rclone for use by restic
- First set up a remote for your chosen cloud provider.
- Once you have set up the remote, check it is working with, for example
- "rclone lsd remote:". You may have called the remote something other
- than "remote:" - just substitute whatever you called it in the following
- instructions.
- Now start the rclone restic server
- rclone serve restic -v remote:backup
- Where you can replace "backup" in the above by whatever path in the
- remote you wish to use.
- By default this will serve on "localhost:8080"; you can change this
- with the "--addr" flag.
- You might wish to start this server on boot.
- Setting up restic to use rclone
- Now you can follow the restic instructions on setting up restic.
- Note that you will need restic 0.8.2 or later to interoperate with
- rclone.
- For the example above you will want to use "http://localhost:8080/" as
- the URL for the REST server.
- For example:
- $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
- $ export RESTIC_PASSWORD=yourpassword
- $ restic init
- created restic backend 8b1a4b56ae at rest:http://localhost:8080/
- Please note that knowledge of your password is required to access
- the repository. Losing your password means that your data is
- irrecoverably lost.
- $ restic backup /path/to/files/to/backup
- scan [/path/to/files/to/backup]
- scanned 189 directories, 312 files in 0:00
- [0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
- duration: 0:00
- snapshot 45c8fdd8 saved
- Multiple repositories
- Note that you can use the endpoint to host multiple repositories. Do
- this by adding a directory name or path after the URL. Note that these
- MUST end with /. Eg
- $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
- # backup user1 stuff
- $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
- # backup user2 stuff
- Server options
- Use --addr to specify which IP address and port the server should listen
- on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
- default it only listens on localhost. You can use port :0 to let the OS
- choose an available port.
- If you set --addr to listen on a public or LAN accessible IP address
- then using Authentication is advised - see the next section for info.
- --server-read-timeout and --server-write-timeout can be used to control
- the timeouts on the server. Note that this is the total time for a
- transfer.
- --max-header-bytes controls the maximum number of bytes the server will
- accept in the HTTP header.
- Authentication
- By default this will serve files without needing a login.
- You can either use an htpasswd file which can take lots of users, or set
- a single username and password with the --user and --pass flags.
- Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in
- standard apache format and supports MD5, SHA1 and BCrypt for basic
- authentication. Bcrypt is recommended.
- To create an htpasswd file:
- touch htpasswd
- htpasswd -B htpasswd user
- htpasswd -B htpasswd anotherUser
- The password file can be updated while rclone is running.
- Use --realm to set the authentication realm.
- SSL/TLS
- By default this will serve over http. If you want you can serve over
- https. You will need to supply the --cert and --key flags. If you wish
- to do client side certificate validation then you will need to supply
- --client-ca also.
- --cert should be either a PEM encoded certificate or a concatenation
- of that with the CA certificate. --key should be the PEM encoded private
- key and --client-ca should be the PEM encoded client certificate
- authority certificate.
- rclone serve restic remote:path [flags]
- Options
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
- --append-only disallow deletion of repository data
- --cert string SSL PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
- -h, --help help for restic
- --htpasswd string htpasswd file - if not provided no authentication is done
- --key string SSL PEM Private key
- --max-header-bytes int Maximum size of request header (default 4096)
- --pass string Password for authentication.
- --realm string realm for authentication (default "rclone")
- --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --stdio run an HTTP2 server on stdin/stdout
- --user string User name for authentication.
- rclone serve webdav
- Serve remote:path over webdav.
- Synopsis
- rclone serve webdav implements a basic webdav server to serve the remote
- over HTTP via the webdav protocol. This can be viewed with a webdav
- client or you can make a remote of type webdav to read and write it.
- Webdav options
- --etag-hash
- This controls the ETag header. Without this flag the ETag will be based
- on the ModTime and Size of the object.
- If this flag is set to "auto" then rclone will choose the first
- supported hash on the backend or you can use a named hash such as "MD5"
- or "SHA-1".
- Use "rclone hashsum" to see the full list.
- Server options
- Use --addr to specify which IP address and port the server should listen
- on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
- default it only listens on localhost. You can use port :0 to let the OS
- choose an available port.
- If you set --addr to listen on a public or LAN accessible IP address
- then using Authentication is advised - see the next section for info.
- --server-read-timeout and --server-write-timeout can be used to control
- the timeouts on the server. Note that this is the total time for a
- transfer.
- --max-header-bytes controls the maximum number of bytes the server will
- accept in the HTTP header.
- Authentication
- By default this will serve files without needing a login.
- You can either use an htpasswd file which can take lots of users, or set
- a single username and password with the --user and --pass flags.
- Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in
- standard apache format and supports MD5, SHA1 and BCrypt for basic
- authentication. Bcrypt is recommended.
- To create an htpasswd file:
- touch htpasswd
- htpasswd -B htpasswd user
- htpasswd -B htpasswd anotherUser
- The password file can be updated while rclone is running.
- Use --realm to set the authentication realm.
- SSL/TLS
- By default this will serve over http. If you want you can serve over
- https. You will need to supply the --cert and --key flags. If you wish
- to do client side certificate validation then you will need to supply
- --client-ca also.
- --cert should be either a PEM encoded certificate or a concatenation
- of that with the CA certificate. --key should be the PEM encoded private
- key and --client-ca should be the PEM encoded client certificate
- authority certificate.
- Directory Cache
- Using the --dir-cache-time flag, you can set how long a directory should
- be considered up to date and not refreshed from the backend. Changes
- made locally in the mount may appear immediately or invalidate the
- cache. However, changes done on the remote will only be picked up once
- the cache expires.
- Alternatively, you can send a SIGHUP signal to rclone for it to flush
- all directory caches, regardless of how old they are. Assuming only one
- rclone instance is running, you can reset the cache like this:
- kill -SIGHUP $(pidof rclone)
- If you configure rclone with a remote control then you can use rclone rc
- to flush the whole directory cache:
- rclone rc vfs/forget
- Or individual files or directories:
- rclone rc vfs/forget file=path/to/file dir=path/to/dir
- File Buffering
- The --buffer-size flag determines the amount of memory that will be
- used to buffer data in advance.
- Each open file descriptor will try to keep the specified amount of data
- in memory at all times. The buffered data is bound to one file
- descriptor and won't be shared between multiple open file descriptors of
- the same file.
- This flag is an upper limit for the memory used per file descriptor. The
- buffer will only use memory for data that is downloaded but not yet
- read. If the buffer is empty, only a small amount of memory will be
- used. The maximum memory used by rclone for buffering can be up to
- --buffer-size * open files.
- File Caching
- These flags control the VFS file caching options. The VFS layer is used
- by rclone mount to make a cloud storage system work more like a normal
- file system.
- You'll need to enable VFS caching if you want, for example, to read and
- write simultaneously to a file. See below for more details.
- Note that the VFS cache works in addition to the cache backend and you
- may find that you need one or the other or both.
- --cache-dir string Directory rclone will use for caching.
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- If run with -vv rclone will print the location of the file cache. The
- files are stored in the user cache file area which is OS dependent but
- can be controlled with --cache-dir or setting the appropriate
- environment variable.
- The cache has 4 different modes selected by --vfs-cache-mode. The higher
- the cache mode the more compatible rclone becomes at the cost of using
- disk space.
- Note that files are written back to the remote only when they are closed
- so if rclone is quit or dies with open files then these won't get
- written back to the remote. However they will still be in the on disk
- cache.
- --vfs-cache-mode off
- In this mode the cache will read directly from the remote and write
- directly to the remote without caching anything on disk.
- This will mean some operations are not possible
- - Files can't be opened for both read AND write
- - Files opened for write can't be seeked
- - Existing files opened for write must have O_TRUNC set
- - Files open for read with O_TRUNC will be opened write only
- - Files open for write only will behave as if O_TRUNC was supplied
- - Open modes O_APPEND, O_TRUNC are ignored
- - If an upload fails it can't be retried
- --vfs-cache-mode minimal
- This is very similar to "off" except that files opened for read AND
- write will be buffered to disk. This means that files opened for write
- will be a lot more compatible, but uses minimal disk space.
- These operations are not possible
- - Files opened for write only can't be seeked
- - Existing files opened for write must have O_TRUNC set
- - Files opened for write only will ignore O_APPEND, O_TRUNC
- - If an upload fails it can't be retried
- --vfs-cache-mode writes
- In this mode files opened for read only are still read directly from the
- remote, write only and read/write files are buffered to disk first.
- This mode should support all normal file system operations.
- If an upload fails it will be retried up to --low-level-retries times.
- --vfs-cache-mode full
- In this mode all reads and writes are buffered to and from disk. When a
- file is opened for read it will be downloaded in its entirety first.
- This may be appropriate for your needs, or you may prefer to look at the
- cache backend which does a much more sophisticated job of caching,
- including caching directory hierarchies and chunks of files.
- In this mode, unlike the others, when a file is written to the disk, it
- will be kept on the disk after it is written to the remote. It will be
- purged on a schedule according to --vfs-cache-max-age.
- This mode should support all normal file system operations.
- If an upload or download fails it will be retried up to
- --low-level-retries times.
- rclone serve webdav remote:path [flags]
- Options
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
- --cert string SSL PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --etag-hash string Which hash to use for the ETag, or auto or blank for off
- --gid uint32 Override the gid field set by the filesystem. (default 502)
- -h, --help help for webdav
- --htpasswd string htpasswd file - if not provided no authentication is done
- --key string SSL PEM Private key
- --max-header-bytes int Maximum size of request header (default 4096)
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- --pass string Password for authentication.
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
- --realm string realm for authentication (default "rclone")
- --server-read-timeout duration Timeout for server reading data (default 1h0m0s)
- --server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --uid uint32 Override the uid field set by the filesystem. (default 502)
- --umask int Override the permission bits set by the filesystem. (default 2)
- --user string User name for authentication.
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- --vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
- --vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- rclone settier
- Changes storage class/tier of objects in remote.
- Synopsis
- rclone settier changes the storage tier or class of objects on the
- remote, if supported. A few cloud storage services provide different
- storage classes for objects, for example AWS S3 and Glacier, Azure Blob
- storage (Hot, Cool and Archive) and Google Cloud Storage (Regional
- Storage, Nearline, Coldline etc).
- Note that certain tier changes make objects unavailable for immediate
- access. For example, tiering to archive in Azure Blob storage puts
- objects into a frozen state; the user can restore them by setting the
- tier to Hot/Cool. Similarly, moving S3 objects to Glacier makes them
- inaccessible.
- You can use it to tier a single object
- rclone settier Cool remote:path/file
- Or use rclone filters to set tier on only specific files
- rclone --include "*.txt" settier Hot remote:path/dir
- Or just provide a remote directory and all files in the directory will be
- tiered
- rclone settier tier remote:path/dir
- rclone settier tier remote:path [flags]
- Options
- -h, --help help for settier
- rclone touch
- Create new file or change file modification time.
- Synopsis
- Create new file or change file modification time.
- rclone touch remote:path [flags]
- Options
- -h, --help help for touch
- -C, --no-create Do not create the file if it does not exist.
- -t, --timestamp string Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 17.10.30) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05)
- rclone tree
- List the contents of the remote in a tree like fashion.
- Synopsis
- rclone tree lists the contents of a remote in a similar way to the unix
- tree command.
- For example
- $ rclone tree remote:path
- /
- ├── file1
- ├── file2
- ├── file3
- └── subdir
-     ├── file4
-     └── file5
- 1 directories, 5 files
- You can use any of the filtering options with the tree command (eg
- --include and --exclude). You can also use --fast-list.
- rclone tree has many options for controlling the listing which are
- compatible with the unix tree command. Note that not all of them have short
- options as they conflict with rclone's short options.
- rclone tree remote:path [flags]
- Options
- -a, --all All files are listed (list . files too).
- -C, --color Turn colorization on always.
- -d, --dirs-only List directories only.
- --dirsfirst List directories before files (-U disables).
- --full-path Print the full path prefix for each file.
- -h, --help help for tree
- --human Print the size in a more human readable way.
- --level int Descend only level directories deep.
- -D, --modtime Print the date of last modification.
- -i, --noindent Don't print indentation lines.
- --noreport Turn off file/directory count at end of tree listing.
- -o, --output string Output to file instead of stdout.
- -p, --protections Print the protections for each file.
- -Q, --quote Quote filenames with double quotes.
- -s, --size Print the size in bytes of each file.
- --sort string Select sort: name,version,size,mtime,ctime.
- --sort-ctime Sort files by last status change time.
- -t, --sort-modtime Sort files by last modification time.
- -r, --sort-reverse Reverse the order of the sort.
- -U, --unsorted Leave files unsorted.
- --version Sort files alphanumerically by version.
- Copying single files
- rclone normally syncs or copies directories. However, if the source
- remote points to a file, rclone will just copy that file. The
- destination remote must point to a directory - rclone will give the
- error
- Failed to create file system for "remote:file": is a file not a directory
- if it isn't.
- For example, suppose you have a remote with a file in called test.jpg,
- then you could copy just that file like this
- rclone copy remote:test.jpg /tmp/download
- The file test.jpg will be placed inside /tmp/download.
- This is equivalent to specifying
- rclone copy --files-from /tmp/files remote: /tmp/download
- Where /tmp/files contains the single line
- test.jpg
- It is recommended to use copy when copying individual files, not sync.
- They have pretty much the same effect but copy will use a lot less
- memory.
- Syntax of remote paths
- The syntax of the paths passed to the rclone command is as follows.
- /path/to/dir
- This refers to the local file system.
- On Windows \ may be used instead of / in local paths ONLY; non-local
- paths must use /.
- These paths needn't start with a leading / - if they don't then they
- will be relative to the current directory.
- remote:path/to/dir
- This refers to a directory path/to/dir on remote: as defined in the
- config file (configured with rclone config).
- remote:/path/to/dir
- On most backends this refers to the same directory as
- remote:path/to/dir and that format should be preferred. On a very small
- number of remotes (FTP, SFTP, Dropbox for business) this will refer to a
- different directory. On these, paths without a leading / will refer to
- your "home" directory and paths with a leading / will refer to the root.
- :backend:path/to/dir
- This is an advanced form for creating remotes on the fly. backend should
- be the name or prefix of a backend (the type in the config file) and all
- the configuration for the backend should be provided on the command line
- (or in environment variables).
- Eg
- rclone lsd --http-url https://pub.rclone.org :http:
- Which lists all the directories in pub.rclone.org.
- Quoting and the shell
- When you are typing commands to your computer you are using something
- called the command line shell. This interprets various characters in an
- OS specific way.
- Here are some gotchas which may help users unfamiliar with the shell
- rules
- Linux / OSX
- If your names have spaces or shell metacharacters (eg *, ?, $, ', " etc)
- then you must quote them. Use single quotes ' by default.
- rclone copy 'Important files?' remote:backup
- If you want to send a ' you will need to use ", eg
- rclone copy "O'Reilly Reviews" remote:backup
- The rules for quoting metacharacters are complicated and if you want the
- full details you'll have to consult the manual page for your shell.
- Windows
- If your names have spaces in you need to put them in ", eg
- rclone copy "E:\folder name\folder name\folder name" remote:backup
- If you are using the root directory on its own then don't quote it (see
- #464 for why), eg
- rclone copy E:\ remote:backup
- Copying files or directories with : in the names
- rclone uses : to mark a remote name. This is, however, a valid filename
- component in non-Windows OSes. The remote name parser will only search
- for a : up to the first / so if you need to act on a file or directory
- like this then use the full path starting with a /, or use ./ as a
- current directory prefix.
- So to sync a directory called sync:me to a remote called remote: use
- rclone sync ./sync:me remote:path
- or
- rclone sync /full/path/to/sync:me remote:path
- Server Side Copy
- Most remotes (but not all - see the overview) support server side copy.
- This means if you want to copy one folder to another then rclone won't
- download all the files and re-upload them; it will instruct the server
- to copy them in place.
- Eg
- rclone copy s3:oldbucket s3:newbucket
- Will copy the contents of oldbucket to newbucket without downloading and
- re-uploading.
- Remotes which don't support server side copy WILL download and re-upload
- in this case.
- Server side copies are used with sync and copy and will be identified in
- the log when using the -v flag. The move command may also use them if
- the remote doesn't support server side move directly. This is done by
- issuing a server side copy then a delete which is much quicker than a
- download and re-upload.
- Server side copies will only be attempted if the remote names are the
- same.
- This can be used when scripting to make aged backups efficiently, eg
- rclone sync remote:current-backup remote:previous-backup
- rclone sync /path/to/files remote:current-backup
- Options
- Rclone has a number of options to control its behaviour.
- Options which use TIME use the go time parser. A duration string is a
- possibly signed sequence of decimal numbers, each with optional fraction
- and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units
- are "ns", "us" (or "µs"), "ms", "s", "m", "h".
- Options which use SIZE use kByte by default. However, a suffix of b for
- bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P for
- PBytes may be used. These are the binary units, eg 1, 2**10, 2**20,
- 2**30, 2**40 and 2**50 respectively.
- --backup-dir=DIR
- When using sync, copy or move any files which would have been
- overwritten or deleted are moved in their original hierarchy into this
- directory.
- If --suffix is set, then the moved files will have the suffix added to
- them. If there is a file with the same path (after the suffix has been
- added) in DIR, then it will be overwritten.
- The remote in use must support server side move or copy and you must use
- the same remote as the destination of the sync. The backup directory
- must not overlap the destination directory.
- For example
- rclone sync /path/to/local remote:current --backup-dir remote:old
- will sync /path/to/local to remote:current, but for any files which
- would have been updated or deleted will be stored in remote:old.
- If running rclone from a script you might want to use today's date as
- the directory name passed to --backup-dir to store the old files, or you
- might want to pass --suffix with today's date.
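- For example, a script could pass a dated directory (the date command
- and format are illustrative) so each run keeps its old files separate:
- rclone sync /path/to/local remote:current --backup-dir remote:old/$(date +%Y-%m-%d)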
- --bind string
- Local address to bind to for outgoing connections. This can be an IPv4
- address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the
- host name doesn't resolve or resolves to more than one IP address it
- will give an error.
- --bwlimit=BANDWIDTH_SPEC
- This option controls the bandwidth limit. Limits can be specified in two
- ways: As a single limit, or as a timetable.
- Single limits last for the duration of the session. To use a single
- limit, specify the desired bandwidth in kBytes/s, or use a suffix
- b|k|M|G. The default is 0 which means to not limit bandwidth.
- For example, to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
- It is also possible to specify a "timetable" of limits, which will cause
- certain limits to be applied at certain times. To specify a timetable,
- format your entries as "WEEKDAY-HH:MM,BANDWIDTH
- WEEKDAY-HH:MM,BANDWIDTH..." where WEEKDAY is an optional element. It can
- be written as the whole word or abbreviated to its first 3 characters.
- HH:MM is a time from 00:00 to 23:59.
- An example of a typical timetable to avoid link saturation during
- daytime working hours could be:
- --bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"
- In this example, the transfer bandwidth will be set to 512kBytes/sec
- at 8am every day. At noon, it will rise to 10MBytes/s, and drop
- back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to
- 30MBytes/s, and at 11pm it will be completely disabled (full speed).
- Anything between 11pm and 8am will remain unlimited.
- An example of timetable with WEEKDAY could be:
- --bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"
- This means that the transfer bandwidth will be set to 512kBytes/sec on
- Monday. It will rise to 10MBytes/s before the end of Friday. At 10:00
- on Saturday it will be set to 1MByte/s. From 20:00 on Sunday it will be
- unlimited.
- Timeslots without a weekday are extended to the whole week. So this
- example:
- --bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"
- Is equal to this:
- --bwlimit "Mon-00:00,512 Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"
- Bandwidth limits only apply to the data transfer. They don't apply to
- the bandwidth of the directory listings etc.
- Note that the units are Bytes/s, not Bits/s. Typically connections are
- measured in Bits/s - to convert divide by 8. For example, let's say you
- have a 10 Mbit/s connection and you wish rclone to use half of it - 5
- Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M
- parameter for rclone.
- On Unix systems (Linux, macOS, …) the bandwidth limiter can be toggled
- by sending a SIGUSR2 signal to rclone. This allows you to remove the
- limit from a long running rclone transfer and to restore it to the
- value specified with --bwlimit quickly when needed. Assuming there
- is only one rclone instance running, you can toggle the limiter like
- this:
- kill -SIGUSR2 $(pidof rclone)
- If you configure rclone with a remote control then you can change the
- bwlimit dynamically:
- rclone rc core/bwlimit rate=1M
- --buffer-size=SIZE
- Use this sized buffer to speed up file transfers. Each --transfer will
- use this much memory for buffering.
- When using mount or cmount each open file descriptor will use this much
- memory for buffering. See the mount documentation for more details.
- Set to 0 to disable the buffering for the minimum memory usage.
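- For example, a larger buffer may help when mounting over a high
- latency link (the size and paths here are illustrative):
- rclone mount remote:path /path/to/mountpoint --buffer-size 64M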
- --checkers=N
- The number of checkers to run in parallel. Checkers do the equality
- checking of files during a sync. For some storage systems (eg S3, Swift,
- Dropbox) this can take a significant amount of time so they are run in
- parallel.
- The default is to run 8 checkers in parallel.
- -c, --checksum
- Normally rclone will look at modification time and size of files to see
- if they are equal. If you set this flag then rclone will check the file
- hash and size to determine if files are equal.
- This is useful when the remote doesn't support setting modified time and
- a more accurate sync is desired than just checking the file size.
- This is very useful when transferring between remotes which store the
- same hash type on the object, eg Drive and Swift. For details of which
- remotes support which hash type see the table in the overview section.
- Eg rclone --checksum sync s3:/bucket swift:/bucket would run much
- quicker than without the --checksum flag.
- When using this flag, rclone won't update mtimes of remote files if they
- are incorrect as it would normally.
- --config=CONFIG_FILE
- Specify the location of the rclone config file.
- Normally the config file is in your home directory as a file called
- .config/rclone/rclone.conf (or .rclone.conf if created with an older
- version). If $XDG_CONFIG_HOME is set it will be at
- $XDG_CONFIG_HOME/rclone/rclone.conf
- If you run rclone -h and look at the help for the --config option you
- will see where the default location is for you.
- Use this flag to override the config location, eg
- rclone --config=".myconfig" .config.
- --contimeout=TIME
- Set the connection timeout. This should be in go time format which looks
- like 5s for 5 seconds, 10m for 10 minutes, or 3h30m.
- The connection timeout is the amount of time rclone will wait for a
- connection to go through to a remote object storage system. It is 1m by
- default.
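- For example, to fail faster when a remote is unreachable (the value
- here is illustrative):
- rclone lsd remote: --contimeout 15s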
- --dedupe-mode MODE
- Mode to run dedupe command in. One of interactive, skip, first, newest,
- oldest, rename. The default is interactive. See the dedupe command for
- more information as to what these options mean.
- --disable FEATURE,FEATURE,...
- This disables a comma separated list of optional features. For example
- to disable server side move and server side copy use:
- --disable move,copy
- The features can be put in any case.
- To see a list of which features can be disabled use:
- --disable help
- See the overview features and optional features to get an idea of which
- feature does what.
- This flag can be useful for debugging and in exceptional circumstances
- (eg Google Drive limiting the total volume of Server Side Copies to
- 100GB/day).
- -n, --dry-run
- Do a trial run with no permanent changes. Use this to see what rclone
- would do without actually doing it. Useful when setting up the sync
- command which deletes files in the destination.
- --ignore-checksum
- Normally rclone will check that the checksums of transferred files
- match, and give an error "corrupted on transfer" if they don't.
- You can use this option to skip that check. You should only use it if
- you have had the "corrupted on transfer" error message and you are sure
- you might want to transfer potentially corrupted data.
- --ignore-existing
- Using this option will make rclone unconditionally skip all files that
- exist on the destination, no matter the content of these files.
- While this isn't a generally recommended option, it can be useful in
- cases where your files change due to encryption. However, it cannot
- correct partial transfers in case a transfer was interrupted.
- --ignore-size
- Normally rclone will look at modification time and size of files to see
- if they are equal. If you set this flag then rclone will check only the
- modification time. If --checksum is set then it only checks the
- checksum.
- It will also cause rclone to skip verifying the sizes are the same after
- transfer.
- This can be useful for transferring files to and from OneDrive which
- occasionally misreports the size of image files (see #399 for more
- info).
- -I, --ignore-times
- Using this option will cause rclone to unconditionally upload all files
- regardless of the state of files on the destination.
- Normally rclone would skip any files that have the same modification
- time and are the same size (or have the same checksum if using
- --checksum).
- --immutable
- Treat source and destination files as immutable and disallow
- modification.
- With this option set, files will be created and deleted as requested,
- but existing files will never be updated. If an existing file does not
- match between the source and destination, rclone will give the error
- Source and destination exist but do not match: immutable file modified.
- Note that only commands which transfer files (e.g. sync, copy, move) are
- affected by this behavior, and only modification is disallowed. Files
- may still be deleted explicitly (e.g. delete, purge) or implicitly (e.g.
- sync, move). Use copy --immutable if it is desired to avoid deletion as
- well as modification.
- This can be useful as an additional layer of protection for immutable or
- append-only data sets (notably backup archives), where modification
- implies corruption and should not be propagated.
- --leave-root
- During rmdirs it will not remove the root directory, even if it's
- empty.
- --log-file=FILE
- Log all of rclone's output to FILE. This is not active by default. This
- can be useful for tracking down problems with syncs in combination with
- the -v flag. See the Logging section for more info.
- Note that if you are using the logrotate program to manage rclone's
- logs, then you should use the copytruncate option as rclone doesn't have
- a signal to rotate logs.
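- For example, a minimal logrotate sketch, assuming the log file is
- /var/log/rclone.log (the path and schedule here are illustrative):
- /var/log/rclone.log {
-     daily
-     rotate 7
-     copytruncate
- }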
- --log-format LIST
- Comma separated list of log format options. date, time, microseconds,
- longfile, shortfile, UTC. The default is "date,time".
- --log-level LEVEL
- This sets the log level for rclone. The default log level is NOTICE.
- DEBUG is equivalent to -vv. It outputs lots of debug info - useful for
- bug reports and really finding out what rclone is doing.
- INFO is equivalent to -v. It outputs information about each transfer and
- prints stats once a minute by default.
- NOTICE is the default log level if no logging flags are supplied. It
- outputs very little when things are working normally. It outputs
- warnings and significant events.
- ERROR is equivalent to -q. It only outputs error messages.
- --low-level-retries NUMBER
- This controls the number of low level retries rclone does.
- A low level retry is used to retry a failing operation - typically one
- HTTP request. This might be uploading a chunk of a big file for example.
- You will see low level retries in the log with the -v flag.
- This shouldn't need to be changed from the default in normal operations.
- However, if you get a lot of low level retries you may wish to reduce
- the value so rclone moves on to a high level retry (see the --retries
- flag) quicker.
- Disable low level retries with --low-level-retries 1.
- --max-backlog=N
- This is the maximum allowable backlog of files in a sync/copy/move
- queued for being checked or transferred.
- This can be set arbitrarily large. It will only use memory when the
- queue is in use. Note that it will use in the order of N kB of memory
- when the backlog is in use.
- Setting this large allows rclone to calculate how many files are pending
- more accurately and give a more accurate estimated finish time.
- Setting this small will make rclone more synchronous to the listings of
- the remote which may be desirable.
- --max-delete=N
- This tells rclone not to delete more than N files. If that limit is
- exceeded then a fatal error will be generated and rclone will stop the
- operation in progress.
- --max-depth=N
- This modifies the recursion depth for all the commands except purge.
- So if you do rclone --max-depth 1 ls remote:path you will see only the
- files in the top level directory. Using --max-depth 2 means you will see
- all the files in first two directory levels and so on.
- For historical reasons the lsd command defaults to using a --max-depth
- of 1 - you can override this with the command line flag.
- You can use this command to disable recursion (with --max-depth 1).
- Note that if you use this with sync and --delete-excluded the files not
- recursed through are considered excluded and will be deleted on the
- destination. Test first with --dry-run if you are not sure what will
- happen.
- --max-transfer=SIZE
- Rclone will stop transferring when it has reached the size specified.
- Defaults to off.
- When the limit is reached all transfers will stop immediately.
- Rclone will exit with exit code 8 if the transfer limit is reached.
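- For example, a script sketch (the paths and limit here are
- illustrative) which uses the exit code to detect that the limit was
- reached:
- rclone copy /path/to/files remote:backup --max-transfer 10G
- if [ $? -eq 8 ]; then
-     echo "transfer limit reached - run again later to continue"
- fi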
- --modify-window=TIME
- When checking whether a file has been modified, this is the maximum
- allowed time difference that a file can have and still be considered
- equivalent.
- The default is 1ns unless this is overridden by a remote. For example OS
- X only stores modification times to the nearest second so if you are
- reading and writing to an OS X filing system this will be 1s by default.
- This command line flag allows you to override that computed default.
- --no-gzip-encoding
- Don't set Accept-Encoding: gzip. This means that rclone won't ask the
- server for compressed files automatically. Useful if you've set the
- server to return files with Content-Encoding: gzip but you uploaded
- compressed files.
- There is no need to set this in normal operation, and doing so will
- decrease the network transfer efficiency of rclone.
- --no-update-modtime
- When using this flag, rclone won't update modification times of remote
- files if they are incorrect as it would normally.
- This can be used if the remote is being synced with another tool also
- (eg the Google Drive client).
- -P, --progress
- This flag makes rclone update the stats in a static block in the
- terminal providing a realtime overview of the transfer.
- Any log messages will scroll above the static block. Log messages will
- push the static block down to the bottom of the terminal where it will
- stay.
- Normally this is updated every 500mS but this period can be overridden
- with the --stats flag.
- This can be used with the --stats-one-line flag for a simpler display.
- Note: On Windows until this bug is fixed all non-ASCII characters will
- be replaced with . when --progress is in use.
- -q, --quiet
- Normally rclone outputs stats and a completion message. If you set this
- flag it will make as little output as possible.
- --retries int
- Retry the entire sync if it fails this many times (default 3).
- Some remotes can be unreliable and a few retries help pick up the files
- which didn't get transferred because of errors.
- Disable retries with --retries 1.
- --retries-sleep=TIME
- This sets the interval between each retry specified by --retries.
- The default is 0. Use 0 to disable.
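- For example, to retry a flaky sync up to 5 times, waiting 30 seconds
- between attempts (the values here are illustrative):
- rclone sync /path/to/files remote:backup --retries 5 --retries-sleep 30s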
- --size-only
- Normally rclone will look at modification time and size of files to see
- if they are equal. If you set this flag then rclone will check only the
- size.
- This can be useful when transferring files from Dropbox which have been
- modified by the desktop sync client which doesn't set checksums or
- modification times in the same way as rclone.
- --stats=TIME
- Commands which transfer data (sync, copy, copyto, move, moveto) will
- print data transfer stats at regular intervals to show their progress.
- This sets the interval.
- The default is 1m. Use 0 to disable.
- If you set the stats interval then all commands can show stats. This can
- be useful when running other commands, check or mount for example.
- Stats are logged at INFO level by default which means they won't show at
- default log level NOTICE. Use --stats-log-level NOTICE or -v to make
- them show. See the Logging section for more info on log levels.
- Note that on macOS you can send a SIGINFO (which is normally ctrl-T in
- the terminal) to make the stats print immediately.
- --stats-file-name-length integer
- By default, the --stats output will truncate file names and paths longer
- than 40 characters. This is equivalent to providing
- --stats-file-name-length 40. Use --stats-file-name-length 0 to disable
- any truncation of file names printed by stats.
- --stats-log-level string
- Log level to show --stats output at. This can be DEBUG, INFO, NOTICE, or
- ERROR. The default is INFO. This means at the default level of logging
- which is NOTICE the stats won't show - if you want them to then use
- --stats-log-level NOTICE. See the Logging section for more info on log
- levels.
- --stats-one-line
- When this is specified, rclone condenses the stats into a single line
- showing the most important stats only.
- --stats-unit=bits|bytes
- By default, data transfer rates will be printed in bytes/second.
- This option allows the data rate to be printed in bits/second.
- Data transfer volume will still be reported in bytes.
- The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals
- 1,048,576 bits/s and not 1,000,000 bits/s.
- The default is bytes.
- --suffix=SUFFIX
- This is for use with --backup-dir only. If this isn't set then
- --backup-dir will move files with their original name. If it is set then
- the files will have SUFFIX added on to them.
- See --backup-dir for more info.
- --syslog
- On capable OSes (not Windows or Plan9) send all log output to syslog.
- This can be useful for running rclone in a script or rclone mount.
- --syslog-facility string
- If using --syslog this sets the syslog facility (eg KERN, USER). See
- man syslog for a list of possible facilities. The default facility is
- DAEMON.
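- For example, to log to the LOCAL0 facility so that rclone's messages
- can be routed separately by the syslog daemon (the facility choice
- here is illustrative):
- rclone sync /path/to/files remote:backup --syslog --syslog-facility LOCAL0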
- --tpslimit float
- Limit HTTP transactions per second to this. Default is 0 which is used
- to mean unlimited transactions per second.
- For example to limit rclone to 10 HTTP transactions per second use
- --tpslimit 10, or to 1 transaction every 2 seconds use --tpslimit 0.5.
- Use this when the number of transactions per second from rclone is
- causing a problem with the cloud storage provider (eg getting you banned
- or rate limited).
- This can be very useful for rclone mount to control the behaviour of
- applications using it.
- See also --tpslimit-burst.
- --tpslimit-burst int
- Max burst of transactions for --tpslimit. (default 1)
- Normally --tpslimit will do exactly the number of transactions per
- second specified. However if you supply --tpslimit-burst then rclone
- can save up some transactions from when it was idle giving a burst of
- up to the parameter supplied.
- For example if you provide --tpslimit-burst 10 then if rclone has been
- idle for more than 10*--tpslimit then it can do 10 transactions very
- quickly before they are limited again.
- This may be used to increase performance of --tpslimit without changing
- the long term average number of transactions per second.
- --track-renames
- By default, rclone doesn't keep track of renamed files, so if you rename
- a file locally then sync it to a remote, rclone will delete the old file
- on the remote and upload a new copy.
- If you use this flag, and the remote supports server side copy or server
- side move, and the source and destination have a compatible hash, then
- this will track renames during sync operations and perform renaming
- server-side.
- Files will be matched by size and hash - if both match then a rename
- will be considered.
- If the destination does not support server-side copy or move, rclone
- will fall back to the default behaviour and log an error level message
- to the console. Note: Encrypted destinations are not supported by
- --track-renames.
- Note that --track-renames uses extra memory to keep track of all the
- rename candidates.
- Note also that --track-renames is incompatible with --delete-before and
- will select --delete-after instead of --delete-during.
- --delete-(before,during,after)
- This option allows you to specify when files on your destination are
- deleted when you sync folders.
- Specifying the value --delete-before will delete all files present on
- the destination, but not on the source _before_ starting the transfer of
- any new or updated files. This uses two passes through the file systems,
- one for the deletions and one for the copies.
- Specifying --delete-during will delete files while checking and
- uploading files. This is the fastest option and uses the least memory.
- Specifying --delete-after (the default value) will delay deletion of
- files until all new/updated files have been successfully transferred.
- The files to be deleted are collected in the copy pass then deleted
- after the copy pass has completed successfully. The files to be deleted
- are held in memory so this mode may use more memory. This is the safest
- mode as it will only delete files if there have been no errors
- subsequent to that. If there have been errors before the deletions start
- then you will get the message
- not deleting files as there were IO errors.
- --fast-list
- When doing anything which involves a directory listing (eg sync, copy,
- ls - in fact nearly every command), rclone normally lists a directory
- and processes it before using more directory lists to process any
- subdirectories. This can be parallelised and works very quickly using
- the least amount of memory.
- However, some remotes have a way of listing all files beneath a
- directory in one (or a small number) of transactions. These tend to be
- the bucket based remotes (eg S3, B2, GCS, Swift, Hubic).
- If you use the --fast-list flag then rclone will use this method for
- listing directories. This will have the following consequences for the
- listing:
- - It WILL use fewer transactions (important if you pay for them)
- - It WILL use more memory. Rclone has to load the whole listing into
- memory.
- - It _may_ be faster because it uses fewer transactions
- - It _may_ be slower because it can't be parallelized
- rclone should always give identical results with and without
- --fast-list.
- If you pay for transactions and can fit your entire sync listing into
- memory then --fast-list is recommended. If you have a very big sync to
- do then don't use --fast-list otherwise you will run out of memory.
- If you use --fast-list on a remote which doesn't support it, then rclone
- will just ignore it.
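- For example, a sync from a bucket based remote whose listing fits in
- memory (the bucket name here is illustrative):
- rclone sync --fast-list s3:mybucket/path /path/to/local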
- --timeout=TIME
- This sets the IO idle timeout. If a transfer has started but then
- becomes idle for this long it is considered broken and disconnected.
- The default is 5m. Set to 0 to disable.
- --transfers=N
- The number of file transfers to run in parallel. It can sometimes be
- useful to set this to a smaller number if the remote is giving a lot of
- timeouts or bigger if you have lots of bandwidth and a fast remote.
- The default is to run 4 file transfers in parallel.
- -u, --update
- This forces rclone to skip any files which exist on the destination and
- have a modified time that is newer than the source file.
- If an existing destination file has a modification time equal (within
- the computed modify window precision) to the source file's, it will be
- updated if the sizes are different.
- On remotes which don't support mod time directly the time checked will
- be the uploaded time. This means that if uploading to one of these
- remotes, rclone will skip any files which exist on the destination and
- have an uploaded time that is newer than the modification time of the
- source file.
- This can be useful when transferring to a remote which doesn't support
- mod times directly as it is more accurate than a --size-only check and
- faster than using --checksum.
- --use-server-modtime
- Some object-store backends (e.g. Swift, S3) do not preserve file
- modification times (modtime). On these backends, rclone stores the
- original modtime as additional metadata on the object. By default it
- will make an API call to retrieve the metadata when the modtime is
- needed by an operation.
- Use this flag to disable the extra API call and rely instead on the
- server's modified time. In cases such as a local to remote sync, knowing
- the local file is newer than the time it was last uploaded to the remote
- is sufficient. In those cases, this flag can speed up the process and
- reduce the number of API calls necessary.
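- For example, combining this flag with --update for a faster local to
- remote sync (the remote name here is illustrative):
- rclone sync --update --use-server-modtime /path/to/files s3:mybucket/path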
- -v, -vv, --verbose
- With -v rclone will tell you about each file that is transferred and a
- small number of significant events.
- With -vv rclone will become very verbose telling you about every file it
- considers and transfers. Please send bug reports with a log with this
- setting.
- -V, --version
- Prints the version number
- Configuration Encryption
- Your configuration file contains information for logging in to your
- cloud services. This means that you should keep your .rclone.conf file
- in a secure location.
- If you are in an environment where that isn't possible, you can add a
- password to your configuration. This means that you will have to enter
- the password every time you start rclone.
- To add a password to your rclone configuration, execute rclone config.
- >rclone config
- Current remotes:
- e) Edit existing remote
- n) New remote
- d) Delete remote
- s) Set configuration password
- q) Quit config
- e/n/d/s/q>
- Go into s, Set configuration password:
- e/n/d/s/q> s
- Your configuration is not encrypted.
- If you add a password, you will protect your login information to cloud services.
- a) Add Password
- q) Quit to main menu
- a/q> a
- Enter NEW configuration password:
- password:
- Confirm NEW password:
- password:
- Password set
- Your configuration is encrypted.
- c) Change Password
- u) Unencrypt configuration
- q) Quit to main menu
- c/u/q>
- Your configuration is now encrypted, and every time you start rclone you
- will now be asked for the password. In the same menu, you can change the
- password or completely remove encryption from your configuration.
- There is no way to recover the configuration if you lose your password.
- rclone uses nacl secretbox which in turn uses XSalsa20 and Poly1305 to
- encrypt and authenticate your configuration with secret-key
- cryptography. The password is SHA-256 hashed, which produces the key for
- secretbox. The hashed password is not stored.
- While this provides very good security, we do not recommend storing
- your encrypted rclone configuration in public if it contains sensitive
- information, unless perhaps you use a very strong password.
- If it is safe in your environment, you can set the RCLONE_CONFIG_PASS
- environment variable to contain your password, in which case it will be
- used for decrypting the configuration.
- You can set this for a session from a script. For unix like systems save
- this to a file called set-rclone-password:
- #!/bin/echo Source this file don't run it
- read -s RCLONE_CONFIG_PASS
- export RCLONE_CONFIG_PASS
- Then source the file when you want to use it. From the shell you would
- do source set-rclone-password. It will then ask you for the password and
- set it in the environment variable.
- If you are running rclone inside a script, you might want to disable
- password prompts. To do that, pass the parameter --ask-password=false to
- rclone. This will make rclone fail instead of asking for a password if
- RCLONE_CONFIG_PASS doesn't contain a valid password.
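- For example, a non-interactive sketch suitable for a cron job,
- assuming the password comes from your own secrets mechanism (the
- assignment here is a placeholder):
- export RCLONE_CONFIG_PASS="your-password-here"
- rclone sync /path/to/files remote:backup --ask-password=false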
- Developer options
- These options are useful when developing or debugging rclone. There are
- also some more remote specific options which aren't documented here
- which are used for testing. These start with the remote name eg
- --drive-test-option - see the docs for the remote in question.
- --cpuprofile=FILE
- Write CPU profile to file. This can be analysed with go tool pprof.
- --dump flag,flag,flag
- The --dump flag takes a comma separated list of flags to dump info
- about. These are:
- --dump headers
- Dump HTTP headers with Authorization: lines removed. May still contain
- sensitive info. Can be very verbose. Useful for debugging only.
- Use --dump auth if you do want the Authorization: headers.
- --dump bodies
- Dump HTTP headers and bodies - may contain sensitive info. Can be very
- verbose. Useful for debugging only.
- Note that the bodies are buffered in memory so don't use this for
- enormous files.
- --dump requests
- Like --dump bodies but dumps the request bodies and the response
- headers. Useful for debugging download problems.
- --dump responses
- Like --dump bodies but dumps the response bodies and the request
- headers. Useful for debugging upload problems.
- --dump auth
- Dump HTTP headers - will contain sensitive info such as Authorization:
- headers - use --dump headers to dump without Authorization: headers. Can
- be very verbose. Useful for debugging only.
- --dump filters
- Dump the filters to the output. Useful to see exactly what include and
- exclude options are filtering on.
- --dump goroutines
- This dumps a list of the running go-routines at the end of the command
- to standard output.
- --dump openfiles
- This dumps a list of the open files at the end of the command. It uses
- the lsof command to do that so you'll need that installed to use it.
- --memprofile=FILE
- Write memory profile to file. This can be analysed with go tool pprof.
- --no-check-certificate=true/false
- --no-check-certificate controls whether a client verifies the server's
- certificate chain and host name. If --no-check-certificate is true, TLS
- accepts any certificate presented by the server and any host name in
- that certificate. In this mode, TLS is susceptible to man-in-the-middle
- attacks.
- This option defaults to false.
- THIS SHOULD BE USED ONLY FOR TESTING.
- Filtering
- For the filtering options
- - --delete-excluded
- - --filter
- - --filter-from
- - --exclude
- - --exclude-from
- - --include
- - --include-from
- - --files-from
- - --min-size
- - --max-size
- - --min-age
- - --max-age
- - --dump filters
- See the filtering section.
- Remote control
- For the remote control options and for instructions on how to remote
- control rclone
- - --rc
- - and anything starting with --rc-
- See the remote control section.
- Logging
- rclone has 4 levels of logging, ERROR, NOTICE, INFO and DEBUG.
- By default, rclone logs to standard error. This means you can redirect
- standard error and still see the normal output of rclone commands (eg
- rclone ls).
- By default, rclone will produce Error and Notice level messages.
- If you use the -q flag, rclone will only produce Error messages.
- If you use the -v flag, rclone will produce Error, Notice and Info
- messages.
- If you use the -vv flag, rclone will produce Error, Notice, Info and
- Debug messages.
- You can also control the log levels with the --log-level flag.
- If you use the --log-file=FILE option, rclone will redirect Error, Info
- and Debug messages along with standard error to FILE.
- If you use the --syslog flag then rclone will log to syslog and the
- --syslog-facility flag controls which facility it uses.
- Rclone prefixes all log messages with their level in capitals, eg INFO
- which makes it easy to grep the log file for different kinds of
- information.
- Exit Code
- If any errors occur during the command execution, rclone will exit with
- a non-zero exit code. This allows scripts to detect when rclone
- operations have failed.
- During the startup phase, rclone will exit immediately if an error is
- detected in the configuration. There will always be a log message
- immediately before exiting.
- When rclone is running it will accumulate errors as it goes along, and
- only exit with a non-zero exit code if (after retries) there were still
- failed transfers. For every error counted there will be a high priority
- log message (visible with -q) showing the message and which file caused
- the problem. A high priority message is also shown when starting a retry
- so the user can see that any previous error messages may not be valid
- after the retry. If rclone has done a retry it will log a high priority
- message if the retry was successful.
- List of exit codes
- - 0 - success
- - 1 - Syntax or usage error
- - 2 - Error not otherwise categorised
- - 3 - Directory not found
- - 4 - File not found
- - 5 - Temporary error (one that more retries might fix) (Retry errors)
- - 6 - Less serious errors (like 461 errors from dropbox) (NoRetry
- errors)
- - 7 - Fatal error (one that more retries won't fix, like account
- suspended) (Fatal errors)
- - 8 - Transfer exceeded - limit set by --max-transfer reached
- Environment Variables
- Rclone can be configured entirely using environment variables. These can
- be used to set defaults for options or config file entries.
- Options
- Every option in rclone can have its default set by environment variable.
- To find the name of the environment variable, first, take the long
- option name, strip the leading --, change - to _, make upper case and
- prepend RCLONE_.
- For example, to always set --stats 5s, set the environment variable
- RCLONE_STATS=5s. If you set stats on the command line this will override
- the environment variable setting.
- Or to always use the trash in drive --drive-use-trash, set
- RCLONE_DRIVE_USE_TRASH=true.
- The same parser is used for the options and the environment variables so
- they take exactly the same form.
- Config file
- You can set defaults for values in the config file on an individual
- remote basis. If you want to use this feature, you will need to discover
- the name of the config items that you want. The easiest way is to run
- through rclone config by hand, then look in the config file to see what
- the values are (the config file can be found by looking at the help for
- --config in rclone help).
- To find the name of the environment variable that you need to set,
- take RCLONE_CONFIG_ + name of remote + _ + name of config file option
- and make it all uppercase.
- For example, to configure an S3 remote named mys3: without a config file
- (using unix ways of setting environment variables):
- $ export RCLONE_CONFIG_MYS3_TYPE=s3
- $ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
- $ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
- $ rclone lsd MYS3:
- -1 2016-09-21 12:54:21 -1 my-bucket
- $ rclone listremotes | grep mys3
- mys3:
- Note that if you want to create a remote using environment variables you
- must create the ..._TYPE variable as above.
- Other environment variables
- RCLONE_CONFIG_PASS set to contain your config file password (see
- Configuration Encryption section)
- - HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions
- thereof).
- - HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
- The environment values may be either a complete URL or a
- "host[:port]", in which case the "http" scheme is assumed.
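- For example, to route rclone's traffic through a proxy (the proxy URL
- here is illustrative):
- export HTTPS_PROXY=http://proxy.example.com:3128
- rclone lsd remote: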
- CONFIGURING RCLONE ON A REMOTE / HEADLESS MACHINE
- Some of the configurations (those involving oauth2) require an Internet
- connected web browser.
- If you are trying to set rclone up on a remote or headless box with no
- browser available on it (eg a NAS or a server in a datacenter) then you
- will need to use an alternative means of configuration. There are two
- ways of doing it, described below.
- Configuring using rclone authorize
- On the headless box
- ...
- Remote config
- Use auto config?
- * Say Y if not sure
- * Say N if you are working on a remote or headless machine
- y) Yes
- n) No
- y/n> n
- For this to work, you will need rclone available on a machine that has a web browser available.
- Execute the following on your machine:
- rclone authorize "amazon cloud drive"
- Then paste the result below:
- result>
- Then on your main desktop machine
- rclone authorize "amazon cloud drive"
- If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
- Log in and authorize rclone for access
- Waiting for code...
- Got code
- Paste the following into your remote machine --->
- SECRET_TOKEN
- <---End paste
- Then back to the headless box, paste in the code
- result> SECRET_TOKEN
- --------------------
- [acd12]
- client_id =
- client_secret =
- token = SECRET_TOKEN
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d>
- Configuring by copying the config file
- Rclone stores all of its config in a single configuration file. This can
- easily be copied to configure a remote rclone.
- So first configure rclone on your desktop machine
- rclone config
- to set up the config file.
- Find the config file by running rclone -h and looking for the help for
- the --config option
- $ rclone -h
- [snip]
- --config="/home/user/.rclone.conf": Config file.
- [snip]
- Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and
- place it in the correct place (use rclone -h on the remote box to find
- out where).
- FILTERING, INCLUDES AND EXCLUDES
- Rclone has a sophisticated set of include and exclude rules. Some of
- these are based on patterns and some on other things like file size.
- The filters are applied for the copy, sync, move, ls, lsl, md5sum,
- sha1sum, size, delete and check operations. Note that purge does not
- obey the filters.
- Each path as it passes through rclone is matched against the include and
- exclude rules like --include, --exclude, --include-from, --exclude-from,
- --filter, or --filter-from. The simplest way to try them out is using
- the ls command, or --dry-run together with -v.
- Patterns
- The patterns used to match files for inclusion or exclusion are based on
- "file globs" as used by the unix shell.
- If the pattern starts with a / then it only matches at the top level of
- the directory tree, RELATIVE TO THE ROOT OF THE REMOTE (not necessarily
- the root of the local drive). If it doesn't start with / then it is
- matched starting at the END OF THE PATH, but it will only match a
- complete path element:
- file.jpg - matches "file.jpg"
- - matches "directory/file.jpg"
- - doesn't match "afile.jpg"
- - doesn't match "directory/afile.jpg"
- /file.jpg - matches "file.jpg" in the root directory of the remote
- - doesn't match "afile.jpg"
- - doesn't match "directory/file.jpg"
- IMPORTANT Note that you must use / in patterns and not \ even if running
- on Windows.
- A * matches anything but not a /.
- *.jpg - matches "file.jpg"
- - matches "directory/file.jpg"
- - doesn't match "file.jpg/something"
- Use ** to match anything, including slashes (/).
- dir/** - matches "dir/file.jpg"
- - matches "dir/dir1/dir2/file.jpg"
- - doesn't match "directory/file.jpg"
- - doesn't match "adir/file.jpg"
- A ? matches any character except a slash /.
- l?ss - matches "less"
- - matches "lass"
- - doesn't match "floss"
- A [ and ] together make a character class, such as [a-z] or [aeiou] or
- [[:alpha:]]. See the go regexp docs for more info on these.
- h[ae]llo - matches "hello"
- - matches "hallo"
- - doesn't match "hullo"
- A { and } define a choice between elements. It should contain a comma
- separated list of patterns, any of which might match. These patterns can
- contain wildcards.
- {one,two}_potato - matches "one_potato"
- - matches "two_potato"
- - doesn't match "three_potato"
- - doesn't match "_potato"
- Special characters can be escaped with a \ before them.
- \*.jpg - matches "*.jpg"
- \\.jpg - matches "\.jpg"
- \[one\].jpg - matches "[one].jpg"
- Note also that rclone filter globs can only be used in one of the filter
- command line flags, not in the specification of the remote, so
- rclone copy "remote:dir*.jpg" /path/to/dir won't work - what is required
- is rclone --include "*.jpg" copy remote:dir /path/to/dir
- Directories
- Rclone keeps track of directories that could match any file patterns.
- Eg if you add the include rule
- /a/*.jpg
- Rclone will synthesize the directory include rule
- /a/
- If you put any rules which end in / then it will only match directories.
- Directory matches are ONLY used to optimise directory access patterns -
- you must still match the files that you want to match. Directory matches
- won't optimise anything on bucket based remotes (eg s3, swift, google
- compute storage, b2) which don't have a concept of directory.
- Differences between rsync and rclone patterns
- Rclone implements bash style {a,b,c} glob matching which rsync doesn't.
- Rclone always does a wildcard match so \ must always escape a \.
- How the rules are used
- Rclone maintains a combined list of include rules and exclude rules.
- Each file is matched in order, starting from the top, against the rule
- in the list until it finds a match. The file is then included or
- excluded according to the rule type.
- If the matcher fails to find a match after testing against all the
- entries in the list then the path is included.
- For example given the following rules, + being include, - being exclude,
- - secret*.jpg
- + *.jpg
- + *.png
- + file2.avi
- - *
- This would include
- - file1.jpg
- - file3.png
- - file2.avi
- This would exclude
- secret17.jpg
- any files which aren't *.jpg, *.png or file2.avi
- A similar process is done on directory entries before recursing into
- them. This only works on remotes which have a concept of directory (Eg
- local, google drive, onedrive, amazon drive) and not on bucket based
- remotes (eg s3, swift, google compute storage, b2).
- Adding filtering rules
- Filtering rules are added with the following command line flags.
- Repeating options
- You can repeat the following options to add more than one rule of that
- type.
- - --include
- - --include-from
- - --exclude
- - --exclude-from
- - --filter
- - --filter-from
- IMPORTANT You should not use --include* together with --exclude*. It may
- produce different results than you expected. In that case try to use:
- --filter*.
- Note that all the options of the same type are processed together in the
- order above, regardless of what order they were placed on the command
- line.
- So all --include options are processed first in the order they appeared
- on the command line, then all --include-from options etc.
- To mix up the order includes and excludes, the --filter flag can be
- used.
- --exclude - Exclude files matching pattern
- Add a single exclude rule with --exclude.
- This flag can be repeated. See above for the order the flags are
- processed in.
- Eg --exclude *.bak to exclude all bak files from the sync.
- --exclude-from - Read exclude patterns from file
- Add exclude rules from a file.
- This flag can be repeated. See above for the order the flags are
- processed in.
- Prepare a file like this exclude-file.txt
- # a sample exclude rule file
- *.bak
- file2.jpg
- Then use as --exclude-from exclude-file.txt. This will sync all files
- except those ending in bak and file2.jpg.
- This is useful if you have a lot of rules.
- --include - Include files matching pattern
- Add a single include rule with --include.
- This flag can be repeated. See above for the order the flags are
- processed in.
- Eg --include *.{png,jpg} to include all png and jpg files in the backup
- and no others.
- This adds an implicit --exclude * at the very end of the filter list.
- This means you can mix --include and --include-from with the other
- filters (eg --exclude) but you must include all the files you want in
- the include statement. If this doesn't provide enough flexibility then
- you must use --filter-from.
- --include-from - Read include patterns from file
- Add include rules from a file.
- This flag can be repeated. See above for the order the flags are
- processed in.
- Prepare a file like this include-file.txt
- # a sample include rule file
- *.jpg
- *.png
- file2.avi
- Then use as --include-from include-file.txt. This will sync all jpg, png
- files and file2.avi.
- This is useful if you have a lot of rules.
- This adds an implicit --exclude * at the very end of the filter list.
- This means you can mix --include and --include-from with the other
- filters (eg --exclude) but you must include all the files you want in
- the include statement. If this doesn't provide enough flexibility then
- you must use --filter-from.
- --filter - Add a file-filtering rule
- This can be used to add a single include or exclude rule. Include rules
- start with + and exclude rules start with -. A special rule called ! can
- be used to clear the existing rules.
- This flag can be repeated. See above for the order the flags are
- processed in.
- Eg --filter "- *.bak" to exclude all bak files from the sync.
- --filter-from - Read filtering patterns from a file
- Add include/exclude rules from a file.
- This flag can be repeated. See above for the order the flags are
- processed in.
- Prepare a file like this filter-file.txt
- # a sample filter rule file
- - secret*.jpg
- + *.jpg
- + *.png
- + file2.avi
- - /dir/Trash/**
- + /dir/**
- # exclude everything else
- - *
- Then use as --filter-from filter-file.txt. The rules are processed in
- the order that they are defined.
- This example will include all jpg and png files, exclude any files
- matching secret*.jpg and include file2.avi. It will also include
- everything in the directory dir at the root of the sync, except
- dir/Trash which it will exclude. Everything else will be excluded from
- the sync.
- --files-from - Read list of source-file names
- This reads a list of file names from the file passed in and ONLY these
- files are transferred. The FILTERING RULES ARE IGNORED completely if you
- use this option.
- This option can be repeated to read from more than one file. These are
- read in the order that they are placed on the command line.
- Paths within the --files-from file will be interpreted as starting with
- the root specified in the command. Leading / characters are ignored.
- For example, suppose you had files-from.txt with this content:
- # comment
- file1.jpg
- subdir/file2.jpg
- You could then use it like this:
- rclone copy --files-from files-from.txt /home/me/pics remote:pics
- This will transfer these files only (if they exist)
- /home/me/pics/file1.jpg → remote:pics/file1.jpg
- /home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg
- To take a more complicated example, let's say you had a few files you
- want to back up regularly with these absolute paths:
- /home/user1/important
- /home/user1/dir/file
- /home/user2/stuff
- To copy these you'd find a common subdirectory - in this case /home and
- put the remaining files in files-from.txt with or without leading /, eg
- user1/important
- user1/dir/file
- user2/stuff
- You could then copy these to a remote like this
- rclone copy --files-from files-from.txt /home remote:backup
- The 3 files will arrive in remote:backup with the paths as in the
- files-from.txt like this:
- /home/user1/important → remote:backup/user1/important
- /home/user1/dir/file → remote:backup/user1/dir/file
- /home/user2/stuff → remote:backup/user2/stuff
- You could of course choose / as the root too in which case your
- files-from.txt might look like this.
- /home/user1/important
- /home/user1/dir/file
- /home/user2/stuff
- And you would transfer it like this
- rclone copy --files-from files-from.txt / remote:backup
- In this case there will be an extra home directory on the remote:
- /home/user1/important → remote:backup/home/user1/important
- /home/user1/dir/file → remote:backup/home/user1/dir/file
- /home/user2/stuff → remote:backup/home/user2/stuff
- --min-size - Don't transfer any file smaller than this
- This option controls the minimum size file which will be transferred.
- This defaults to kBytes but a suffix of k, M, or G can be used.
- For example --min-size 50k means no files smaller than 50kByte will be
- transferred.
- --max-size - Don't transfer any file larger than this
- This option controls the maximum size file which will be transferred.
- This defaults to kBytes but a suffix of k, M, or G can be used.
- For example --max-size 1G means no files larger than 1GByte will be
- transferred.
- --max-age - Don't transfer any file older than this
- This option controls the maximum age of files to transfer. Give in
- seconds or with a suffix of:
- - ms - Milliseconds
- - s - Seconds
- - m - Minutes
- - h - Hours
- - d - Days
- - w - Weeks
- - M - Months
- - y - Years
- For example --max-age 2d means no files older than 2 days will be
- transferred.
- --min-age - Don't transfer any file younger than this
- This option controls the minimum age of files to transfer. Give in
- seconds or with a suffix (see --max-age for list of suffixes)
- For example --min-age 2d means no files younger than 2 days will be
- transferred.
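- The two flags can be combined to select a window of file ages. For
- example (the values and paths here are illustrative), to copy only
- files last modified between 1 day and 1 week ago:
- rclone copy --min-age 1d --max-age 1w /path/to/files remote:backup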
- --delete-excluded - Delete files on dest excluded from sync
- IMPORTANT this flag is dangerous - use with --dry-run and -v first.
- When doing rclone sync this will delete any files which are excluded
- from the sync on the destination.
- If for example you did a sync from A to B without the --min-size 50k
- flag
- rclone sync A: B:
- Then you repeated it like this with the --delete-excluded
- rclone --min-size 50k --delete-excluded sync A: B:
- This would delete all files on B which are less than 50 kBytes as these
- are now excluded from the sync.
- Always test first with --dry-run and -v before using this flag.
- --dump filters - dump the filters to the output
- This dumps the defined filters to the output as regular expressions.
- Useful for debugging.
- Quoting shell metacharacters
- The examples above may not work verbatim in your shell as they have
- shell metacharacters in them (eg *), and may require quoting.
- Eg linux, OSX
- - --include \*.jpg
- - --include '*.jpg'
- - --include='*.jpg'
- In Windows the expansion is done by the command not the shell so this
- should work fine
- - --include *.jpg
- Exclude directory based on a file
- It is possible to exclude a directory based on a file, which is present
- in this directory. Filename should be specified using the
- --exclude-if-present flag. This flag has a priority over the other
- filtering flags.
- Imagine, you have the following directory structure:
- dir1/file1
- dir1/dir2/file2
- dir1/dir2/dir3/file3
- dir1/dir2/dir3/.ignore
- You can exclude dir3 from sync by running the following command:
- rclone sync --exclude-if-present .ignore dir1 remote:backup
- Currently only one filename is supported, i.e. --exclude-if-present
- should not be used multiple times.
- REMOTE CONTROLLING RCLONE
- If rclone is run with the --rc flag then it starts an http server which
- can be used to remote control rclone.
- NB this is experimental and everything here is subject to change!
- Supported parameters
- --rc
- Flag to start the http server listen on remote requests
- --rc-addr=IP
- IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-cert=KEY
- SSL PEM key (concatenation of certificate and CA certificate)
- --rc-client-ca=PATH
- Client certificate authority to verify clients with
- --rc-htpasswd=PATH
- htpasswd file - if not provided no authentication is done
- --rc-key=PATH
- SSL PEM Private key
- --rc-max-header-bytes=VALUE
- Maximum size of request header (default 4096)
- --rc-user=VALUE
- User name for authentication.
- --rc-pass=VALUE
- Password for authentication.
- --rc-realm=VALUE
- Realm for authentication (default "rclone")
- --rc-server-read-timeout=DURATION
- Timeout for server reading data (default 1h0m0s)
- --rc-server-write-timeout=DURATION
- Timeout for server writing data (default 1h0m0s)
- Accessing the remote control via the rclone rc command
- Rclone itself implements the remote control protocol in its rclone rc
- command.
- You can use it like this
- $ rclone rc rc/noop param1=one param2=two
- {
- "param1": "one",
- "param2": "two"
- }
- Run rclone rc on its own to see the help for the installed remote
- control commands.
- Supported commands
- cache/expire: Purge a remote from cache
- Purge a remote from the cache backend. Supports either a directory or a
- file. Params: - remote = path to remote (required) - withData =
- true/false to delete cached data (chunks) as well (optional)
- Eg
- rclone rc cache/expire remote=path/to/sub/folder/
- rclone rc cache/expire remote=/ withData=true
- cache/fetch: Fetch file chunks
- Ensure the specified file chunks are cached on disk.
- The chunks= parameter specifies the file chunks to check. It takes a
- comma separated list of array slice indices. The slice indices are
- similar to Python slices: start[:end]
- start is the 0 based chunk number from the beginning of the file to
- fetch inclusive. end is the 0 based chunk number from the beginning of
- the file to fetch exclusive. Both values can be negative, in which case
- they count from the back of the file. The value "-5:" represents the
- last 5 chunks of a file.
- Some valid examples are:
- ":5,-5:" -> the first and last five chunks
- "0,-2" -> the first and the second last chunk
- "0:10" -> the first ten chunks
- Any parameter with a key that starts with "file" can be used to specify
- files to fetch, eg
- rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
- File names will automatically be encrypted when a crypt remote is used
- on top of the cache.
- cache/stats: Get cache stats
- Show statistics for the cache remote.
- core/bwlimit: Set the bandwidth limit.
- This sets the bandwidth limit to that passed in.
- Eg
- rclone rc core/bwlimit rate=1M
- rclone rc core/bwlimit rate=off
- The format of the parameter is exactly the same as passed to --bwlimit
- except only one bandwidth may be specified.
- core/gc: Runs a garbage collection.
- This tells the go runtime to do a garbage collection run. It isn't
- necessary to call this normally, but it can be useful for debugging
- memory problems.
- core/memstats: Returns the memory statistics
- This returns the memory statistics of the running program. What the
- values mean are explained in the go docs:
- https://golang.org/pkg/runtime/#MemStats
- The most interesting values for most people are:
- - HeapAlloc: This is the amount of memory rclone is actually using
- - HeapSys: This is the amount of memory rclone has obtained from the
- OS
- - Sys: this is the total amount of memory requested from the OS
- - It is virtual memory so may include unused memory
- core/pid: Return PID of current process
- This returns PID of current process. Useful for stopping rclone process.
- core/stats: Returns stats about current transfers.
- This returns all available stats
- rclone rc core/stats
- Returns the following values:
- {
- "speed": average speed in bytes/sec since start of the process,
- "bytes": total transferred bytes since the start of the process,
- "errors": number of errors,
- "fatalError": whether there has been at least one FatalError,
- "retryError": whether there has been at least one non-NoRetryError,
- "checks": number of checked files,
- "transfers": number of transferred files,
- "deletes" : number of deleted files,
- "elapsedTime": time in seconds since the start of the process,
- "lastError": last occurred error,
- "transferring": an array of currently active file transfers:
- [
- {
- "bytes": total transferred bytes for this file,
- "eta": estimated time in seconds until file transfer completion
- "name": name of the file,
- "percentage": progress of the file transfer in percent,
- "speed": speed in bytes/sec,
- "speedAvg": speed in bytes/sec as an exponentially weighted moving average,
- "size": size of the file in bytes
- }
- ],
- "checking": an array of names of currently active file checks
- []
- }
- Values for "transferring", "checking" and "lastError" are only assigned
- if data is available. The value for "eta" is null if an eta cannot be
- determined.
- rc/error: This returns an error
- This returns an error with the input as part of its error string. Useful
- for testing error handling.
- rc/list: List all the registered remote control commands
- This lists all the registered remote control commands as a JSON map in
- the commands response.
- rc/noop: Echo the input to the output parameters
- This echoes the input parameters to the output parameters for testing
- purposes. It can be used to check that rclone is still alive and to
- check that parameter passing is working properly.
- vfs/forget: Forget files or directories in the directory cache.
- This forgets the paths in the directory cache causing them to be re-read
- from the remote when needed.
- If no paths are passed in then it will forget all the paths in the
- directory cache.
- rclone rc vfs/forget
- Otherwise pass files or dirs in as file=path or dir=path. Any parameter
- key starting with file will forget that file and any starting with dir
- will forget that dir, eg
- rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
- vfs/poll-interval: Get the status or update the value of the poll-interval option.
- Without any parameter given this returns the current status of the
- poll-interval setting.
- When the interval=duration parameter is set, the poll-interval value is
- updated and the polling function is notified. Setting interval=0
- disables poll-interval.
- rclone rc vfs/poll-interval interval=5m
- The timeout=duration parameter can be used to specify a time to wait
- for the current poll function to apply the new value. If timeout is
- less than or equal to 0, which is the default, rclone waits
- indefinitely.
- The new poll-interval value will only be active when the timeout is not
- reached.
- If poll-interval is updated or disabled temporarily, some changes might
- not get picked up by the polling function, depending on the used remote.
- vfs/refresh: Refresh the directory cache.
- This reads the directories for the specified paths and freshens the
- directory cache.
- If no paths are passed in then it will refresh the root directory.
- rclone rc vfs/refresh
- Otherwise pass directories in as dir=path. Any parameter key starting
- with dir will refresh that directory, eg
- rclone rc vfs/refresh dir=home/junk dir2=data/misc
- If the parameter recursive=true is given the whole directory tree will
- get refreshed. This refresh will use --fast-list if enabled.
- Accessing the remote control via HTTP
- Rclone implements a simple HTTP based protocol.
- Each endpoint takes an JSON object and returns a JSON object or an
- error. The JSON objects are essentially a map of string names to values.
- All calls must be made using POST.
- The input objects can be supplied using URL parameters, POST parameters
- or by supplying "Content-Type: application/json" and a JSON blob in the
- body. There are examples of these below using curl.
- The response will be a JSON blob in the body of the response. This is
- formatted to be reasonably human readable.
- If an error occurs then there will be an HTTP error status (usually 400)
- and the body of the response will contain a JSON encoded error object.
- The server implements basic CORS support and allows all origins for that.
- The response to a preflight OPTIONS request will echo the requested
- "Access-Control-Request-Headers" back.
- Using POST with URL parameters only
- curl -X POST 'http://localhost:5572/rc/noop/?potato=1&sausage=2'
- Response
- {
- "potato": "1",
- "sausage": "2"
- }
- Here is what an error response looks like:
- curl -X POST 'http://localhost:5572/rc/error/?potato=1&sausage=2'
- {
- "error": "arbitrary error on input map[potato:1 sausage:2]",
- "input": {
- "potato": "1",
- "sausage": "2"
- }
- }
- Note that curl doesn't return errors to the shell unless you use the -f
- option.
- $ curl -f -X POST 'http://localhost:5572/rc/error/?potato=1&sausage=2'
- curl: (22) The requested URL returned error: 400 Bad Request
- $ echo $?
- 22
- Using POST with a form
- curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop/
- Response
- {
- "potato": "1",
- "sausage": "2"
- }
- Note that you can combine these with URL parameters too, with the URL
- parameters taking precedence.
- curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop/?rutabaga=3&sausage=4"
- Response
- {
- "potato": "1",
- "rutabaga": "3",
- "sausage": "4"
- }
- Using POST with a JSON blob
- curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop/
- Response
- {
- "potato": 2,
- "sausage": 1
- }
- This can be combined with URL parameters too if required. The JSON blob
- takes precedence.
- curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop/?rutabaga=3&potato=4'
- {
- "potato": 2,
- "rutabaga": "3",
- "sausage": 1
- }
- Debugging rclone with pprof
- If you use the --rc flag this will also enable the use of the go
- profiling tools on the same port.
- To use these, first install go.
- Then (for example) to profile rclone's memory use you can run:
- go tool pprof -web http://localhost:5572/debug/pprof/heap
- This should open a page in your browser showing what is using what
- memory.
- You can also use the -text flag to produce a textual summary
- $ go tool pprof -text http://localhost:5572/debug/pprof/heap
- Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- flat flat% sum% cum cum%
- 1024.03kB 66.62% 66.62% 1024.03kB 66.62% github.com/ncw/rclone/vendor/golang.org/x/net/http2/hpack.addDecoderNode
- 513kB 33.38% 100% 513kB 33.38% net/http.newBufioWriterSize
- 0 0% 100% 1024.03kB 66.62% github.com/ncw/rclone/cmd/all.init
- 0 0% 100% 1024.03kB 66.62% github.com/ncw/rclone/cmd/serve.init
- 0 0% 100% 1024.03kB 66.62% github.com/ncw/rclone/cmd/serve/restic.init
- 0 0% 100% 1024.03kB 66.62% github.com/ncw/rclone/vendor/golang.org/x/net/http2.init
- 0 0% 100% 1024.03kB 66.62% github.com/ncw/rclone/vendor/golang.org/x/net/http2/hpack.init
- 0 0% 100% 1024.03kB 66.62% github.com/ncw/rclone/vendor/golang.org/x/net/http2/hpack.init.0
- 0 0% 100% 1024.03kB 66.62% main.init
- 0 0% 100% 513kB 33.38% net/http.(*conn).readRequest
- 0 0% 100% 513kB 33.38% net/http.(*conn).serve
- 0 0% 100% 1024.03kB 66.62% runtime.main
- Possible profiles to look at:
- - Memory: go tool pprof http://localhost:5572/debug/pprof/heap
- - 30-second CPU profile:
- go tool pprof http://localhost:5572/debug/pprof/profile
- - 5-second execution trace:
- wget http://localhost:5572/debug/pprof/trace?seconds=5
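- The downloaded trace can then be inspected with go tool trace, eg
- (assuming wget saved it under its default name):
- go tool trace 'trace?seconds=5'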
- See the net/http/pprof docs for more info on how to use the profiling,
- and for a general overview see the Go team's blog post on profiling Go
- programs.
- The profiling hook is zero overhead unless it is used.
- OVERVIEW OF CLOUD STORAGE SYSTEMS
- Each cloud storage system is slightly different. Rclone attempts to
- provide a unified interface to them, but some underlying differences
- show through.
- Features
- Here is an overview of the major features of each cloud storage system.
- Name Hash ModTime Case Insensitive Duplicate Files MIME Type
- ------------------------------ ------------- --------- ------------------ ----------------- -----------
- Amazon Drive MD5 No Yes No R
- Amazon S3 MD5 Yes No No R/W
- Backblaze B2 SHA1 Yes No No R/W
- Box SHA1 Yes Yes No -
- Dropbox DBHASH † Yes Yes No -
- FTP - No No No -
- Google Cloud Storage MD5 Yes No No R/W
- Google Drive MD5 Yes No Yes R/W
- HTTP - No No No R
- Hubic MD5 Yes No No R/W
- Jottacloud MD5 Yes Yes No R/W
- Mega - No No Yes -
- Microsoft Azure Blob Storage MD5 Yes No No R/W
- Microsoft OneDrive SHA1 ‡‡ Yes Yes No R
- OpenDrive MD5 Yes Yes No -
- Openstack Swift MD5 Yes No No R/W
- pCloud MD5, SHA1 Yes No No W
- QingStor MD5 No No No R/W
- SFTP MD5, SHA1 ‡ Yes Depends No -
- WebDAV - Yes †† Depends No -
- Yandex Disk MD5 Yes No No R/W
- The local filesystem All Yes Depends No -
- Hash
- The cloud storage system supports various hash types of the objects. The
- hashes are used when transferring data as an integrity check and can be
- specifically used with the --checksum flag in syncs and in the check
- command.
- To verify checksums when transferring between cloud storage systems
- they must support a common hash type.
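- For example, both Amazon S3 and Google Cloud Storage support MD5, so a
- transfer between them can be verified with the check command (remote
- and bucket names are illustrative):
- rclone check s3remote:bucket gcsremote:bucket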
- † Note that Dropbox supports its own custom hash. This is an SHA256 sum
- of all the 4MB block SHA256s.
- ‡ SFTP supports checksums if the same login has shell access and md5sum
- or sha1sum as well as echo are in the remote's PATH.
- †† WebDAV supports modtimes when used with Owncloud and Nextcloud only.
- ‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive
- for business and SharePoint server support Microsoft's own QuickXorHash.
- ModTime
- The cloud storage system supports setting modification times on objects.
- If it does then this enables using the modification times as part of
- the sync. If not then only the size will be checked by default, though
- the MD5SUM can be checked with the --checksum flag.
- All cloud storage systems support some kind of date on the object and
- these will be set when transferring from the cloud storage system.
- Case Insensitive
- If a cloud storage system is case sensitive then it is possible to have
- two files which differ only in case, eg file.txt and FILE.txt. If a
- cloud storage system is case insensitive then that isn't possible.
- This can cause problems when syncing between a case insensitive system
- and a case sensitive system. The symptom of this is that no matter how
- many times you run the sync it never completes fully.
- The local filesystem and SFTP may or may not be case sensitive depending
- on OS.
- - Windows - usually case insensitive, though case is preserved
- - OSX - usually case insensitive, though it is possible to format case
- sensitive
- - Linux - usually case sensitive, but there are case insensitive file
- systems (eg FAT formatted USB keys)
- Most of the time this doesn't cause any problems as people tend to avoid
- files whose name differs only by case even on case sensitive systems.
- Duplicate files
- If a cloud storage system allows duplicate files then it can have two
- objects with the same name.
- This confuses rclone greatly when syncing - use the rclone dedupe
- command to rename or remove duplicates.
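- For example, run it interactively, or pick a mode up front (remote and
- path are illustrative):
- rclone dedupe remote:dupes
- rclone dedupe --dedupe-mode newest remote:dupes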
- MIME Type
- MIME types (also known as media types) classify types of documents using
- a simple text classification, eg text/html or application/pdf.
- Some cloud storage systems support reading (R) the MIME type of objects
- and some support writing (W) the MIME type of objects.
- The MIME type can be important if you are serving files directly to HTTP
- from the storage system.
- If you are copying from a remote which supports reading (R) to a remote
- which supports writing (W) then rclone will preserve the MIME types.
- Otherwise they will be guessed from the extension, or the remote itself
- may assign the MIME type.
- Optional Features
- All the remotes support a basic set of features, but there are some
- optional features, supported only by some remotes, which make certain
- operations more efficient.
- Name Purge Copy Move DirMove CleanUp ListR StreamUpload LinkSharing About
- ------------------------------ ------- ------ ------ --------- --------- ------- -------------- ------------- -------
- Amazon Drive Yes No Yes Yes No #575 No No No #2178 No
- Amazon S3 No Yes No No No Yes Yes No #2178 No
- Backblaze B2 No No No No Yes Yes Yes No #2178 No
- Box Yes Yes Yes Yes No #575 No Yes Yes No
- Dropbox Yes Yes Yes Yes No #575 No Yes Yes Yes
- FTP No No Yes Yes No No Yes No #2178 No
- Google Cloud Storage Yes Yes No No No Yes Yes No #2178 No
- Google Drive Yes Yes Yes Yes Yes Yes Yes Yes Yes
- HTTP No No No No No No No No #2178 No
- Hubic Yes † Yes No No No Yes Yes No #2178 Yes
- Jottacloud Yes Yes Yes Yes No Yes No Yes Yes
- Mega Yes No Yes Yes No No No No #2178 Yes
- Microsoft Azure Blob Storage Yes Yes No No No Yes No No #2178 No
- Microsoft OneDrive Yes Yes Yes Yes No #575 No No Yes Yes
- OpenDrive Yes Yes Yes Yes No No No No No
- Openstack Swift Yes † Yes No No No Yes Yes No #2178 Yes
- pCloud Yes Yes Yes Yes Yes No No No #2178 Yes
- QingStor No Yes No No No Yes No No #2178 No
- SFTP No No Yes Yes No No Yes No #2178 No
- WebDAV Yes Yes Yes Yes No No Yes ‡ No #2178 No
- Yandex Disk Yes No No No Yes Yes Yes No #2178 No
- The local filesystem Yes No Yes Yes No No Yes No Yes
- Purge
- This deletes a directory quicker than just deleting all the files in the
- directory.
- † Note Swift and Hubic implement this in order to delete directory
- markers but they don't actually have a quicker way of deleting files
- other than deleting them individually.
- ‡ StreamUpload is not supported with Nextcloud
- Copy
- Used when copying an object to and from the same remote. This is known as a
- server side copy so you can copy a file without downloading it and
- uploading it again. It is used if you use rclone copy or rclone move if
- the remote doesn't support Move directly.
- If the server doesn't support Copy directly then for copy operations the
- file is downloaded then re-uploaded.
- Move
- Used when moving/renaming an object on the same remote. This is known as
- a server side move of a file. This is used in rclone move if the server
- doesn't support DirMove.
- If the server isn't capable of Move then rclone simulates it with Copy
- then delete. If the server doesn't support Copy then rclone will
- download the file and re-upload it.
- DirMove
- This is used to implement rclone move to move a directory if possible.
- If it isn't then it will use Move on each file (which falls back to Copy
- then download and upload - see Move section).
- CleanUp
- This is used for emptying the trash for a remote by rclone cleanup.
- If the server can't do CleanUp then rclone cleanup will return an error.
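- For example (illustrative remote name):
- rclone cleanup remote: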
- ListR
- The remote supports a recursive list to list all the contents beneath a
- directory quickly. This enables the --fast-list flag to work. See the
- rclone docs for more details.
- StreamUpload
- Some remotes allow files to be uploaded without knowing the file size in
- advance. This allows certain operations to work without spooling the
- file to local disk first, e.g. rclone rcat.
- LinkSharing
- Sets the necessary permissions on a file or folder and prints a link
- that allows others to access them, even if they don't have an account on
- the particular cloud provider.
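- This is what the rclone link command uses, eg (path is illustrative):
- rclone link remote:path/to/file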
- About
- This is used to fetch quota information from the remote, like bytes
- used/free/quota and bytes used in the trash.
- If the server can't do About then rclone about will return an error.
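- For example (illustrative remote name):
- rclone about remote: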
- Alias
- The alias remote provides a new name for another remote.
- Paths may be as deep as required or a local path, eg
- remote:directory/subdirectory or /directory/subdirectory.
- During the initial setup with rclone config you will specify the target
- remote. The target remote can either be a local path or another remote.
- Subfolders can be used in the target remote. Assume an alias remote named
- backup with the target mydrive:private/backup. Invoking
- rclone mkdir backup:desktop is exactly the same as invoking
- rclone mkdir mydrive:private/backup/desktop.
- There will be no special handling of paths containing .. segments.
- Invoking rclone mkdir backup:../desktop is exactly the same as invoking
- rclone mkdir mydrive:private/backup/../desktop. The empty path is not
- allowed as a remote. To alias the current directory use . instead.
- Here is an example of how to make an alias called remote for a local
- folder. First run:
- rclone config
- This will guide you through an interactive setup process:
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Alias for a existing remote
- \ "alias"
- 2 / Amazon Drive
- \ "amazon cloud drive"
- 3 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 4 / Backblaze B2
- \ "b2"
- 5 / Box
- \ "box"
- 6 / Cache a remote
- \ "cache"
- 7 / Dropbox
- \ "dropbox"
- 8 / Encrypt/Decrypt a remote
- \ "crypt"
- 9 / FTP Connection
- \ "ftp"
- 10 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 11 / Google Drive
- \ "drive"
- 12 / Hubic
- \ "hubic"
- 13 / Local Disk
- \ "local"
- 14 / Microsoft Azure Blob Storage
- \ "azureblob"
- 15 / Microsoft OneDrive
- \ "onedrive"
- 16 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 17 / Pcloud
- \ "pcloud"
- 18 / QingCloud Object Storage
- \ "qingstor"
- 19 / SSH/SFTP Connection
- \ "sftp"
- 20 / Webdav
- \ "webdav"
- 21 / Yandex Disk
- \ "yandex"
- 22 / http Connection
- \ "http"
- Storage> 1
- Remote or path to alias.
- Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
- remote> /mnt/storage/backup
- Remote config
- --------------------
- [remote]
- remote = /mnt/storage/backup
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- Current remotes:
- Name Type
- ==== ====
- remote alias
- e) Edit existing remote
- n) New remote
- d) Delete remote
- r) Rename remote
- c) Copy remote
- s) Set configuration password
- q) Quit config
- e/n/d/r/c/s/q> q
- Once configured you can then use rclone like this,
- List directories in top level in /mnt/storage/backup
- rclone lsd remote:
- List all the files in /mnt/storage/backup
- rclone ls remote:
- Copy another local directory to the alias directory called source
- rclone copy /home/source remote:source
- Standard Options
- Here are the standard options specific to alias (Alias for a existing
- remote).
- --alias-remote
- Remote or path to alias. Can be "myremote:path/to/dir",
- "myremote:bucket", "myremote:" or "/local/path".
- - Config: remote
- - Env Var: RCLONE_ALIAS_REMOTE
- - Type: string
- - Default: ""
- Amazon Drive
- Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
- service run by Amazon for consumers.
- Status
- IMPORTANT: rclone supports Amazon Drive only if you have your own set of
- API keys. Unfortunately the Amazon Drive developer program is now closed
- to new entries so if you don't already have your own set of keys you
- will not be able to use rclone with Amazon Drive.
- For the history on why rclone no longer has a set of Amazon Drive API
- keys see the forum.
- If you happen to know anyone who works at Amazon then please ask them to
- re-instate rclone into the Amazon Drive developer program - thanks!
- Setup
- The initial setup for Amazon Drive involves getting a token from Amazon
- which you need to do in your browser. rclone config walks you through
- it.
- The configuration process for Amazon Drive may involve using an oauth
- proxy. This is used to keep the Amazon credentials out of the source
- code. The proxy runs in Google's very secure App Engine environment and
- doesn't store any credentials which pass through it.
- Since rclone doesn't currently have its own Amazon Drive credentials,
- you will either need to have your own client_id and client_secret with
- Amazon Drive, or use a third party oauth proxy, in which case you will
- need to enter client_id, client_secret, auth_url and token_url.
- Note also that if you are not using Amazon's auth_url and token_url (ie
- you filled in something for those) then when setting up on a remote
- machine you can only use the copy-the-config method of configuration -
- rclone authorize will not work.
- Here is an example of how to make a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- No remotes found - make a new one
- n) New remote
- r) Rename remote
- c) Copy remote
- s) Set configuration password
- q) Quit config
- n/r/c/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / FTP Connection
- \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 8 / Google Drive
- \ "drive"
- 9 / Hubic
- \ "hubic"
- 10 / Local Disk
- \ "local"
- 11 / Microsoft OneDrive
- \ "onedrive"
- 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 13 / SSH/SFTP Connection
- \ "sftp"
- 14 / Yandex Disk
- \ "yandex"
- Storage> 1
- Amazon Application Client Id - required.
- client_id> your client ID goes here
- Amazon Application Client Secret - required.
- client_secret> your client secret goes here
- Auth server URL - leave blank to use Amazon's.
- auth_url> Optional auth URL
- Token server url - leave blank to use Amazon's.
- token_url> Optional token URL
- Remote config
- Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
- Use auto config?
- * Say Y if not sure
- * Say N if you are working on a remote or headless machine
- y) Yes
- n) No
- y/n> y
- If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
- Log in and authorize rclone for access
- Waiting for code...
- Got code
- --------------------
- [remote]
- client_id = your client ID goes here
- client_secret = your client secret goes here
- auth_url = Optional auth URL
- token_url = Optional token URL
- token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- See the remote setup docs for how to set it up on a machine with no
- Internet browser available.
- Note that rclone runs a webserver on your local machine to collect the
- token as returned from Amazon. This only runs from the moment it opens
- your browser to the moment you get back the verification code. This is
- on http://127.0.0.1:53682/ and it may require you to unblock it
- temporarily if you are running a host firewall.
- Once configured you can then use rclone like this,
- List directories in top level of your Amazon Drive
- rclone lsd remote:
- List all the files in your Amazon Drive
- rclone ls remote:
- To copy a local directory to an Amazon Drive directory called backup
- rclone copy /home/source remote:backup
- Modified time and MD5SUMs
- Amazon Drive doesn't allow modification times to be changed via the API
- so these won't be accurate or used for syncing.
- It does store MD5SUMs so for a more accurate sync, you can use the
- --checksum flag.
- Deleting files
- Any files you delete with rclone will end up in the trash. Amazon don't
- provide an API to permanently delete files, nor to empty the trash, so
- you will have to do that with one of Amazon's apps or via the Amazon
- Drive website. As of November 17, 2016, files are automatically deleted
- by Amazon from the trash after 30 days.
- Using with non .com Amazon accounts
- Let's say you usually use amazon.co.uk. When you authenticate with
- rclone it will take you to an amazon.com page to log in. Your
- amazon.co.uk email and password should work here just fine.
- Standard Options
- Here are the standard options specific to amazon cloud drive (Amazon
- Drive).
- --acd-client-id
- Amazon Application Client ID.
- - Config: client_id
- - Env Var: RCLONE_ACD_CLIENT_ID
- - Type: string
- - Default: ""
- --acd-client-secret
- Amazon Application Client Secret.
- - Config: client_secret
- - Env Var: RCLONE_ACD_CLIENT_SECRET
- - Type: string
- - Default: ""
- Advanced Options
- Here are the advanced options specific to amazon cloud drive (Amazon
- Drive).
- --acd-auth-url
- Auth server URL. Leave blank to use Amazon's.
- - Config: auth_url
- - Env Var: RCLONE_ACD_AUTH_URL
- - Type: string
- - Default: ""
- --acd-token-url
- Token server url. Leave blank to use Amazon's.
- - Config: token_url
- - Env Var: RCLONE_ACD_TOKEN_URL
- - Type: string
- - Default: ""
- --acd-checkpoint
- Checkpoint for internal polling (debug).
- - Config: checkpoint
- - Env Var: RCLONE_ACD_CHECKPOINT
- - Type: string
- - Default: ""
- --acd-upload-wait-per-gb
- Additional time per GB to wait after a failed complete upload to see if
- it appears.
- Sometimes Amazon Drive gives an error when a file has been fully
- uploaded but the file appears anyway after a little while. This happens
- sometimes for files over 1GB in size and nearly every time for files
- bigger than 10GB. This parameter controls the time rclone waits for the
- file to appear.
- The default value for this parameter is 3 minutes per GB, so by default
- it will wait 3 minutes for every GB uploaded to see if the file appears.
- You can disable this feature by setting it to 0. This may cause conflict
- errors as rclone retries the failed upload but the file will most likely
- appear correctly eventually.
- These values were determined empirically by observing lots of uploads of
- big files for a range of file sizes.
- Upload with the "-v" flag to see more info about what rclone is doing in
- this situation.
- - Config: upload_wait_per_gb
- - Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
- - Type: Duration
- - Default: 3m0s
- --acd-templink-threshold
- Files >= this size will be downloaded via their tempLink.
- Files this size or more will be downloaded via their "tempLink". This is
- to work around a problem with Amazon Drive which blocks downloads of
- files bigger than about 10GB. The default for this is 9GB which
- shouldn't need to be changed.
- To download files above this threshold, rclone requests a "tempLink"
- which downloads the file through a temporary URL directly from the
- underlying S3 storage.
- - Config: templink_threshold
- - Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- - Type: SizeSuffix
- - Default: 9G
- Limitations
- Note that Amazon Drive is case insensitive so you can't have a file
- called "Hello.doc" and one called "hello.doc".
- Amazon Drive has rate limiting so you may notice errors in the sync (429
- errors). rclone will automatically retry the sync up to 3 times by
- default (see --retries flag) which should hopefully work around this
- problem.
- Amazon Drive has an internal limit on the size of files that can be
- uploaded to the service. This limit is not officially published, but
- all files larger than this will fail.
- At the time of writing (Jan 2016) this is in the area of 50GB per file.
- This means that larger files are likely to fail.
- Unfortunately there is no way for rclone to see that this failure is
- because of file size, so it will retry the operation as it would any
- other failure. To avoid this problem, use the --max-size 50000M option
- to limit the maximum size of uploaded files. Note that --max-size does
- not split files into segments, it only ignores files over this size.
- Amazon S3 Storage Providers
- The S3 backend can be used with a number of different providers:
- - AWS S3
- - Ceph
- - DigitalOcean Spaces
- - Dreamhost
- - IBM COS S3
- - Minio
- - Wasabi
- Paths are specified as remote:bucket (or remote: for the lsd command).
- You may put subdirectories in too, eg remote:bucket/path/to/dir.
- Once you have made a remote (see the provider specific section above)
- you can use it like this:
- See all buckets
- rclone lsd remote:
- Make a new bucket
- rclone mkdir remote:bucket
- List the contents of a bucket
- rclone ls remote:bucket
- Sync /home/local/directory to the remote bucket, deleting any excess
- files in the bucket.
- rclone sync /home/local/directory remote:bucket
- AWS S3
- Here is an example of making an s3 configuration. First run
- rclone config
- This will guide you through an interactive setup process.
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Alias for a existing remote
- \ "alias"
- 2 / Amazon Drive
- \ "amazon cloud drive"
- 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
- \ "s3"
- 4 / Backblaze B2
- \ "b2"
- [snip]
- 23 / http Connection
- \ "http"
- Storage> s3
- Choose your S3 provider.
- Choose a number from below, or type in your own value
- 1 / Amazon Web Services (AWS) S3
- \ "AWS"
- 2 / Ceph Object Storage
- \ "Ceph"
- 3 / Digital Ocean Spaces
- \ "DigitalOcean"
- 4 / Dreamhost DreamObjects
- \ "Dreamhost"
- 5 / IBM COS S3
- \ "IBMCOS"
- 6 / Minio Object Storage
- \ "Minio"
- 7 / Wasabi Object Storage
- \ "Wasabi"
- 8 / Any other S3 compatible provider
- \ "Other"
- provider> 1
- Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
- Choose a number from below, or type in your own value
- 1 / Enter AWS credentials in the next step
- \ "false"
- 2 / Get AWS credentials from the environment (env vars or IAM)
- \ "true"
- env_auth> 1
- AWS Access Key ID - leave blank for anonymous access or runtime credentials.
- access_key_id> XXX
- AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
- secret_access_key> YYY
- Region to connect to.
- Choose a number from below, or type in your own value
- / The default endpoint - a good choice if you are unsure.
- 1 | US Region, Northern Virginia or Pacific Northwest.
- | Leave location constraint empty.
- \ "us-east-1"
- / US East (Ohio) Region
- 2 | Needs location constraint us-east-2.
- \ "us-east-2"
- / US West (Oregon) Region
- 3 | Needs location constraint us-west-2.
- \ "us-west-2"
- / US West (Northern California) Region
- 4 | Needs location constraint us-west-1.
- \ "us-west-1"
- / Canada (Central) Region
- 5 | Needs location constraint ca-central-1.
- \ "ca-central-1"
- / EU (Ireland) Region
- 6 | Needs location constraint EU or eu-west-1.
- \ "eu-west-1"
- / EU (London) Region
- 7 | Needs location constraint eu-west-2.
- \ "eu-west-2"
- / EU (Frankfurt) Region
- 8 | Needs location constraint eu-central-1.
- \ "eu-central-1"
- / Asia Pacific (Singapore) Region
- 9 | Needs location constraint ap-southeast-1.
- \ "ap-southeast-1"
- / Asia Pacific (Sydney) Region
- 10 | Needs location constraint ap-southeast-2.
- \ "ap-southeast-2"
- / Asia Pacific (Tokyo) Region
- 11 | Needs location constraint ap-northeast-1.
- \ "ap-northeast-1"
- / Asia Pacific (Seoul)
- 12 | Needs location constraint ap-northeast-2.
- \ "ap-northeast-2"
- / Asia Pacific (Mumbai)
- 13 | Needs location constraint ap-south-1.
- \ "ap-south-1"
- / South America (Sao Paulo) Region
- 14 | Needs location constraint sa-east-1.
- \ "sa-east-1"
- region> 1
- Endpoint for S3 API.
- Leave blank if using AWS to use the default endpoint for the region.
- endpoint>
- Location constraint - must be set to match the Region. Used when creating buckets only.
- Choose a number from below, or type in your own value
- 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
- \ ""
- 2 / US East (Ohio) Region.
- \ "us-east-2"
- 3 / US West (Oregon) Region.
- \ "us-west-2"
- 4 / US West (Northern California) Region.
- \ "us-west-1"
- 5 / Canada (Central) Region.
- \ "ca-central-1"
- 6 / EU (Ireland) Region.
- \ "eu-west-1"
- 7 / EU (London) Region.
- \ "eu-west-2"
- 8 / EU Region.
- \ "EU"
- 9 / Asia Pacific (Singapore) Region.
- \ "ap-southeast-1"
- 10 / Asia Pacific (Sydney) Region.
- \ "ap-southeast-2"
- 11 / Asia Pacific (Tokyo) Region.
- \ "ap-northeast-1"
- 12 / Asia Pacific (Seoul)
- \ "ap-northeast-2"
- 13 / Asia Pacific (Mumbai)
- \ "ap-south-1"
- 14 / South America (Sao Paulo) Region.
- \ "sa-east-1"
- location_constraint> 1
- Canned ACL used when creating buckets and/or storing objects in S3.
- For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
- Choose a number from below, or type in your own value
- 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
- \ "private"
- 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
- \ "public-read"
- / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
- 3 | Granting this on a bucket is generally not recommended.
- \ "public-read-write"
- 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
- \ "authenticated-read"
- / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
- 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- \ "bucket-owner-read"
- / Both the object owner and the bucket owner get FULL_CONTROL over the object.
- 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- \ "bucket-owner-full-control"
- acl> 1
- The server-side encryption algorithm used when storing this object in S3.
- Choose a number from below, or type in your own value
- 1 / None
- \ ""
- 2 / AES256
- \ "AES256"
- server_side_encryption> 1
- The storage class to use when storing objects in S3.
- Choose a number from below, or type in your own value
- 1 / Default
- \ ""
- 2 / Standard storage class
- \ "STANDARD"
- 3 / Reduced redundancy storage class
- \ "REDUCED_REDUNDANCY"
- 4 / Standard Infrequent Access storage class
- \ "STANDARD_IA"
- 5 / One Zone Infrequent Access storage class
- \ "ONEZONE_IA"
- storage_class> 1
- Remote config
- --------------------
- [remote]
- type = s3
- provider = AWS
- env_auth = false
- access_key_id = XXX
- secret_access_key = YYY
- region = us-east-1
- endpoint =
- location_constraint =
- acl = private
- server_side_encryption =
- storage_class =
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d>
- --fast-list
- This remote supports --fast-list which allows you to use fewer
- transactions in exchange for more memory. See the rclone docs for more
- details.
- --update and --use-server-modtime
- As noted below, the modified time is stored as metadata on the object.
- It is used by default for all operations that require checking the time
- a file was last updated. It allows rclone to treat the remote more like
- a true filesystem, but it is inefficient because it requires an extra
- API call to retrieve the metadata.
- For many operations, the time the object was last uploaded to the remote
- is sufficient to determine if it is "dirty". By using --update along
- with --use-server-modtime, you can avoid the extra API call and simply
- upload files whose local modtime is newer than the time it was last
- uploaded.
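- For example, a sync that relies on the upload time rather than the
- stored modtime, avoiding the extra metadata reads (paths are
- illustrative):
- rclone sync --update --use-server-modtime /path/to/source remote:bucket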
- Modified time
- The modified time is stored as metadata on the object as
- X-Amz-Meta-Mtime as floating point since the epoch accurate to 1 ns.
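- For example, the metadata on an object might contain something like
- this (the value is illustrative):
- X-Amz-Meta-Mtime: 1536164146.123456789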
- Multipart uploads
- rclone supports multipart uploads with S3 which means that it can upload
- files bigger than 5GB. Note that files uploaded _both_ with multipart
- upload _and_ through crypt remotes do not have MD5 sums.
- Buckets and Regions
- With Amazon S3 you can list buckets (rclone lsd) using any region, but
- you can only access the content of a bucket from the region it was
- created in. If you attempt to access a bucket from the wrong region, you
- will get an error, incorrect region, the bucket is not in 'XXX' region.
- Authentication
- There are a number of ways to supply rclone with a set of AWS
- credentials, with and without using the environment.
- The different authentication methods are tried in this order:
- - Directly in the rclone configuration file (env_auth = false in the
- config file):
- - access_key_id and secret_access_key are required.
- - session_token can be optionally set when using AWS STS.
- - Runtime configuration (env_auth = true in the config file):
- - Export the following environment variables before running rclone:
- - Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY
- - Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY
- - Session Token: AWS_SESSION_TOKEN (optional)
- - Or, use a named profile:
- - Profile files are standard files used by AWS CLI tools
- - By default it will use the profile file in your home directory (eg
- ~/.aws/credentials on unix based systems) and the "default"
- profile; to change this set these environment variables:
- - AWS_SHARED_CREDENTIALS_FILE to control which file.
- - AWS_PROFILE to control which profile to use.
- - Or, run rclone in an ECS task with an IAM role (AWS only).
- - Or, run rclone on an EC2 instance with an IAM role (AWS only).
- If none of these options actually ends up providing rclone with AWS
- credentials then S3 interaction will be non-authenticated (see below).
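- For example, to use runtime credentials set env_auth = true in the
- config and export the variables before running rclone (values are
- illustrative):
- export AWS_ACCESS_KEY_ID=XXX
- export AWS_SECRET_ACCESS_KEY=YYY
- rclone lsd remote: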
- S3 Permissions
- When using the sync subcommand of rclone the following minimum
- permissions are required to be available on the bucket being written to:
- - ListBucket
- - DeleteObject
- - GetObject
- - PutObject
- - PutObjectACL
- Example policy:
- {
- "Version": "2012-10-17",
- "Statement": [
- {
- "Effect": "Allow",
- "Principal": {
- "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
- },
- "Action": [
- "s3:ListBucket",
- "s3:DeleteObject",
- "s3:GetObject",
- "s3:PutObject",
- "s3:PutObjectAcl"
- ],
- "Resource": [
- "arn:aws:s3:::BUCKET_NAME/*",
- "arn:aws:s3:::BUCKET_NAME"
- ]
- }
- ]
- }
- Notes on above:
- 1. This is a policy that can be used when creating a bucket. It assumes
- that USER_NAME has been created.
- 2. The Resource entry must include both resource ARNs, as one implies
- the bucket and the other implies the bucket's objects.
- For reference, here's an Ansible script that will generate one or more
- buckets that will work with rclone sync.
- Key Management System (KMS)
- If you are using server side encryption with KMS then you will find you
- can't transfer small objects. As a work-around you can use the
- --ignore-checksum flag.
- A proper fix is being worked on in issue #1824.
- Glacier
- You can transition objects to glacier storage using a lifecycle policy.
- The bucket can still be synced or copied into normally, but if rclone
- tries to access the data you will see an error like below.
- 2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
- In this case you need to restore the object(s) in question before using
- rclone.
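- One way to do this is with the AWS CLI (a sketch - the bucket, key and
- restore parameters are illustrative):
- aws s3api restore-object --bucket BUCKET_NAME --key path/to/file --restore-request '{"Days":1}'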
- Standard Options
- Here are the standard options specific to s3 (Amazon S3 Compliant
- Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
- --s3-provider
- Choose your S3 provider.
- - Config: provider
- - Env Var: RCLONE_S3_PROVIDER
- - Type: string
- - Default: ""
- - Examples:
- - "AWS"
- - Amazon Web Services (AWS) S3
- - "Ceph"
- - Ceph Object Storage
- - "DigitalOcean"
- - Digital Ocean Spaces
- - "Dreamhost"
- - Dreamhost DreamObjects
- - "IBMCOS"
- - IBM COS S3
- - "Minio"
- - Minio Object Storage
- - "Wasabi"
- - Wasabi Object Storage
- - "Other"
- - Any other S3 compatible provider
- --s3-env-auth
- Get AWS credentials from runtime (environment variables or EC2/ECS meta
- data if no env vars). Only applies if access_key_id and
- secret_access_key are blank.
- - Config: env_auth
- - Env Var: RCLONE_S3_ENV_AUTH
- - Type: bool
- - Default: false
- - Examples:
- - "false"
- - Enter AWS credentials in the next step
- - "true"
- - Get AWS credentials from the environment (env vars or IAM)
- --s3-access-key-id
- AWS Access Key ID. Leave blank for anonymous access or runtime
- credentials.
- - Config: access_key_id
- - Env Var: RCLONE_S3_ACCESS_KEY_ID
- - Type: string
- - Default: ""
- --s3-secret-access-key
- AWS Secret Access Key (password). Leave blank for anonymous access or
- runtime credentials.
- - Config: secret_access_key
- - Env Var: RCLONE_S3_SECRET_ACCESS_KEY
- - Type: string
- - Default: ""
- --s3-region
- Region to connect to.
- - Config: region
- - Env Var: RCLONE_S3_REGION
- - Type: string
- - Default: ""
- - Examples:
- - "us-east-1"
- - The default endpoint - a good choice if you are unsure.
- - US Region, Northern Virginia or Pacific Northwest.
- - Leave location constraint empty.
- - "us-east-2"
- - US East (Ohio) Region
- - Needs location constraint us-east-2.
- - "us-west-2"
- - US West (Oregon) Region
- - Needs location constraint us-west-2.
- - "us-west-1"
- - US West (Northern California) Region
- - Needs location constraint us-west-1.
- - "ca-central-1"
- - Canada (Central) Region
- - Needs location constraint ca-central-1.
- - "eu-west-1"
- - EU (Ireland) Region
- - Needs location constraint EU or eu-west-1.
- - "eu-west-2"
- - EU (London) Region
- - Needs location constraint eu-west-2.
- - "eu-central-1"
- - EU (Frankfurt) Region
- - Needs location constraint eu-central-1.
- - "ap-southeast-1"
- - Asia Pacific (Singapore) Region
- - Needs location constraint ap-southeast-1.
- - "ap-southeast-2"
- - Asia Pacific (Sydney) Region
- - Needs location constraint ap-southeast-2.
- - "ap-northeast-1"
- - Asia Pacific (Tokyo) Region
- - Needs location constraint ap-northeast-1.
- - "ap-northeast-2"
- - Asia Pacific (Seoul)
- - Needs location constraint ap-northeast-2.
- - "ap-south-1"
- - Asia Pacific (Mumbai)
- - Needs location constraint ap-south-1.
- - "sa-east-1"
- - South America (Sao Paulo) Region
- - Needs location constraint sa-east-1.
- --s3-region
- Region to connect to. Leave blank if you are using an S3 clone and you
- don't have a region.
- - Config: region
- - Env Var: RCLONE_S3_REGION
- - Type: string
- - Default: ""
- - Examples:
- - ""
- - Use this if unsure. Will use v4 signatures and an empty
- region.
- - "other-v2-signature"
- - Use this only if v4 signatures don't work, eg pre Jewel/v10
- CEPH.
- --s3-endpoint
- Endpoint for S3 API. Leave blank if using AWS to use the default
- endpoint for the region.
- - Config: endpoint
- - Env Var: RCLONE_S3_ENDPOINT
- - Type: string
- - Default: ""
- --s3-endpoint
- Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise.
- - Config: endpoint
- - Env Var: RCLONE_S3_ENDPOINT
- - Type: string
- - Default: ""
- - Examples:
- - "s3-api.us-geo.objectstorage.softlayer.net"
- - US Cross Region Endpoint
- - "s3-api.dal.us-geo.objectstorage.softlayer.net"
- - US Cross Region Dallas Endpoint
- - "s3-api.wdc-us-geo.objectstorage.softlayer.net"
- - US Cross Region Washington DC Endpoint
- - "s3-api.sjc-us-geo.objectstorage.softlayer.net"
- - US Cross Region San Jose Endpoint
- - "s3-api.us-geo.objectstorage.service.networklayer.com"
- - US Cross Region Private Endpoint
- - "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
- - US Cross Region Dallas Private Endpoint
- - "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
- - US Cross Region Washington DC Private Endpoint
- - "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
- - US Cross Region San Jose Private Endpoint
- - "s3.us-east.objectstorage.softlayer.net"
- - US Region East Endpoint
- - "s3.us-east.objectstorage.service.networklayer.com"
- - US Region East Private Endpoint
- - "s3.us-south.objectstorage.softlayer.net"
- - US Region South Endpoint
- - "s3.us-south.objectstorage.service.networklayer.com"
- - US Region South Private Endpoint
- - "s3.eu-geo.objectstorage.softlayer.net"
- - EU Cross Region Endpoint
- - "s3.fra-eu-geo.objectstorage.softlayer.net"
- - EU Cross Region Frankfurt Endpoint
- - "s3.mil-eu-geo.objectstorage.softlayer.net"
- - EU Cross Region Milan Endpoint
- - "s3.ams-eu-geo.objectstorage.softlayer.net"
- - EU Cross Region Amsterdam Endpoint
- - "s3.eu-geo.objectstorage.service.networklayer.com"
- - EU Cross Region Private Endpoint
- - "s3.fra-eu-geo.objectstorage.service.networklayer.com"
- - EU Cross Region Frankfurt Private Endpoint
- - "s3.mil-eu-geo.objectstorage.service.networklayer.com"
- - EU Cross Region Milan Private Endpoint
- - "s3.ams-eu-geo.objectstorage.service.networklayer.com"
- - EU Cross Region Amsterdam Private Endpoint
- - "s3.eu-gb.objectstorage.softlayer.net"
- - Great Britan Endpoint
- - "s3.eu-gb.objectstorage.service.networklayer.com"
- - Great Britan Private Endpoint
- - "s3.ap-geo.objectstorage.softlayer.net"
- - APAC Cross Regional Endpoint
- - "s3.tok-ap-geo.objectstorage.softlayer.net"
- - APAC Cross Regional Tokyo Endpoint
- - "s3.hkg-ap-geo.objectstorage.softlayer.net"
- - APAC Cross Regional HongKong Endpoint
- - "s3.seo-ap-geo.objectstorage.softlayer.net"
- - APAC Cross Regional Seoul Endpoint
- - "s3.ap-geo.objectstorage.service.networklayer.com"
- - APAC Cross Regional Private Endpoint
- - "s3.tok-ap-geo.objectstorage.service.networklayer.com"
- - APAC Cross Regional Tokyo Private Endpoint
- - "s3.hkg-ap-geo.objectstorage.service.networklayer.com"
- - APAC Cross Regional HongKong Private Endpoint
- - "s3.seo-ap-geo.objectstorage.service.networklayer.com"
- - APAC Cross Regional Seoul Private Endpoint
- - "s3.mel01.objectstorage.softlayer.net"
- - Melbourne Single Site Endpoint
- - "s3.mel01.objectstorage.service.networklayer.com"
- - Melbourne Single Site Private Endpoint
- - "s3.tor01.objectstorage.softlayer.net"
- - Toronto Single Site Endpoint
- - "s3.tor01.objectstorage.service.networklayer.com"
- - Toronto Single Site Private Endpoint
- --s3-endpoint
- Endpoint for S3 API. Required when using an S3 clone.
- - Config: endpoint
- - Env Var: RCLONE_S3_ENDPOINT
- - Type: string
- - Default: ""
- - Examples:
- - "objects-us-west-1.dream.io"
- - Dream Objects endpoint
- - "nyc3.digitaloceanspaces.com"
- - Digital Ocean Spaces New York 3
- - "ams3.digitaloceanspaces.com"
- - Digital Ocean Spaces Amsterdam 3
- - "sgp1.digitaloceanspaces.com"
- - Digital Ocean Spaces Singapore 1
- - "s3.wasabisys.com"
- - Wasabi Object Storage
- --s3-location-constraint
- Location constraint - must be set to match the Region. Used when
- creating buckets only.
- - Config: location_constraint
- - Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- - Type: string
- - Default: ""
- - Examples:
- - ""
- - Empty for US Region, Northern Virginia or Pacific Northwest.
- - "us-east-2"
- - US East (Ohio) Region.
- - "us-west-2"
- - US West (Oregon) Region.
- - "us-west-1"
- - US West (Northern California) Region.
- - "ca-central-1"
- - Canada (Central) Region.
- - "eu-west-1"
- - EU (Ireland) Region.
- - "eu-west-2"
- - EU (London) Region.
- - "EU"
- - EU Region.
- - "ap-southeast-1"
- - Asia Pacific (Singapore) Region.
- - "ap-southeast-2"
- - Asia Pacific (Sydney) Region.
- - "ap-northeast-1"
- - Asia Pacific (Tokyo) Region.
- - "ap-northeast-2"
- - Asia Pacific (Seoul)
- - "ap-south-1"
- - Asia Pacific (Mumbai)
- - "sa-east-1"
- - South America (Sao Paulo) Region.
- --s3-location-constraint
- Location constraint - must match endpoint when using IBM Cloud Public.
- For on-prem COS, do not make a selection from this list, hit enter
- - Config: location_constraint
- - Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- - Type: string
- - Default: ""
- - Examples:
- - "us-standard"
- - US Cross Region Standard
- - "us-vault"
- - US Cross Region Vault
- - "us-cold"
- - US Cross Region Cold
- - "us-flex"
- - US Cross Region Flex
- - "us-east-standard"
- - US East Region Standard
- - "us-east-vault"
- - US East Region Vault
- - "us-east-cold"
- - US East Region Cold
- - "us-east-flex"
- - US East Region Flex
- - "us-south-standard"
- - US South Region Standard
- - "us-south-vault"
- - US South Region Vault
- - "us-south-cold"
- - US South Region Cold
- - "us-south-flex"
- - US South Region Flex
- - "eu-standard"
- - EU Cross Region Standard
- - "eu-vault"
- - EU Cross Region Vault
- - "eu-cold"
- - EU Cross Region Cold
- - "eu-flex"
- - EU Cross Region Flex
- - "eu-gb-standard"
- - Great Britan Standard
- - "eu-gb-vault"
- - Great Britan Vault
- - "eu-gb-cold"
- - Great Britan Cold
- - "eu-gb-flex"
- - Great Britan Flex
- - "ap-standard"
- - APAC Standard
- - "ap-vault"
- - APAC Vault
- - "ap-cold"
- - APAC Cold
- - "ap-flex"
- - APAC Flex
- - "mel01-standard"
- - Melbourne Standard
- - "mel01-vault"
- - Melbourne Vault
- - "mel01-cold"
- - Melbourne Cold
- - "mel01-flex"
- - Melbourne Flex
- - "tor01-standard"
- - Toronto Standard
- - "tor01-vault"
- - Toronto Vault
- - "tor01-cold"
- - Toronto Cold
- - "tor01-flex"
- - Toronto Flex
- --s3-location-constraint
- Location constraint - must be set to match the Region. Leave blank if
- not sure. Used when creating buckets only.
- - Config: location_constraint
- - Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- - Type: string
- - Default: ""
- --s3-acl
- Canned ACL used when creating buckets and/or storing objects in S3. For
- more info visit
- https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
- - Config: acl
- - Env Var: RCLONE_S3_ACL
- - Type: string
- - Default: ""
- - Examples:
- - "private"
- - Owner gets FULL_CONTROL. No one else has access rights
- (default).
- - "public-read"
- - Owner gets FULL_CONTROL. The AllUsers group gets READ
- access.
- - "public-read-write"
- - Owner gets FULL_CONTROL. The AllUsers group gets READ and
- WRITE access.
- - Granting this on a bucket is generally not recommended.
- - "authenticated-read"
- - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets
- READ access.
- - "bucket-owner-read"
- - Object owner gets FULL_CONTROL. Bucket owner gets READ
- access.
- - If you specify this canned ACL when creating a bucket,
- Amazon S3 ignores it.
- - "bucket-owner-full-control"
- - Both the object owner and the bucket owner get FULL_CONTROL
- over the object.
- - If you specify this canned ACL when creating a bucket,
- Amazon S3 ignores it.
- - "private"
- - Owner gets FULL_CONTROL. No one else has access rights
- (default). This acl is available on IBM Cloud (Infra), IBM
- Cloud (Storage), On-Premise COS
- - "public-read"
- - Owner gets FULL_CONTROL. The AllUsers group gets READ
- access. This acl is available on IBM Cloud (Infra), IBM
- Cloud (Storage), On-Premise IBM COS
- - "public-read-write"
- - Owner gets FULL_CONTROL. The AllUsers group gets READ and
- WRITE access. This acl is available on IBM Cloud (Infra),
- On-Premise IBM COS
- - "authenticated-read"
- - Owner gets FULL_CONTROL. The AuthenticatedUsers group gets
- READ access. Not supported on Buckets. This acl is available
- on IBM Cloud (Infra) and On-Premise IBM COS
- --s3-server-side-encryption
- The server-side encryption algorithm used when storing this object in
- S3.
- - Config: server_side_encryption
- - Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
- - Type: string
- - Default: ""
- - Examples:
- - ""
- - None
- - "AES256"
- - AES256
- - "aws:kms"
- - aws:kms
- --s3-sse-kms-key-id
- If using KMS ID you must provide the ARN of the Key.
- - Config: sse_kms_key_id
- - Env Var: RCLONE_S3_SSE_KMS_KEY_ID
- - Type: string
- - Default: ""
- - Examples:
- - ""
- - None
- - "arn:aws:kms:us-east-1:*"
- - arn:aws:kms:*
- --s3-storage-class
- The storage class to use when storing new objects in S3.
- - Config: storage_class
- - Env Var: RCLONE_S3_STORAGE_CLASS
- - Type: string
- - Default: ""
- - Examples:
- - ""
- - Default
- - "STANDARD"
- - Standard storage class
- - "REDUCED_REDUNDANCY"
- - Reduced redundancy storage class
- - "STANDARD_IA"
- - Standard Infrequent Access storage class
- - "ONEZONE_IA"
- - One Zone Infrequent Access storage class
- Advanced Options
- Here are the advanced options specific to s3 (Amazon S3 Compliant
- Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)).
- --s3-chunk-size
- Chunk size to use for uploading.
- Any files larger than this will be uploaded in chunks of this size. The
- default is 5MB. The minimum is 5MB.
- Note that "--s3-upload-concurrency" chunks of this size are buffered in
- memory per transfer.
- If you are transferring large files over high speed links and you have
- enough memory, then increasing this will speed up the transfers.
- - Config: chunk_size
- - Env Var: RCLONE_S3_CHUNK_SIZE
- - Type: SizeSuffix
- - Default: 5M
- --s3-disable-checksum
- Don't store MD5 checksum with object metadata
- - Config: disable_checksum
- - Env Var: RCLONE_S3_DISABLE_CHECKSUM
- - Type: bool
- - Default: false
- --s3-session-token
- An AWS session token
- - Config: session_token
- - Env Var: RCLONE_S3_SESSION_TOKEN
- - Type: string
- - Default: ""
- --s3-upload-concurrency
- Concurrency for multipart uploads.
- This is the number of chunks of the same file that are uploaded
- concurrently.
- If you are uploading small numbers of large files over a high speed link
- and these uploads do not fully utilize your bandwidth, then increasing
- this may help to speed up the transfers.
- - Config: upload_concurrency
- - Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
- - Type: int
- - Default: 2
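- For example, to upload a small number of very large files over a fast
- link (a sketch - the values are illustrative and need tuning to your
- link and available memory):
- rclone copy --s3-upload-concurrency 8 --s3-chunk-size 64M /path/to/bigfiles remote:bucket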
- --s3-force-path-style
- If true use path style access; if false use virtual hosted style.
- If this is true (the default) then rclone will use path style access;
- if false then rclone will use virtual hosted style. See the AWS S3
- docs for more info.
- Some providers (eg Aliyun OSS or Netease COS) require this set to false.
- - Config: force_path_style
- - Env Var: RCLONE_S3_FORCE_PATH_STYLE
- - Type: bool
- - Default: true
- --s3-v2-auth
- If true use v2 authentication.
- If this is false (the default) then rclone will use v4 authentication.
- If it is true then rclone will use v2 authentication.
- Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
- - Config: v2_auth
- - Env Var: RCLONE_S3_V2_AUTH
- - Type: bool
- - Default: false
- Anonymous access to public buckets
- If you want to use rclone to access a public bucket, configure with a
- blank access_key_id and secret_access_key. Your config should end up
- looking like this:
- [anons3]
- type = s3
- provider = AWS
- env_auth = false
- access_key_id =
- secret_access_key =
- region = us-east-1
- endpoint =
- location_constraint =
- acl = private
- server_side_encryption =
- storage_class =
- Then use it as normal with the name of the public bucket, eg
- rclone lsd anons3:1000genomes
- You will be able to list and copy data but not upload it.
- Ceph
- Ceph is an open source unified, distributed storage system designed for
- excellent performance, reliability and scalability. It has an S3
- compatible object storage interface.
- To use rclone with Ceph, configure as above but leave the region blank
- and set the endpoint. You should end up with something like this in your
- config:
- [ceph]
- type = s3
- provider = Ceph
- env_auth = false
- access_key_id = XXX
- secret_access_key = YYY
- region =
- endpoint = https://ceph.endpoint.example.com
- location_constraint =
- acl =
- server_side_encryption =
- storage_class =
- Note also that Ceph sometimes puts / in the passwords it gives users. If
- you read the secret access key using the command line tools you will get
- a JSON blob with the / escaped as \/. Make sure you only write / in the
- secret access key.
- Eg the dump from Ceph looks something like this (irrelevant keys
- removed).
- {
- "user_id": "xxx",
- "display_name": "xxxx",
- "keys": [
- {
- "user": "xxx",
- "access_key": "xxxxxx",
- "secret_key": "xxxxxx\/xxxx"
- }
- ],
- }
- Because this is a json dump, it is encoding the / as \/, so if you use
- the secret key as xxxxxx/xxxx it will work fine.
- Dreamhost
- Dreamhost DreamObjects is an object storage system based on CEPH.
- To use rclone with Dreamhost, configure as above but leave the region
- blank and set the endpoint. You should end up with something like this
- in your config:
- [dreamobjects]
- type = s3
- provider = DreamHost
- env_auth = false
- access_key_id = your_access_key
- secret_access_key = your_secret_key
- region =
- endpoint = objects-us-west-1.dream.io
- location_constraint =
- acl = private
- server_side_encryption =
- storage_class =
- DigitalOcean Spaces
- Spaces is an S3-interoperable object storage service from cloud provider
- DigitalOcean.
- To connect to DigitalOcean Spaces you will need an access key and secret
- key. These can be retrieved on the "Applications & API" page of the
- DigitalOcean control panel. They will be needed when prompted by
- rclone config for your access_key_id and secret_access_key.
- When prompted for a region or location_constraint, press enter to use
- the default value. The region must be included in the endpoint setting
- (e.g. nyc3.digitaloceanspaces.com). The default values can be used for
- other settings.
- Going through the whole process of creating a new remote by running
- rclone config, each prompt should be answered as shown below:
- Storage> s3
- env_auth> 1
- access_key_id> YOUR_ACCESS_KEY
- secret_access_key> YOUR_SECRET_KEY
- region>
- endpoint> nyc3.digitaloceanspaces.com
- location_constraint>
- acl>
- storage_class>
- The resulting configuration file should look like:
- [spaces]
- type = s3
- provider = DigitalOcean
- env_auth = false
- access_key_id = YOUR_ACCESS_KEY
- secret_access_key = YOUR_SECRET_KEY
- region =
- endpoint = nyc3.digitaloceanspaces.com
- location_constraint =
- acl =
- server_side_encryption =
- storage_class =
- Once configured, you can create a new Space and begin copying files. For
- example:
- rclone mkdir spaces:my-new-space
- rclone copy /path/to/files spaces:my-new-space
- IBM COS (S3)
- Information stored with IBM Cloud Object Storage is encrypted and
- dispersed across multiple geographic locations, and accessed through an
- implementation of the S3 API. This service makes use of the distributed
- storage technologies provided by IBM’s Cloud Object Storage System
- (formerly Cleversafe). For more information visit
- http://www.ibm.com/cloud/object-storage
- To configure access to IBM COS S3, follow the steps below:
- 1. Run rclone config and select n for a new remote.
- 2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- 2. Enter the name for the configuration
- name> <YOUR NAME>
- 3. Select "s3" storage.
- Choose a number from below, or type in your own value
- 1 / Alias for a existing remote
- \ "alias"
- 2 / Amazon Drive
- \ "amazon cloud drive"
- 3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
- \ "s3"
- 4 / Backblaze B2
- \ "b2"
- [snip]
- 23 / http Connection
- \ "http"
- Storage> 3
- 4. Select IBM COS as the S3 Storage Provider.
- Choose the S3 provider.
- Choose a number from below, or type in your own value
- 1 / Choose this option to configure Storage to AWS S3
- \ "AWS"
- 2 / Choose this option to configure Storage to Ceph Systems
- \ "Ceph"
- 3 / Choose this option to configure Storage to Dreamhost
- \ "Dreamhost"
- 4 / Choose this option to the configure Storage to IBM COS S3
- \ "IBMCOS"
- 5 / Choose this option to the configure Storage to Minio
- \ "Minio"
- Provider>4
- 5. Enter the Access Key and Secret.
- AWS Access Key ID - leave blank for anonymous access or runtime credentials.
- access_key_id> <>
- AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
- secret_access_key> <>
- 6. Specify the endpoint for IBM COS. For Public IBM COS, choose from
- the option below. For On Premise IBM COS, enter an endpoint address.
- Endpoint for IBM COS S3 API.
- Specify if using an IBM COS On Premise.
- Choose a number from below, or type in your own value
- 1 / US Cross Region Endpoint
- \ "s3-api.us-geo.objectstorage.softlayer.net"
- 2 / US Cross Region Dallas Endpoint
- \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
- 3 / US Cross Region Washington DC Endpoint
- \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
- 4 / US Cross Region San Jose Endpoint
- \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
- 5 / US Cross Region Private Endpoint
- \ "s3-api.us-geo.objectstorage.service.networklayer.com"
- 6 / US Cross Region Dallas Private Endpoint
- \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
- 7 / US Cross Region Washington DC Private Endpoint
- \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
- 8 / US Cross Region San Jose Private Endpoint
- \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
- 9 / US Region East Endpoint
- \ "s3.us-east.objectstorage.softlayer.net"
- 10 / US Region East Private Endpoint
- \ "s3.us-east.objectstorage.service.networklayer.com"
- 11 / US Region South Endpoint
- [snip]
- 34 / Toronto Single Site Private Endpoint
- \ "s3.tor01.objectstorage.service.networklayer.com"
- endpoint>1
- 7. Specify an IBM COS Location Constraint. The location constraint must
- match the endpoint when using IBM Cloud Public. For on-prem COS, do not
- make a selection from this list; hit enter.
- 1 / US Cross Region Standard
- \ "us-standard"
- 2 / US Cross Region Vault
- \ "us-vault"
- 3 / US Cross Region Cold
- \ "us-cold"
- 4 / US Cross Region Flex
- \ "us-flex"
- 5 / US East Region Standard
- \ "us-east-standard"
- 6 / US East Region Vault
- \ "us-east-vault"
- 7 / US East Region Cold
- \ "us-east-cold"
- 8 / US East Region Flex
- \ "us-east-flex"
- 9 / US South Region Standard
- \ "us-south-standard"
- 10 / US South Region Vault
- \ "us-south-vault"
- [snip]
- 32 / Toronto Flex
- \ "tor01-flex"
- location_constraint>1
- 8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read"
- and "private". IBM Cloud (Infra) supports all the canned ACLs.
- On-Premise COS supports all the canned ACLs.
- Canned ACL used when creating buckets and/or storing objects in S3.
- For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
- Choose a number from below, or type in your own value
- 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
- \ "private"
- 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
- \ "public-read"
- 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
- \ "public-read-write"
- 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
- \ "authenticated-read"
- acl> 1
- 9. Review the displayed configuration and accept to save the "remote"
- then quit. The config file should look like this
- [xxx]
- type = s3
- provider = IBMCOS
- access_key_id = xxx
- secret_access_key = yyy
- endpoint = s3-api.us-geo.objectstorage.softlayer.net
- location_constraint = us-standard
- acl = private
- 10. Execute rclone commands
- 1) Create a bucket.
- rclone mkdir IBM-COS-XREGION:newbucket
- 2) List available buckets.
- rclone lsd IBM-COS-XREGION:
- -1 2017-11-08 21:16:22 -1 test
- -1 2018-02-14 20:16:39 -1 newbucket
- 3) List contents of a bucket.
- rclone ls IBM-COS-XREGION:newbucket
- 18685952 test.exe
- 4) Copy a file from local to remote.
- rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
- 5) Copy a file from remote to local.
- rclone copy IBM-COS-XREGION:newbucket/file.txt .
- 6) Delete a file on remote.
- rclone delete IBM-COS-XREGION:newbucket/file.txt
- Minio
- Minio is an object storage server built for cloud application developers
- and devops.
- It is very easy to install and provides an S3 compatible server which
- can be used by rclone.
- To use it, install Minio following the instructions on the Minio
- website.
- When it configures itself Minio will print something like this
- Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000
- AccessKey: USWUXHGYZQYFYFFIT3RE
- SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
- Region: us-east-1
- SQS ARNs: arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis
- Browser Access:
- http://192.168.1.106:9000 http://172.23.0.1:9000
- Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
- $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
- Object API (Amazon S3 compatible):
- Go: https://docs.minio.io/docs/golang-client-quickstart-guide
- Java: https://docs.minio.io/docs/java-client-quickstart-guide
- Python: https://docs.minio.io/docs/python-client-quickstart-guide
- JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
- .NET: https://docs.minio.io/docs/dotnet-client-quickstart-guide
- Drive Capacity: 26 GiB Free, 165 GiB Total
- These details need to go into rclone config like this. Note that it is
- important to put the region in as stated above.
- env_auth> 1
- access_key_id> USWUXHGYZQYFYFFIT3RE
- secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
- region> us-east-1
- endpoint> http://192.168.1.106:9000
- location_constraint>
- server_side_encryption>
- Which makes the config file look like this
- [minio]
- type = s3
- provider = Minio
- env_auth = false
- access_key_id = USWUXHGYZQYFYFFIT3RE
- secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
- region = us-east-1
- endpoint = http://192.168.1.106:9000
- location_constraint =
- server_side_encryption =
- So once set up, for example to copy files into a bucket
- rclone copy /path/to/files minio:bucket
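- Other commands work in the same way once the remote is set up; for
- example, to make a bucket and then list the buckets on the server (the
- bucket name here is illustrative):
- rclone mkdir minio:bucket
- rclone lsd minio: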
- Wasabi
- Wasabi is a cloud-based object storage service for a broad range of
- applications and use cases. Wasabi is designed for individuals and
- organizations that require a high-performance, reliable, and secure data
- storage infrastructure at minimal cost.
- Wasabi provides an S3 interface which can be configured for use with
- rclone like this.
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- n/s> n
- name> wasabi
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- [snip]
- Storage> s3
- Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
- Choose a number from below, or type in your own value
- 1 / Enter AWS credentials in the next step
- \ "false"
- 2 / Get AWS credentials from the environment (env vars or IAM)
- \ "true"
- env_auth> 1
- AWS Access Key ID - leave blank for anonymous access or runtime credentials.
- access_key_id> YOURACCESSKEY
- AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
- secret_access_key> YOURSECRETACCESSKEY
- Region to connect to.
- Choose a number from below, or type in your own value
- / The default endpoint - a good choice if you are unsure.
- 1 | US Region, Northern Virginia or Pacific Northwest.
- | Leave location constraint empty.
- \ "us-east-1"
- [snip]
- region> us-east-1
- Endpoint for S3 API.
- Leave blank if using AWS to use the default endpoint for the region.
- Specify if using an S3 clone such as Ceph.
- endpoint> s3.wasabisys.com
- Location constraint - must be set to match the Region. Used when creating buckets only.
- Choose a number from below, or type in your own value
- 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
- \ ""
- [snip]
- location_constraint>
- Canned ACL used when creating buckets and/or storing objects in S3.
- For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
- Choose a number from below, or type in your own value
- 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
- \ "private"
- [snip]
- acl>
- The server-side encryption algorithm used when storing this object in S3.
- Choose a number from below, or type in your own value
- 1 / None
- \ ""
- 2 / AES256
- \ "AES256"
- server_side_encryption>
- The storage class to use when storing objects in S3.
- Choose a number from below, or type in your own value
- 1 / Default
- \ ""
- 2 / Standard storage class
- \ "STANDARD"
- 3 / Reduced redundancy storage class
- \ "REDUCED_REDUNDANCY"
- 4 / Standard Infrequent Access storage class
- \ "STANDARD_IA"
- storage_class>
- Remote config
- --------------------
- [wasabi]
- env_auth = false
- access_key_id = YOURACCESSKEY
- secret_access_key = YOURSECRETACCESSKEY
- region = us-east-1
- endpoint = s3.wasabisys.com
- location_constraint =
- acl =
- server_side_encryption =
- storage_class =
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- This will leave the config file looking like this.
- [wasabi]
- type = s3
- provider = Wasabi
- env_auth = false
- access_key_id = YOURACCESSKEY
- secret_access_key = YOURSECRETACCESSKEY
- region =
- endpoint = s3.wasabisys.com
- location_constraint =
- acl =
- server_side_encryption =
- storage_class =
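- Once saved, the remote can be used like any other S3 remote; for
- example, with a hypothetical bucket called "my-bucket":
- rclone mkdir wasabi:my-bucket
- rclone copy /path/to/files wasabi:my-bucket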
- Aliyun OSS / Netease NOS
- This describes how to set up Aliyun OSS - Netease NOS is the same except
- for different endpoints.
- Note this is a pretty standard S3 setup, except for the setting of
- force_path_style = false in the advanced config.
- # rclone config
- e/n/d/r/c/s/q> n
- name> oss
- Type of storage to configure.
- Enter a string value. Press Enter for the default ("").
- Choose a number from below, or type in your own value
- 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
- \ "s3"
- Storage> s3
- Choose your S3 provider.
- Enter a string value. Press Enter for the default ("").
- Choose a number from below, or type in your own value
- 8 / Any other S3 compatible provider
- \ "Other"
- provider> other
- Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- Only applies if access_key_id and secret_access_key is blank.
- Enter a boolean value (true or false). Press Enter for the default ("false").
- Choose a number from below, or type in your own value
- 1 / Enter AWS credentials in the next step
- \ "false"
- 2 / Get AWS credentials from the environment (env vars or IAM)
- \ "true"
- env_auth> 1
- AWS Access Key ID.
- Leave blank for anonymous access or runtime credentials.
- Enter a string value. Press Enter for the default ("").
- access_key_id> xxxxxxxxxxxx
- AWS Secret Access Key (password)
- Leave blank for anonymous access or runtime credentials.
- Enter a string value. Press Enter for the default ("").
- secret_access_key> xxxxxxxxxxxxxxxxx
- Region to connect to.
- Leave blank if you are using an S3 clone and you don't have a region.
- Enter a string value. Press Enter for the default ("").
- Choose a number from below, or type in your own value
- 1 / Use this if unsure. Will use v4 signatures and an empty region.
- \ ""
- 2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
- \ "other-v2-signature"
- region> 1
- Endpoint for S3 API.
- Required when using an S3 clone.
- Enter a string value. Press Enter for the default ("").
- Choose a number from below, or type in your own value
- endpoint> oss-cn-shenzhen.aliyuncs.com
- Location constraint - must be set to match the Region.
- Leave blank if not sure. Used when creating buckets only.
- Enter a string value. Press Enter for the default ("").
- location_constraint>
- Canned ACL used when creating buckets and/or storing objects in S3.
- For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
- Enter a string value. Press Enter for the default ("").
- Choose a number from below, or type in your own value
- 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
- \ "private"
- acl> 1
- Edit advanced config? (y/n)
- y) Yes
- n) No
- y/n> y
- Chunk size to use for uploading
- Enter a size with suffix k,M,G,T. Press Enter for the default ("5M").
- chunk_size>
- Don't store MD5 checksum with object metadata
- Enter a boolean value (true or false). Press Enter for the default ("false").
- disable_checksum>
- An AWS session token
- Enter a string value. Press Enter for the default ("").
- session_token>
- Concurrency for multipart uploads.
- Enter a signed integer. Press Enter for the default ("2").
- upload_concurrency>
- If true use path style access if false use virtual hosted style.
- Some providers (eg Aliyun OSS or Netease COS) require this.
- Enter a boolean value (true or false). Press Enter for the default ("true").
- force_path_style> false
- Remote config
- --------------------
- [oss]
- type = s3
- provider = Other
- env_auth = false
- access_key_id = xxxxxxxxx
- secret_access_key = xxxxxxxxxxxxx
- endpoint = oss-cn-shenzhen.aliyuncs.com
- acl = private
- force_path_style = false
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
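- The remote can then be used in the usual way; for example (the bucket
- name is illustrative):
- rclone lsd oss:
- rclone copy /path/to/files oss:my-bucket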
- Backblaze B2
- B2 is Backblaze's cloud storage system.
- Paths are specified as remote:bucket (or remote: for the lsd command).
- You may put subdirectories in too, eg remote:bucket/path/to/dir.
- Here is an example of making a b2 configuration. First run
- rclone config
- This will guide you through an interactive setup process. You will need
- your account number (a short hex number) and key (a long hex number)
- which you can get from the b2 control panel.
- No remotes found - make a new one
- n) New remote
- q) Quit config
- n/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 7 / Google Drive
- \ "drive"
- 8 / Hubic
- \ "hubic"
- 9 / Local Disk
- \ "local"
- 10 / Microsoft OneDrive
- \ "onedrive"
- 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 12 / SSH/SFTP Connection
- \ "sftp"
- 13 / Yandex Disk
- \ "yandex"
- Storage> 3
- Account ID or Application Key ID
- account> 123456789abc
- Application Key
- key> 0123456789abcdef0123456789abcdef0123456789
- Endpoint for the service - leave blank normally.
- endpoint>
- Remote config
- --------------------
- [remote]
- account = 123456789abc
- key = 0123456789abcdef0123456789abcdef0123456789
- endpoint =
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- This remote is called remote and can now be used like this
- See all buckets
- rclone lsd remote:
- Create a new bucket
- rclone mkdir remote:bucket
- List the contents of a bucket
- rclone ls remote:bucket
- Sync /home/local/directory to the remote bucket, deleting any excess
- files in the bucket.
- rclone sync /home/local/directory remote:bucket
- Application Keys
- B2 supports multiple Application Keys for different access permission to
- B2 Buckets.
- You can use these with rclone too.
- Follow Backblaze's docs to create an Application Key with the required
- permission and add the Application Key ID as the account and the
- Application Key itself as the key.
- Note that you must put the Application Key ID as the account - you can't
- use the master Account ID. If you try then B2 will return 401 errors.
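- A minimal sketch of the resulting config section (the values are
- placeholders, not real credentials):
- [remote]
- type = b2
- account = <your Application Key ID>
- key = <your Application Key>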
- --fast-list
- This remote supports --fast-list which allows you to use fewer
- transactions in exchange for more memory. See the rclone docs for more
- details.
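- For example, a sync using fewer transactions at the cost of more memory
- might look like this:
- rclone sync --fast-list /home/local/directory remote:bucket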
- Modified time
- The modified time is stored as metadata on the object as
- X-Bz-Info-src_last_modified_millis as milliseconds since 1970-01-01 in
- the Backblaze standard. Other tools should be able to use this as a
- modified time.
- Modified times are used in syncing and are fully supported except in the
- case of updating a modification time on an existing object. In this case
- the object will be uploaded again as B2 doesn't have an API method to
- set the modification time independent of doing an upload.
- SHA1 checksums
- The SHA1 checksums of the files are checked on upload and download and
- will be used in the syncing process.
- Large files (bigger than the limit in --b2-upload-cutoff) which are
- uploaded in chunks will store their SHA1 on the object as
- X-Bz-Info-large_file_sha1 as recommended by Backblaze.
- For a large file to be uploaded with an SHA1 checksum, the source needs
- to support SHA1 checksums. The local disk supports SHA1 checksums so
- large file transfers from local disk will have an SHA1. See the overview
- for exactly which remotes support SHA1.
- Sources which don't support SHA1, in particular crypt, will upload large
- files without SHA1 checksums. This may be fixed in the future (see
- #1767).
- File sizes below --b2-upload-cutoff will always have an SHA1 regardless
- of the source.
- Transfers
- Backblaze recommends that you do lots of transfers simultaneously for
- maximum speed. In tests from my SSD-equipped laptop the optimum setting
- is about --transfers 32 though higher numbers may be used for a slight
- speed improvement. The optimum number for you may vary depending on your
- hardware, how big the files are, how much you want to load your
- computer, etc. The default of --transfers 4 is definitely too low for
- Backblaze B2 though.
- Note that uploading big files (bigger than 200 MB by default) will use a
- 96 MB RAM buffer by default. There can be at most --transfers of these
- in use at any moment, so this sets the upper limit on the memory used.
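- For example, to copy with 32 simultaneous transfers (the source path is
- illustrative):
- rclone copy --transfers 32 /path/to/files remote:bucket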
- Versions
- When rclone uploads a new version of a file, B2 stores it as a new
- version rather than overwriting the old one. Likewise when you delete a
- file, the old version will be marked
- hidden and still be available. Conversely, you may opt in to a "hard
- delete" of files with the --b2-hard-delete flag which would permanently
- remove the file instead of hiding it.
- Old versions of files, where available, are visible using the
- --b2-versions flag.
- If you wish to remove all the old versions then you can use the
- rclone cleanup remote:bucket command which will delete all the old
- versions of files, leaving the current ones intact. You can also supply
- a path and only old versions under that path will be deleted, eg
- rclone cleanup remote:bucket/path/to/stuff.
- Note that cleanup does not remove partially uploaded files from the
- bucket.
- When you purge a bucket, the current and the old versions will be
- deleted then the bucket will be deleted.
- However delete will cause the current versions of the files to become
- hidden old versions.
- Here is a session showing the listing and retrieval of an old version
- followed by a cleanup of the old versions.
- Show current version and all the versions with --b2-versions flag.
- $ rclone -q ls b2:cleanup-test
- 9 one.txt
- $ rclone -q --b2-versions ls b2:cleanup-test
- 9 one.txt
- 8 one-v2016-07-04-141032-000.txt
- 16 one-v2016-07-04-141003-000.txt
- 15 one-v2016-07-02-155621-000.txt
- Retrieve an old version
- $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
- $ ls -l /tmp/one-v2016-07-04-141003-000.txt
- -rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
- Clean up all the old versions and show that they've gone.
- $ rclone -q cleanup b2:cleanup-test
- $ rclone -q ls b2:cleanup-test
- 9 one.txt
- $ rclone -q --b2-versions ls b2:cleanup-test
- 9 one.txt
- Data usage
- It is useful to know how many requests are sent to the server in
- different scenarios.
- All copy commands send the following 4 requests:
- /b2api/v1/b2_authorize_account
- /b2api/v1/b2_create_bucket
- /b2api/v1/b2_list_buckets
- /b2api/v1/b2_list_file_names
- The b2_list_file_names request will be sent once for every 1k files in
- the remote path, providing the checksum and modification time of the
- listed files. As of version 1.33 issue #818 causes extra requests to be
- sent when using B2 with Crypt. When a copy operation does not require
- any files to be uploaded, no more requests will be sent.
- Uploading files that do not require chunking will send 2 requests per
- file upload:
- /b2api/v1/b2_get_upload_url
- /b2api/v1/b2_upload_file/
- Uploading files requiring chunking will send 2 requests (one each to
- start and finish the upload) and another 2 requests for each chunk:
- /b2api/v1/b2_start_large_file
- /b2api/v1/b2_get_upload_part_url
- /b2api/v1/b2_upload_part/
- /b2api/v1/b2_finish_large_file
- Versions
- Versions can be viewed with the --b2-versions flag. When it is set rclone
- will show and act on older versions of files. For example
- Listing without --b2-versions
- $ rclone -q ls b2:cleanup-test
- 9 one.txt
- And with
- $ rclone -q --b2-versions ls b2:cleanup-test
- 9 one.txt
- 8 one-v2016-07-04-141032-000.txt
- 16 one-v2016-07-04-141003-000.txt
- 15 one-v2016-07-02-155621-000.txt
- Showing that the current version is unchanged but older versions can be
- seen. These have the UTC date that they were uploaded to the server to
- the nearest millisecond appended to them.
- Note that when using --b2-versions no file write operations are
- permitted, so you can't upload files or delete them.
- Standard Options
- Here are the standard options specific to b2 (Backblaze B2).
- --b2-account
- Account ID or Application Key ID
- - Config: account
- - Env Var: RCLONE_B2_ACCOUNT
- - Type: string
- - Default: ""
- --b2-key
- Application Key
- - Config: key
- - Env Var: RCLONE_B2_KEY
- - Type: string
- - Default: ""
- --b2-hard-delete
- Permanently delete files on remote removal, otherwise hide files.
- - Config: hard_delete
- - Env Var: RCLONE_B2_HARD_DELETE
- - Type: bool
- - Default: false
- Advanced Options
- Here are the advanced options specific to b2 (Backblaze B2).
- --b2-endpoint
- Endpoint for the service. Leave blank normally.
- - Config: endpoint
- - Env Var: RCLONE_B2_ENDPOINT
- - Type: string
- - Default: ""
- --b2-test-mode
- A flag string for X-Bz-Test-Mode header for debugging.
- This is for debugging purposes only. Setting it to one of the strings
- below will cause b2 to return specific errors:
- - "fail_some_uploads"
- - "expire_some_account_authorization_tokens"
- - "force_cap_exceeded"
- These will be set in the "X-Bz-Test-Mode" header which is documented in
- the b2 integrations checklist.
- - Config: test_mode
- - Env Var: RCLONE_B2_TEST_MODE
- - Type: string
- - Default: ""
- --b2-versions
- Include old versions in directory listings. Note that when using this no
- file write operations are permitted, so you can't upload files or delete
- them.
- - Config: versions
- - Env Var: RCLONE_B2_VERSIONS
- - Type: bool
- - Default: false
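- As with the other options, the flag and its environment variable are
- equivalent, so these two listings behave the same:
- rclone --b2-versions ls remote:bucket
- RCLONE_B2_VERSIONS=true rclone ls remote:bucket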
- --b2-upload-cutoff
- Cutoff for switching to chunked upload.
- Files above this size will be uploaded in chunks of "--b2-chunk-size".
- This value should be set no larger than 4.657GiB (== 5GB).
- - Config: upload_cutoff
- - Env Var: RCLONE_B2_UPLOAD_CUTOFF
- - Type: SizeSuffix
- - Default: 200M
- --b2-chunk-size
- Upload chunk size. Must fit in memory.
- When uploading large files, chunk the file into this size. Note that
- these chunks are buffered in memory and there might be a maximum of
- "--transfers" chunks in progress at once. 5,000,000 Bytes is the minimum
- size.
- - Config: chunk_size
- - Env Var: RCLONE_B2_CHUNK_SIZE
- - Type: SizeSuffix
- - Default: 96M
- Box
- Paths are specified as remote:path
- Paths may be as deep as required, eg remote:directory/subdirectory.
- The initial setup for Box involves getting a token from Box which you
- need to do in your browser. rclone config walks you through it.
- Here is an example of how to make a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Box
- \ "box"
- 5 / Dropbox
- \ "dropbox"
- 6 / Encrypt/Decrypt a remote
- \ "crypt"
- 7 / FTP Connection
- \ "ftp"
- 8 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 9 / Google Drive
- \ "drive"
- 10 / Hubic
- \ "hubic"
- 11 / Local Disk
- \ "local"
- 12 / Microsoft OneDrive
- \ "onedrive"
- 13 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 14 / SSH/SFTP Connection
- \ "sftp"
- 15 / Yandex Disk
- \ "yandex"
- 16 / http Connection
- \ "http"
- Storage> box
- Box App Client Id - leave blank normally.
- client_id>
- Box App Client Secret - leave blank normally.
- client_secret>
- Remote config
- Use auto config?
- * Say Y if not sure
- * Say N if you are working on a remote or headless machine
- y) Yes
- n) No
- y/n> y
- If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
- Log in and authorize rclone for access
- Waiting for code...
- Got code
- --------------------
- [remote]
- client_id =
- client_secret =
- token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- See the remote setup docs for how to set it up on a machine with no
- Internet browser available.
- Note that rclone runs a webserver on your local machine to collect the
- token as returned from Box. This only runs from the moment it opens your
- browser to the moment you get back the verification code. This is on
- http://127.0.0.1:53682/ and it may require you to unblock it
- temporarily if you are running a host firewall.
- Once configured you can then use rclone like this,
- List directories in top level of your Box
- rclone lsd remote:
- List all the files in your Box
- rclone ls remote:
- To copy a local directory to a Box directory called backup
- rclone copy /home/source remote:backup
- Invalid refresh token
- According to the box docs:
- Each refresh_token is valid for one use in 60 days.
- This means that if you
- - Don't use the box remote for 60 days
- - Copy the config file with a box refresh token in and use it in two
- places
- - Get an error on a token refresh
- then rclone will return an error which includes the text
- Invalid refresh token.
- To fix this you will need to use oauth2 again to update the refresh
- token. You can use the methods in the remote setup docs, bearing in mind
- that if you use the copy the config file method, you should not use that
- remote on the computer you did the authentication on.
- Here is how to do it.
- $ rclone config
- Current remotes:
- Name Type
- ==== ====
- remote box
- e) Edit existing remote
- n) New remote
- d) Delete remote
- r) Rename remote
- c) Copy remote
- s) Set configuration password
- q) Quit config
- e/n/d/r/c/s/q> e
- Choose a number from below, or type in an existing value
- 1 > remote
- remote> remote
- --------------------
- [remote]
- type = box
- token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"2017-07-08T23:40:08.059167677+01:00"}
- --------------------
- Edit remote
- Value "client_id" = ""
- Edit? (y/n)>
- y) Yes
- n) No
- y/n> n
- Value "client_secret" = ""
- Edit? (y/n)>
- y) Yes
- n) No
- y/n> n
- Remote config
- Already have a token - refresh?
- y) Yes
- n) No
- y/n> y
- Use auto config?
- * Say Y if not sure
- * Say N if you are working on a remote or headless machine
- y) Yes
- n) No
- y/n> y
- If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
- Log in and authorize rclone for access
- Waiting for code...
- Got code
- --------------------
- [remote]
- type = box
- token = {"access_token":"YYY","token_type":"bearer","refresh_token":"YYY","expiry":"2017-07-23T12:22:29.259137901+01:00"}
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- Modified time and hashes
- Box allows modification times to be set on objects accurate to 1 second.
- These will be used to detect whether objects need syncing or not.
- Box supports SHA1 type hashes, so you can use the --checksum flag.
- Transfers
- For files above 50MB rclone will use a chunked transfer. Rclone will
- upload up to --transfers chunks at the same time (shared among all the
- multipart uploads). Chunks are buffered in memory and are normally 8MB
- so increasing --transfers will increase memory use.
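- Since up to --transfers chunks of about 8MB each are buffered at once,
- lowering --transfers reduces memory use during multipart uploads; for
- example:
- rclone copy --transfers 2 /home/source remote:backup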
- Deleting files
- Depending on the enterprise settings for your user, the item will either
- be actually deleted from Box or moved to the trash.
- Standard Options
- Here are the standard options specific to box (Box).
- --box-client-id
- Box App Client Id. Leave blank normally.
- - Config: client_id
- - Env Var: RCLONE_BOX_CLIENT_ID
- - Type: string
- - Default: ""
- --box-client-secret
- Box App Client Secret. Leave blank normally.
- - Config: client_secret
- - Env Var: RCLONE_BOX_CLIENT_SECRET
- - Type: string
- - Default: ""
- Advanced Options
- Here are the advanced options specific to box (Box).
- --box-upload-cutoff
- Cutoff for switching to multipart upload (>= 50MB).
- - Config: upload_cutoff
- - Env Var: RCLONE_BOX_UPLOAD_CUTOFF
- - Type: SizeSuffix
- - Default: 50M
- --box-commit-retries
- Max number of times to try committing a multipart file.
- - Config: commit_retries
- - Env Var: RCLONE_BOX_COMMIT_RETRIES
- - Type: int
- - Default: 100
- Limitations
- Note that Box is case insensitive so you can't have a file called
- "Hello.doc" and one called "hello.doc".
- Box file names can't have the \ character in them. rclone maps this to
- and from an identical-looking unicode equivalent ＼.
- Box only supports filenames up to 255 characters in length.
- Cache (BETA)
- The cache remote wraps another existing remote and stores file structure
- and its data for long running tasks like rclone mount.
- To get started you just need to have an existing remote which can be
- configured with cache.
- Here is an example of how to make a remote called test-cache. First run:
- rclone config
- This will guide you through an interactive setup process:
- No remotes found - make a new one
- n) New remote
- r) Rename remote
- c) Copy remote
- s) Set configuration password
- q) Quit config
- n/r/c/s/q> n
- name> test-cache
- Type of storage to configure.
- Choose a number from below, or type in your own value
- ...
- 5 / Cache a remote
- \ "cache"
- ...
- Storage> 5
- Remote to cache.
- Normally should contain a ':' and a path, eg "myremote:path/to/dir",
- "myremote:bucket" or maybe "myremote:" (not recommended).
- remote> local:/test
- Optional: The URL of the Plex server
- plex_url> http://127.0.0.1:32400
- Optional: The username of the Plex user
- plex_username> dummyusername
- Optional: The password of the Plex user
- y) Yes type in my own password
- g) Generate random password
- n) No leave this optional password blank
- y/g/n> y
- Enter the password:
- password:
- Confirm the password:
- password:
- The size of a chunk. Lower value good for slow connections but can affect seamless reading.
- Default: 5M
- Choose a number from below, or type in your own value
- 1 / 1MB
- \ "1m"
- 2 / 5 MB
- \ "5M"
- 3 / 10 MB
- \ "10M"
- chunk_size> 2
- How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
- Accepted units are: "s", "m", "h".
- Default: 5m
- Choose a number from below, or type in your own value
- 1 / 1 hour
- \ "1h"
- 2 / 24 hours
- \ "24h"
- 3 / 48 hours
- \ "48h"
- info_age> 3
- The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
- Default: 10G
- Choose a number from below, or type in your own value
- 1 / 500 MB
- \ "500M"
- 2 / 1 GB
- \ "1G"
- 3 / 10 GB
- \ "10G"
- chunk_total_size> 3
- Remote config
- --------------------
- [test-cache]
- remote = local:/test
- plex_url = http://127.0.0.1:32400
- plex_username = dummyusername
- plex_password = *** ENCRYPTED ***
- chunk_size = 5M
- info_age = 48h
- chunk_total_size = 10G
- You can then use it like this,
- List directories in top level of your drive
- rclone lsd test-cache:
- List all the files in your drive
- rclone ls test-cache:
- To start a cached mount
- rclone mount --allow-other test-cache: /var/tmp/test-cache
- Write Features
- Offline uploading
- In an effort to make writing through cache more reliable, the backend
- now supports this feature which can be activated by specifying a
- cache-tmp-upload-path.
- A file goes through these states when using this feature:
- 1. An upload is started (usually by copying a file on the cache remote)
- 2. When the copy to the temporary location is complete the file is part
- of the cached remote and looks and behaves like any other file
- (reading included)
- 3. After cache-tmp-wait-time passes and the file is next in line,
- rclone move is used to move the file to the cloud provider
- 4. Reading the file still works during the upload but most
- modifications on it will be prohibited
- 5. Once the move is complete the file is unlocked for modifications as
- it becomes as any other regular file
- 6. If the file is being read through cache when it's actually deleted
- from the temporary path then cache will simply swap the source to
- the cloud provider without interrupting the reading (small blip can
- happen though)
- Files are uploaded in sequence and only one file is uploaded at a time.
- Uploads will be stored in a queue and be processed based on the order
- they were added. The queue and the temporary storage are persistent
- across restarts but can be cleared on startup with the --cache-db-purge
- flag.
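- A sketch of a mount with offline uploading enabled, using the flags
- documented below (the upload path and wait time are illustrative):
- rclone mount --allow-other --cache-tmp-upload-path /tmp/rclone/upload --cache-tmp-wait-time 1m test-cache: /var/tmp/test-cache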
- Write Support
- Writes are supported through cache. One caveat is that a mounted cache
- remote does not add any retry or fallback mechanism to the upload
- operation. This will depend on the implementation of the wrapped remote.
- Consider using Offline uploading for reliable writes.
- One special case is covered with cache-writes: when enabled, the file
- data is cached at the same time as the upload, making it available from
- the cache store immediately once the upload is finished.
- Read Features
- Multiple connections
- To counter the high latency between a local PC where rclone is running
- and cloud providers, the cache remote can issue multiple requests to the
- cloud provider for smaller file chunks and combine them locally, so that
- the data is usually available before the reader needs it.
- This is similar to buffering when media files are played online. Rclone
- will stay around the current read position but always try its best to
- stay ahead and prepare the data in advance.
- Plex Integration
- There is a direct integration with Plex which allows cache to detect
- during reading if the file is in playback or not. This helps cache to
- adapt how it queries the cloud provider depending on what the data is needed for.
- Scans will have a minimum amount of workers (1) while in a confirmed
- playback cache will deploy the configured number of workers.
- This integration opens the doorway to additional performance
- improvements which will be explored in the near future.
- NOTE: If Plex options are not configured, cache will function with its
- configured options without adapting any of its settings.
- How to enable? Run rclone config and add all the Plex options (endpoint,
- username and password) in your remote and it will be automatically
- enabled.
- Affected settings:
- cache-workers: _configured value_ during confirmed playback, or _1_ at
- all other times
- Certificate Validation
- When the Plex server is configured to only accept secure connections, it
- is possible to use .plex.direct URLs to ensure certificate validation
- succeeds. These URLs are used by Plex internally to connect to the Plex
- server securely.
- The format of these URLs is the following:
- https://ip-with-dots-replaced.server-hash.plex.direct:32400/
- The ip-with-dots-replaced part can be any IPv4 address, where the dots
- have been replaced with dashes, e.g. 127.0.0.1 becomes 127-0-0-1.
- To get the server-hash part, the easiest way is to visit
- https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token
- This page will list all the available Plex servers for your account with
- at least one .plex.direct link for each. Copy one URL and replace the IP
- address with the desired address. This can be used as the plex_url
- value.
- Known issues
- Mount and --dir-cache-time
- --dir-cache-time controls the first layer of directory caching which
- works at the mount layer. Being an independent caching mechanism from
- the cache backend, it will manage its own entries based on the
- configured time.
- To avoid getting in a scenario where dir cache has obsolete data and
- cache would have the correct one, try to set --dir-cache-time to a lower
- time than --cache-info-age. Default values are already configured in
- this way.
- Windows support - Experimental
- There are a couple of issues with Windows mount functionality that still
- require some investigation. It should be considered experimental for now
- while fixes for this OS come in.
- Most of the issues seem to be related to the difference between
- filesystems on Linux flavors and Windows, as cache is heavily dependent
- on them.
- Any reports or feedback on how cache behaves on this OS is greatly
- appreciated.
- - https://github.com/ncw/rclone/issues/1935
- - https://github.com/ncw/rclone/issues/1907
- - https://github.com/ncw/rclone/issues/1834
- Risk of throttling
- Future iterations of the cache backend will make use of the pooling
- functionality of the cloud provider to synchronize and at the same time
- make writing through it more tolerant to failures.
- There are a couple of enhancements being tracked to add these, but in the
- meantime there is a valid concern that the expiring cache listings can
- lead to cloud provider throttles or bans due to repeated queries on it
- for very large mounts.
- Some recommendations:
- don't use a very small interval for entry information
- (--cache-info-age)
- while writes aren't yet optimised, you can still write through cache,
- which gives you the advantage of adding the file to the cache at the
- same time if configured to do so
- Future enhancements:
- - https://github.com/ncw/rclone/issues/1937
- - https://github.com/ncw/rclone/issues/1936
- cache and crypt
- One common scenario is to keep your data encrypted in the cloud provider
- using the crypt remote. crypt uses a similar technique to wrap around an
- existing remote and handles this translation in a seamless way.
- There is an issue with wrapping the remotes in this order: CLOUD REMOTE
- -> CRYPT -> CACHE
- During testing, I experienced a lot of bans with the remotes in this
- order. I suspect it might be related to how crypt opens files on the
- cloud provider which makes it think we're downloading the full file
- instead of small chunks. Organizing the remotes in this order yields
- better results: CLOUD REMOTE -> CACHE -> CRYPT
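- A sketch of that recommended ordering in the config file, using
- illustrative remote names (gdrive: stands in for any already-configured
- cloud remote, and the crypt section would also need its usual password
- settings):
- [gcache]
- type = cache
- remote = gdrive:encrypted
- [gcrypt]
- type = crypt
- remote = gcache:
- You would then point rclone at gcrypt: so that all reads and writes pass
- through the cache before crypt translates them.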
- absolute remote paths
- cache can not differentiate between relative and absolute paths for the
- wrapped remote. Any path given in the remote config setting and on the
- command line will be passed to the wrapped remote as is, but for storing
- the chunks on disk the path will be made relative by removing any
- leading / character.
- This behavior is irrelevant for most backend types, but there are
- backends where a leading / changes the effective directory, e.g. in the
- sftp backend paths starting with a / are relative to the root of the SSH
- server and paths without are relative to the user home directory. As a
- result sftp:bin and sftp:/bin will share the same cache folder, even if
- they represent a different directory on the SSH server.
- Cache and Remote Control (--rc)
- Cache supports the new --rc mode in rclone and can be remote controlled
- through the following endpoints. Note that the listener is disabled by
- default and is only started if you add the --rc flag.
- rc cache/expire
- Purge a remote from the cache backend. Supports either a directory or a
- file. It supports both encrypted and unencrypted file names if cache is
- wrapped by crypt.
- Params:
- REMOTE = path to remote (REQUIRED)
- WITHDATA = true/false to delete cached data (chunks) as well (optional,
- false by default)
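- For example, assuming rclone is already running with the --rc flag, a
- directory and its chunks could be purged with something like:
- rclone rc cache/expire remote=path/to/dir withData=true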
- Standard Options
- Here are the standard options specific to cache (Cache a remote).
- --cache-remote
- Remote to cache. Normally should contain a ':' and a path, eg
- "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not
- recommended).
- - Config: remote
- - Env Var: RCLONE_CACHE_REMOTE
- - Type: string
- - Default: ""
- --cache-plex-url
- The URL of the Plex server
- - Config: plex_url
- - Env Var: RCLONE_CACHE_PLEX_URL
- - Type: string
- - Default: ""
- --cache-plex-username
- The username of the Plex user
- - Config: plex_username
- - Env Var: RCLONE_CACHE_PLEX_USERNAME
- - Type: string
- - Default: ""
- --cache-plex-password
- The password of the Plex user
- - Config: plex_password
- - Env Var: RCLONE_CACHE_PLEX_PASSWORD
- - Type: string
- - Default: ""
- --cache-chunk-size
- The size of a chunk (partial file data).
- Use lower numbers for slower connections. If the chunk size is changed,
- any downloaded chunks will be invalid and cache-chunk-path will need to
- be cleared or unexpected EOF errors will occur.
- - Config: chunk_size
- - Env Var: RCLONE_CACHE_CHUNK_SIZE
- - Type: SizeSuffix
- - Default: 5M
- - Examples:
- - "1m"
- - 1MB
- - "5M"
- - 5 MB
- - "10M"
- - 10 MB
- --cache-info-age
- How long to cache file structure information (directory listings, file
- size, times etc). If all write operations are done through the cache
- then you can safely make this value very large as the cache store will
- also be updated in real time.
- - Config: info_age
- - Env Var: RCLONE_CACHE_INFO_AGE
- - Type: Duration
- - Default: 6h0m0s
- - Examples:
- - "1h"
- - 1 hour
- - "24h"
- - 24 hours
- - "48h"
- - 48 hours
- --cache-chunk-total-size
- The total size that the chunks can take up on the local disk.
- If the cache exceeds this value then it will start to delete the oldest
- chunks until it goes under this value.
- - Config: chunk_total_size
- - Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
- - Type: SizeSuffix
- - Default: 10G
- - Examples:
- - "500M"
- - 500 MB
- - "1G"
- - 1 GB
- - "10G"
- - 10 GB
- Advanced Options
- Here are the advanced options specific to cache (Cache a remote).
- --cache-plex-token
- The plex token for authentication - auto set normally
- - Config: plex_token
- - Env Var: RCLONE_CACHE_PLEX_TOKEN
- - Type: string
- - Default: ""
- --cache-plex-insecure
- Skip all certificate verifications when connecting to the Plex server
- - Config: plex_insecure
- - Env Var: RCLONE_CACHE_PLEX_INSECURE
- - Type: string
- - Default: ""
- --cache-db-path
- Directory to store file structure metadata DB. The remote name is used
- as the DB file name.
- - Config: db_path
- - Env Var: RCLONE_CACHE_DB_PATH
- - Type: string
- - Default: "/home/ncw/.cache/rclone/cache-backend"
- --cache-chunk-path
- Directory to cache chunk files.
- Path to where partial file data (chunks) are stored locally. The remote
- name is appended to the final path.
- This config follows the "--cache-db-path". If you specify a custom
- location for "--cache-db-path" and don't specify one for
- "--cache-chunk-path" then "--cache-chunk-path" will use the same path as
- "--cache-db-path".
- - Config: chunk_path
- - Env Var: RCLONE_CACHE_CHUNK_PATH
- - Type: string
- - Default: "/home/ncw/.cache/rclone/cache-backend"
- --cache-db-purge
- Clear all the cached data for this remote on start.
- - Config: db_purge
- - Env Var: RCLONE_CACHE_DB_PURGE
- - Type: bool
- - Default: false
- --cache-chunk-clean-interval
- How often should the cache perform cleanups of the chunk storage. The
- default value should be ok for most people. If you find that the cache
- goes over "cache-chunk-total-size" too often then try to lower this
- value to force it to perform cleanups more often.
- - Config: chunk_clean_interval
- - Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
- - Type: Duration
- - Default: 1m0s
- --cache-read-retries
- How many times to retry a read from a cache storage.
- Since reading from a cache stream is independent from downloading file
- data, readers can get to a point where there's no more data in the
- cache. Most of the time this can indicate a connectivity issue if cache
- isn't able to provide file data anymore.
- For really slow connections, increase this to a point where the stream
- is able to provide data, but expect playback to be very stuttery.
- - Config: read_retries
- - Env Var: RCLONE_CACHE_READ_RETRIES
- - Type: int
- - Default: 10
- --cache-workers
- How many workers should run in parallel to download chunks.
- Higher values will mean more parallel processing (more CPU needed) and
- more concurrent requests on the cloud provider. This impacts several
- aspects like the cloud provider API limits and the stress on the
- hardware that rclone runs on, but it also means that streams will be
- more fluid and data will be available to readers much sooner.
- NOTE: If the optional Plex integration is enabled then this setting will
- adapt to the type of reading performed and the value specified here will
- be used as a maximum number of workers to use.
- - Config: workers
- - Env Var: RCLONE_CACHE_WORKERS
- - Type: int
- - Default: 4
- --cache-chunk-no-memory
- Disable the in-memory cache for storing chunks during streaming.
- By default, cache will keep file data during streaming in RAM as well to
- provide it to readers as fast as possible.
- This transient data is evicted as soon as it is read and the number of
- chunks stored doesn't exceed the number of workers. However, depending
- on other settings like "cache-chunk-size" and "cache-workers" this
- footprint can increase if there are parallel streams too (multiple files
- being read at the same time).
- If the hardware permits it, leave the in-memory cache enabled for an
- overall better performance during streaming; it can be disabled with
- this flag if RAM is not available on the local machine.
- - Config: chunk_no_memory
- - Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
- - Type: bool
- - Default: false
- --cache-rps
- Limits the number of requests per second to the source FS (-1 to
- disable)
- This setting places a hard limit on the number of requests per second
- that cache will be doing to the cloud provider remote and try to respect
- that value by setting waits between reads.
- If you find that you're getting banned or limited on the cloud provider
- through cache and know that a smaller number of requests per second will
- allow you to work with it then you can use this setting for that.
- A good balance of all the other settings should make this setting
- unnecessary, but it is available for more special cases.
- NOTE: This will limit the number of requests during streams but other
- API calls to the cloud provider like directory listings will still pass.
- - Config: rps
- - Env Var: RCLONE_CACHE_RPS
- - Type: int
- - Default: -1
- --cache-writes
- Cache file data on writes through the FS
- If you need to read files immediately after you upload them through
- cache you can enable this flag to have their data stored in the cache
- store at the same time during upload.
- - Config: writes
- - Env Var: RCLONE_CACHE_WRITES
- - Type: bool
- - Default: false
- --cache-tmp-upload-path
- Directory to keep temporary files until they are uploaded.
- This is the path that cache will use as temporary storage for new files
- that need to be uploaded to the cloud provider.
- Specifying a value will enable this feature. Without it, the feature is
- completely disabled and files will be uploaded directly to the cloud
- provider.
- - Config: tmp_upload_path
- - Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
- - Type: string
- - Default: ""
- --cache-tmp-wait-time
- How long should files be stored in local cache before being uploaded
- This is the duration that a file must wait in the temporary location
- _cache-tmp-upload-path_ before it is selected for upload.
- Note that only one file is uploaded at a time and it can take longer to
- start the upload if a queue has formed for this purpose.
- - Config: tmp_wait_time
- - Env Var: RCLONE_CACHE_TMP_WAIT_TIME
- - Type: Duration
- - Default: 15s
- --cache-db-wait-time
- How long to wait for the DB to be available - 0 is unlimited
- Only one process can have the DB open at any one time, so rclone waits
- for this duration for the DB to become available before it gives an
- error.
- If you set it to 0 then it will wait forever.
- - Config: db_wait_time
- - Env Var: RCLONE_CACHE_DB_WAIT_TIME
- - Type: Duration
- - Default: 1s
- Crypt
- The crypt remote encrypts and decrypts another remote.
- To use it first set up the underlying remote following the config
- instructions for that remote. You can also use a local pathname instead
- of a remote which will encrypt and decrypt from that directory which
- might be useful for encrypting onto a USB stick for example.
- First check your chosen remote is working - we'll call it remote:path in
- these docs. Note that anything inside remote:path will be encrypted and
- anything outside won't. This means that if you are using a bucket based
- remote (eg S3, B2, swift) then you should probably put the bucket in the
- remote s3:bucket. If you just use s3: then rclone will make encrypted
- bucket names too (if using file name encryption) which may or may not be
- what you want.
- Now configure crypt using rclone config. We will call this one secret to
- differentiate it from the remote.
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> secret
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 7 / Google Drive
- \ "drive"
- 8 / Hubic
- \ "hubic"
- 9 / Local Disk
- \ "local"
- 10 / Microsoft OneDrive
- \ "onedrive"
- 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 12 / SSH/SFTP Connection
- \ "sftp"
- 13 / Yandex Disk
- \ "yandex"
- Storage> 5
- Remote to encrypt/decrypt.
- Normally should contain a ':' and a path, eg "myremote:path/to/dir",
- "myremote:bucket" or maybe "myremote:" (not recommended).
- remote> remote:path
- How to encrypt the filenames.
- Choose a number from below, or type in your own value
- 1 / Don't encrypt the file names. Adds a ".bin" extension only.
- \ "off"
- 2 / Encrypt the filenames see the docs for the details.
- \ "standard"
- 3 / Very simple filename obfuscation.
- \ "obfuscate"
- filename_encryption> 2
- Option to either encrypt directory names or leave them intact.
- Choose a number from below, or type in your own value
- 1 / Encrypt directory names.
- \ "true"
- 2 / Don't encrypt directory names, leave them intact.
- \ "false"
- directory_name_encryption> 1
- Password or pass phrase for encryption.
- y) Yes type in my own password
- g) Generate random password
- y/g> y
- Enter the password:
- password:
- Confirm the password:
- password:
- Password or pass phrase for salt. Optional but recommended.
- Should be different to the previous password.
- y) Yes type in my own password
- g) Generate random password
- n) No leave this optional password blank
- y/g/n> g
- Password strength in bits.
- 64 is just about memorable
- 128 is secure
- 1024 is the maximum
- Bits> 128
- Your password is: JAsJvRcgR-_veXNfy_sGmQ
- Use this password?
- y) Yes
- n) No
- y/n> y
- Remote config
- --------------------
- [secret]
- remote = remote:path
- filename_encryption = standard
- password = *** ENCRYPTED ***
- password2 = *** ENCRYPTED ***
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- IMPORTANT: The password stored in the config file is lightly obscured,
- so it isn't immediately obvious what it is. It is in no way secure
- unless you use config file encryption.
- A long passphrase is recommended, or you can use a random one. Note that
- if you reconfigure rclone with the same passwords/passphrases elsewhere
- it will be compatible - all the secrets used are derived from those two
- passwords/passphrases.
- Note that rclone does not encrypt
- - file length - this can be calculated within 16 bytes
- - modification time - used for syncing
- Specifying the remote
- In normal use, make sure the remote has a : in it. If you specify the
- remote without a : then rclone will use a local directory of that name.
- So if you use a remote of /path/to/secret/files then rclone will encrypt
- stuff to that directory. If you use a remote of name then rclone will
- put files in a directory called name in the current directory.
- If you specify the remote as remote:path/to/dir then rclone will store
- encrypted files in path/to/dir on the remote. If you are using file name
- encryption, then when you save files to secret:subdir/subfile this will
- store them in the unencrypted path path/to/dir but the subdir/subpath
- bit will be encrypted.
- Note that unless you want encrypted bucket names (which are difficult to
- manage because you won't know what directory they represent in web
- interfaces etc), you should probably specify a bucket, eg
- remote:secretbucket when using bucket based remotes such as S3, Swift,
- Hubic, B2, GCS.
- Example
- To test I made a little directory of files using "standard" file name
- encryption.
- plaintext/
- ├── file0.txt
- ├── file1.txt
- └── subdir
- ├── file2.txt
- ├── file3.txt
- └── subsubdir
- └── file4.txt
- Copy these to the remote and list them back
- $ rclone -q copy plaintext secret:
- $ rclone -q ls secret:
- 7 file1.txt
- 6 file0.txt
- 8 subdir/file2.txt
- 10 subdir/subsubdir/file4.txt
- 9 subdir/file3.txt
- Now see what that looked like when encrypted
- $ rclone -q ls remote:path
- 55 hagjclgavj2mbiqm6u6cnjjqcg
- 54 v05749mltvv1tf4onltun46gls
- 57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
- 58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
- 56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps
- Note that this retains the directory structure which means you can do
- this
- $ rclone -q ls secret:subdir
- 8 file2.txt
- 9 file3.txt
- 10 subsubdir/file4.txt
- If you don't use file name encryption then the remote will look like this -
- note the .bin extensions added to prevent the cloud provider attempting
- to interpret the data.
- $ rclone -q ls remote:path
- 54 file0.txt.bin
- 57 subdir/file3.txt.bin
- 56 subdir/file2.txt.bin
- 58 subdir/subsubdir/file4.txt.bin
- 55 file1.txt.bin
- File name encryption modes
- Here are some of the features of the file name encryption modes
- Off
- - doesn't hide file names or directory structure
- - allows for longer file names (~246 characters)
- - can use sub paths and copy single files
- Standard
- - file names encrypted
- - file names can't be as long (~143 characters)
- - can use sub paths and copy single files
- - directory structure visible
- - identical file names will have identical uploaded names
- - can use shortcuts to shorten the directory recursion
- Obfuscation
- This is a simple "rotate" of the filename, with each file having a rot
- distance based on the filename. We store the distance at the beginning
- of the filename. So a file called "hello" may become "53.jgnnq"
- This is not a strong encryption of filenames, but it may stop automated
- scanning tools from picking up on filename patterns. As such it's an
- intermediate between "off" and "standard". The advantage is that it
- allows for longer path segment names.
- There is a possibility with some unicode based filenames that the
- obfuscation is weak and may map lower case characters to upper case
- equivalents. You can not rely on this for strong protection.
- - file names very lightly obfuscated
- - file names can be longer than standard encryption
- - can use sub paths and copy single files
- - directory structure visible
- - identical file names will have identical uploaded names
- Cloud storage systems have various limits on file name length and total
- path length which you are more likely to hit using "Standard" file name
- encryption. If you keep your file names to below 156 characters in
- length then you should be OK on all providers.
- There may be an even more secure file name encryption mode in the future
- which will address the long file name problem.
- Directory name encryption
- Crypt offers the option of encrypting dir names or leaving them intact.
- There are two options:
- True
- Encrypts the whole file path including directory names. Example:
- 1/12/123.txt is encrypted to
- p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0
- False
- Only encrypts file names, skips directory names. Example: 1/12/123.txt is
- encrypted to 1/12/qgm4avr35m5loi1th53ato71v0
- Modified time and hashes
- Crypt stores modification times using the underlying remote so support
- depends on that.
- Hashes are not stored for crypt. However the data integrity is protected
- by an extremely strong crypto authenticator.
- Note that you should use the rclone cryptcheck command to check the
- integrity of a crypted remote instead of rclone check which can't check
- the checksums properly.
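- For example, to verify the crypted remote from the example above against
- the local plaintext it was copied from:
- rclone cryptcheck plaintext secret: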
- Standard Options
- Here are the standard options specific to crypt (Encrypt/Decrypt a
- remote).
- --crypt-remote
- Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg
- "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not
- recommended).
- - Config: remote
- - Env Var: RCLONE_CRYPT_REMOTE
- - Type: string
- - Default: ""
- --crypt-filename-encryption
- How to encrypt the filenames.
- - Config: filename_encryption
- - Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
- - Type: string
- - Default: "standard"
- - Examples:
- - "off"
- - Don't encrypt the file names. Adds a ".bin" extension only.
- - "standard"
- - Encrypt the filenames see the docs for the details.
- - "obfuscate"
- - Very simple filename obfuscation.
- --crypt-directory-name-encryption
- Option to either encrypt directory names or leave them intact.
- - Config: directory_name_encryption
- - Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
- - Type: bool
- - Default: true
- - Examples:
- - "true"
- - Encrypt directory names.
- - "false"
- - Don't encrypt directory names, leave them intact.
- --crypt-password
- Password or pass phrase for encryption.
- - Config: password
- - Env Var: RCLONE_CRYPT_PASSWORD
- - Type: string
- - Default: ""
- --crypt-password2
- Password or pass phrase for salt. Optional but recommended. Should be
- different to the previous password.
- - Config: password2
- - Env Var: RCLONE_CRYPT_PASSWORD2
- - Type: string
- - Default: ""
- Advanced Options
- Here are the advanced options specific to crypt (Encrypt/Decrypt a
- remote).
- --crypt-show-mapping
- For all files listed show how the names encrypt.
- If this flag is set then for each file that the remote is asked to list,
- it will log (at level INFO) a line stating the decrypted file name and
- the encrypted file name.
- This is so you can work out which encrypted names are which decrypted
- names just in case you need to do something with the encrypted file
- names, or for debugging purposes.
- - Config: show_mapping
- - Env Var: RCLONE_CRYPT_SHOW_MAPPING
- - Type: bool
- - Default: false
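- For example, to log the mapping while listing the secret: remote from
- the examples above (the INFO lines are shown when running with -v):
- rclone ls -v --crypt-show-mapping secret: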
- Backing up a crypted remote
- If you wish to back up a crypted remote, it is recommended that you use
- rclone sync on the encrypted files, and make sure the passwords are the
- same in the new encrypted remote.
- This will have the following advantages
- - rclone sync will check the checksums while copying
- - you can use rclone check between the encrypted remotes
- - you don't decrypt and encrypt unnecessarily
- For example, let's say you have your original remote at remote: with the
- encrypted version at eremote: with path remote:crypt. You would then set
- up the new remote remote2: and then the encrypted version eremote2: with
- path remote2:crypt using the same passwords as eremote:.
- To sync the two remotes you would do
- rclone sync remote:crypt remote2:crypt
- And to check the integrity you would do
- rclone check remote:crypt remote2:crypt
- File formats
- File encryption
- Files are encrypted 1:1 source file to destination object. The file has
- a header and is divided into chunks.
- Header
- - 8 bytes magic string RCLONE\x00\x00
- - 24 bytes Nonce (IV)
- The initial nonce is generated from the operating system's cryptographically strong
- random number generator. The nonce is incremented for each chunk read
- making sure each nonce is unique for each block written. The chance of a
- nonce being re-used is minuscule. If you wrote an exabyte of data (10¹⁸
- bytes) you would have a probability of approximately 2×10⁻³² of re-using
- a nonce.
- Chunk
- Each chunk will contain 64kB of data, except for the last one which may
- have less data. The data chunk is in standard NACL secretbox format.
- Secretbox uses XSalsa20 and Poly1305 to encrypt and authenticate
- messages.
- Each chunk contains:
- - 16 Bytes of Poly1305 authenticator
- - 1 - 65536 bytes XSalsa20 encrypted data
- 64k chunk size was chosen as the best performing chunk size (the
- authenticator takes too much time below this and the performance drops
- off due to cache effects above this). Note that these chunks are
- buffered in memory so they can't be too big.
- This uses a 32 byte (256 bit key) key derived from the user password.
- Examples
- 1 byte file will encrypt to
- - 32 bytes header
- - 17 bytes data chunk
- 49 bytes total
- 1MB (1048576 bytes) file will encrypt to
- - 32 bytes header
- - 16 chunks of 65568 bytes
- 1049120 bytes total (a 0.05% overhead). This is the overhead for big
- files.
- Name encryption
- File names are encrypted segment by segment - the path is broken up into
- / separated strings and these are encrypted individually.
- File segments are padded using PKCS#7 to a multiple of 16 bytes
- before encryption.
- They are then encrypted with EME using AES with 256 bit key. EME
- (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003
- paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.
- This makes for deterministic encryption which is what we want - the same
- filename must encrypt to the same thing otherwise we can't find it on
- the cloud storage system.
- This means that
- - filenames with the same name will encrypt the same
- - filenames which start the same won't have a common prefix
- This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of
- which are derived from the user password.
- After encryption they are written out using a modified version of
- standard base32 encoding as described in RFC4648. The standard encoding
- is modified in two ways:
- - it becomes lower case (no-one likes upper case filenames!)
- - we strip the padding character =
- base32 is used rather than the more efficient base64 so rclone can be
- used on case insensitive remotes (eg Windows, Amazon Drive).
- Key derivation
- Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional
- user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key
- material required. If the user doesn't supply a salt then rclone uses an
- internal one.
- scrypt makes it impractical to mount a dictionary attack on rclone
- encrypted data. For full protection against this you should always use a
- salt.
- Dropbox
- Paths are specified as remote:path
- Dropbox paths may be as deep as required, eg
- remote:directory/subdirectory.
- The initial setup for dropbox involves getting a token from Dropbox
- which you need to do in your browser. rclone config walks you through
- it.
- Here is an example of how to make a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- n) New remote
- d) Delete remote
- q) Quit config
- e/n/d/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 7 / Google Drive
- \ "drive"
- 8 / Hubic
- \ "hubic"
- 9 / Local Disk
- \ "local"
- 10 / Microsoft OneDrive
- \ "onedrive"
- 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 12 / SSH/SFTP Connection
- \ "sftp"
- 13 / Yandex Disk
- \ "yandex"
- Storage> 4
- Dropbox App Key - leave blank normally.
- app_key>
- Dropbox App Secret - leave blank normally.
- app_secret>
- Remote config
- Please visit:
- https://www.dropbox.com/1/oauth2/authorize?client_id=XXXXXXXXXXXXXXX&response_type=code
- Enter the code: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXXXXXXXX
- --------------------
- [remote]
- app_key =
- app_secret =
- token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- You can then use it like this,
- List directories in top level of your dropbox
- rclone lsd remote:
- List all the files in your dropbox
- rclone ls remote:
- To copy a local directory to a dropbox directory called backup
- rclone copy /home/source remote:backup
- Dropbox for business
- Rclone supports Dropbox for business and Team Folders.
- When using Dropbox for business remote: and remote:path/to/file will
- refer to your personal folder.
- If you wish to see Team Folders you must use a leading / in the path, so
- rclone lsd remote:/ will refer to the root and show you all Team Folders
- and your User Folder.
- You can then use team folders like this remote:/TeamFolder and
- remote:/TeamFolder/path/to/file.
- A leading / for a Dropbox personal account will do nothing, but it will
- take an extra HTTP transaction so it should be avoided.
- Modified time and Hashes
- Dropbox supports modified times, but the only way to set a modification
- time is to re-upload the file.
- This means that if you uploaded your data with an older version of
- rclone which didn't support the v2 API and modified times, rclone will
- decide to upload all your old data to fix the modification times. If you
- don't want this to happen use --size-only or --checksum flag to stop it.
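- For example, to stop the re-upload you could run the sync ignoring
- modification times altogether:
- rclone sync --size-only /home/source remote:backup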
- Dropbox supports its own hash type which is checked for all transfers.
- Standard Options
- Here are the standard options specific to dropbox (Dropbox).
- --dropbox-client-id
- Dropbox App Client Id Leave blank normally.
- - Config: client_id
- - Env Var: RCLONE_DROPBOX_CLIENT_ID
- - Type: string
- - Default: ""
- --dropbox-client-secret
- Dropbox App Client Secret Leave blank normally.
- - Config: client_secret
- - Env Var: RCLONE_DROPBOX_CLIENT_SECRET
- - Type: string
- - Default: ""
- Advanced Options
- Here are the advanced options specific to dropbox (Dropbox).
- --dropbox-chunk-size
- Upload chunk size. (< 150M).
- Any files larger than this will be uploaded in chunks of this size.
- Note that chunks are buffered in memory (one at a time) so rclone can
- deal with retries. Setting this larger will increase the speed slightly
- (at most 10% for 128MB in tests) at the cost of using more memory. It
- can be set smaller if you are tight on memory.
- - Config: chunk_size
- - Env Var: RCLONE_DROPBOX_CHUNK_SIZE
- - Type: SizeSuffix
- - Default: 48M
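- For example, a sketch of an upload using the larger 128MB chunk size
- mentioned above, at the cost of more memory:
- rclone copy --dropbox-chunk-size 128M /home/source remote:backup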
- Limitations
- Note that Dropbox is case insensitive so you can't have a file called
- "Hello.doc" and one called "hello.doc".
- There are some file names such as thumbs.db which Dropbox can't store.
- There is a full list of them in the "Ignored Files" section of this
- document. Rclone will issue an error message
- File name disallowed - not uploading if it attempts to upload one of
- those file names, but the sync won't fail.
- If you have more than 10,000 files in a directory then
- rclone purge dropbox:dir will return the error
- Failed to purge: There are too many files involved in this operation. As
- a work-around do an rclone delete dropbox:dir followed by an
- rclone rmdir dropbox:dir.
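- In other words, assuming the directory in question is dropbox:dir:
- rclone delete dropbox:dir
- rclone rmdir dropbox:dir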
- FTP
- FTP is the File Transfer Protocol. FTP support is provided using the
- github.com/jlaffaye/ftp package.
- Here is an example of making an FTP configuration. First run
- rclone config
- This will guide you through an interactive setup process. An FTP remote
- only needs a host together with a username and a password. With an
- anonymous FTP server, you will need to use anonymous as the username
- and your email address as the password.
- No remotes found - make a new one
- n) New remote
- r) Rename remote
- c) Copy remote
- s) Set configuration password
- q) Quit config
- n/r/c/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / FTP Connection
- \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 8 / Google Drive
- \ "drive"
- 9 / Hubic
- \ "hubic"
- 10 / Local Disk
- \ "local"
- 11 / Microsoft OneDrive
- \ "onedrive"
- 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 13 / SSH/SFTP Connection
- \ "sftp"
- 14 / Yandex Disk
- \ "yandex"
- Storage> ftp
- FTP host to connect to
- Choose a number from below, or type in your own value
- 1 / Connect to ftp.example.com
- \ "ftp.example.com"
- host> ftp.example.com
- FTP username, leave blank for current username, ncw
- user>
- FTP port, leave blank to use default (21)
- port>
- FTP password
- y) Yes type in my own password
- g) Generate random password
- y/g> y
- Enter the password:
- password:
- Confirm the password:
- password:
- Remote config
- --------------------
- [remote]
- host = ftp.example.com
- user =
- port =
- pass = *** ENCRYPTED ***
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- This remote is called remote and can now be used like this
- See all directories in the home directory
- rclone lsd remote:
- Make a new directory
- rclone mkdir remote:path/to/directory
- List the contents of a directory
- rclone ls remote:path/to/directory
- Sync /home/local/directory to the remote directory, deleting any excess
- files in the directory.
- rclone sync /home/local/directory remote:directory
- Modified time
- FTP does not support modified times. Any times you see on the server
- will be time of upload.
- Checksums
- FTP does not support any checksums.
- Standard Options
- Here are the standard options specific to ftp (FTP Connection).
- --ftp-host
- FTP host to connect to
- - Config: host
- - Env Var: RCLONE_FTP_HOST
- - Type: string
- - Default: ""
- - Examples:
- - "ftp.example.com"
- - Connect to ftp.example.com
- --ftp-user
- FTP username, leave blank for current username, ncw
- - Config: user
- - Env Var: RCLONE_FTP_USER
- - Type: string
- - Default: ""
- --ftp-port
- FTP port, leave blank to use default (21)
- - Config: port
- - Env Var: RCLONE_FTP_PORT
- - Type: string
- - Default: ""
- --ftp-pass
- FTP password
- - Config: pass
- - Env Var: RCLONE_FTP_PASS
- - Type: string
- - Default: ""
- Limitations
- Note that since FTP isn't HTTP based the following flags don't work with
- it: --dump-headers, --dump-bodies, --dump-auth
- Note that --timeout isn't supported (but --contimeout is).
- Note that --bind isn't supported.
- FTP could support server side move but doesn't yet.
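- As --contimeout is supported, you can for example allow longer for the
- initial connection to a slow server (2m is a hypothetical value):
- rclone lsd --contimeout 2m remote: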
- Google Cloud Storage
- Paths are specified as remote:bucket (or remote: for the lsd command.)
- You may put subdirectories in too, eg remote:bucket/path/to/dir.
- The initial setup for google cloud storage involves getting a token from
- Google Cloud Storage which you need to do in your browser. rclone config
- walks you through it.
- Here is an example of how to make a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- n) New remote
- d) Delete remote
- q) Quit config
- e/n/d/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 7 / Google Drive
- \ "drive"
- 8 / Hubic
- \ "hubic"
- 9 / Local Disk
- \ "local"
- 10 / Microsoft OneDrive
- \ "onedrive"
- 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 12 / SSH/SFTP Connection
- \ "sftp"
- 13 / Yandex Disk
- \ "yandex"
- Storage> 6
- Google Application Client Id - leave blank normally.
- client_id>
- Google Application Client Secret - leave blank normally.
- client_secret>
- Project number optional - needed only for list/create/delete buckets - see your developer console.
- project_number> 12345678
- Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
- service_account_file>
- Access Control List for new objects.
- Choose a number from below, or type in your own value
- 1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
- \ "authenticatedRead"
- 2 / Object owner gets OWNER access, and project team owners get OWNER access.
- \ "bucketOwnerFullControl"
- 3 / Object owner gets OWNER access, and project team owners get READER access.
- \ "bucketOwnerRead"
- 4 / Object owner gets OWNER access [default if left blank].
- \ "private"
- 5 / Object owner gets OWNER access, and project team members get access according to their roles.
- \ "projectPrivate"
- 6 / Object owner gets OWNER access, and all Users get READER access.
- \ "publicRead"
- object_acl> 4
- Access Control List for new buckets.
- Choose a number from below, or type in your own value
- 1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
- \ "authenticatedRead"
- 2 / Project team owners get OWNER access [default if left blank].
- \ "private"
- 3 / Project team members get access according to their roles.
- \ "projectPrivate"
- 4 / Project team owners get OWNER access, and all Users get READER access.
- \ "publicRead"
- 5 / Project team owners get OWNER access, and all Users get WRITER access.
- \ "publicReadWrite"
- bucket_acl> 2
- Location for the newly created buckets.
- Choose a number from below, or type in your own value
- 1 / Empty for default location (US).
- \ ""
- 2 / Multi-regional location for Asia.
- \ "asia"
- 3 / Multi-regional location for Europe.
- \ "eu"
- 4 / Multi-regional location for United States.
- \ "us"
- 5 / Taiwan.
- \ "asia-east1"
- 6 / Tokyo.
- \ "asia-northeast1"
- 7 / Singapore.
- \ "asia-southeast1"
- 8 / Sydney.
- \ "australia-southeast1"
- 9 / Belgium.
- \ "europe-west1"
- 10 / London.
- \ "europe-west2"
- 11 / Iowa.
- \ "us-central1"
- 12 / South Carolina.
- \ "us-east1"
- 13 / Northern Virginia.
- \ "us-east4"
- 14 / Oregon.
- \ "us-west1"
- location> 12
- The storage class to use when storing objects in Google Cloud Storage.
- Choose a number from below, or type in your own value
- 1 / Default
- \ ""
- 2 / Multi-regional storage class
- \ "MULTI_REGIONAL"
- 3 / Regional storage class
- \ "REGIONAL"
- 4 / Nearline storage class
- \ "NEARLINE"
- 5 / Coldline storage class
- \ "COLDLINE"
- 6 / Durable reduced availability storage class
- \ "DURABLE_REDUCED_AVAILABILITY"
- storage_class> 5
- Remote config
- Use auto config?
- * Say Y if not sure
- * Say N if you are working on a remote or headless machine or Y didn't work
- y) Yes
- n) No
- y/n> y
- If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
- Log in and authorize rclone for access
- Waiting for code...
- Got code
- --------------------
- [remote]
- type = google cloud storage
- client_id =
- client_secret =
- token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
- project_number = 12345678
- object_acl = private
- bucket_acl = private
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- Note that rclone runs a webserver on your local machine to collect the
- token as returned from Google if you use auto config mode. This only
- runs from the moment it opens your browser to the moment you get back
- the verification code. This is on http://127.0.0.1:53682/ and it
- may require you to unblock it temporarily if you are running a host
- firewall, or use manual mode.
- This remote is called remote and can now be used like this
- See all the buckets in your project
- rclone lsd remote:
- Make a new bucket
- rclone mkdir remote:bucket
- List the contents of a bucket
- rclone ls remote:bucket
- Sync /home/local/directory to the remote bucket, deleting any excess
- files in the bucket.
- rclone sync /home/local/directory remote:bucket
- Service Account support
- You can set up rclone with Google Cloud Storage in an unattended mode,
- i.e. not tied to a specific end-user Google account. This is useful when
- you want to synchronise files onto machines that don't have actively
- logged-in users, for example build machines.
- To get credentials for Google Cloud Platform IAM Service Accounts,
- please head to the Service Account section of the Google Developer
- Console. Service Accounts behave just like normal User permissions in
- Google Cloud Storage ACLs, so you can limit their access (e.g. make them
- read only). After creating an account, a JSON file containing the
- Service Account's credentials will be downloaded onto your machines.
- These credentials are what rclone will use for authentication.
- To use a Service Account instead of OAuth2 token flow, enter the path to
- your Service Account credentials at the service_account_file prompt and
- rclone won't use the browser based authentication flow. If you'd rather
- stuff the contents of the credentials file into the rclone config file,
- you can set service_account_credentials with the actual contents of the
- file instead, or set the equivalent environment variable.
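- For example, a sketch of an unattended listing, assuming the
- credentials were downloaded to /path/to/credentials.json:
- rclone lsd --gcs-service-account-file /path/to/credentials.json remote: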
- --fast-list
- This remote supports --fast-list which allows you to use fewer
- transactions in exchange for more memory. See the rclone docs for more
- details.
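- For example:
- rclone sync --fast-list /home/local/directory remote:bucket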
- Modified time
- Google Cloud Storage stores md5sums natively and rclone stores
- modification times as metadata on the object, under the "mtime" key in
- RFC3339 format accurate to 1ns.
- Standard Options
- Here are the standard options specific to google cloud storage (Google
- Cloud Storage (this is not Google Drive)).
- --gcs-client-id
- Google Application Client Id Leave blank normally.
- - Config: client_id
- - Env Var: RCLONE_GCS_CLIENT_ID
- - Type: string
- - Default: ""
- --gcs-client-secret
- Google Application Client Secret Leave blank normally.
- - Config: client_secret
- - Env Var: RCLONE_GCS_CLIENT_SECRET
- - Type: string
- - Default: ""
- --gcs-project-number
- Project number. Optional - needed only for list/create/delete buckets -
- see your developer console.
- - Config: project_number
- - Env Var: RCLONE_GCS_PROJECT_NUMBER
- - Type: string
- - Default: ""
- --gcs-service-account-file
- Service Account Credentials JSON file path Leave blank normally. Needed
- only if you want to use SA instead of interactive login.
- - Config: service_account_file
- - Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
- - Type: string
- - Default: ""
- --gcs-service-account-credentials
- Service Account Credentials JSON blob Leave blank normally. Needed only
- if you want to use SA instead of interactive login.
- - Config: service_account_credentials
- - Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
- - Type: string
- - Default: ""
- --gcs-object-acl
- Access Control List for new objects.
- - Config: object_acl
- - Env Var: RCLONE_GCS_OBJECT_ACL
- - Type: string
- - Default: ""
- - Examples:
- - "authenticatedRead"
- - Object owner gets OWNER access, and all Authenticated Users
- get READER access.
- - "bucketOwnerFullControl"
- - Object owner gets OWNER access, and project team owners get
- OWNER access.
- - "bucketOwnerRead"
- - Object owner gets OWNER access, and project team owners get
- READER access.
- - "private"
- - Object owner gets OWNER access [default if left blank].
- - "projectPrivate"
- - Object owner gets OWNER access, and project team members get
- access according to their roles.
- - "publicRead"
- - Object owner gets OWNER access, and all Users get READER
- access.
- --gcs-bucket-acl
- Access Control List for new buckets.
- - Config: bucket_acl
- - Env Var: RCLONE_GCS_BUCKET_ACL
- - Type: string
- - Default: ""
- - Examples:
- - "authenticatedRead"
- - Project team owners get OWNER access, and all Authenticated
- Users get READER access.
- - "private"
- - Project team owners get OWNER access [default if left
- blank].
- - "projectPrivate"
- - Project team members get access according to their roles.
- - "publicRead"
- - Project team owners get OWNER access, and all Users get
- READER access.
- - "publicReadWrite"
- - Project team owners get OWNER access, and all Users get
- WRITER access.
- --gcs-location
- Location for the newly created buckets.
- - Config: location
- - Env Var: RCLONE_GCS_LOCATION
- - Type: string
- - Default: ""
- - Examples:
- - ""
- - Empty for default location (US).
- - "asia"
- - Multi-regional location for Asia.
- - "eu"
- - Multi-regional location for Europe.
- - "us"
- - Multi-regional location for United States.
- - "asia-east1"
- - Taiwan.
- - "asia-northeast1"
- - Tokyo.
- - "asia-southeast1"
- - Singapore.
- - "australia-southeast1"
- - Sydney.
- - "europe-west1"
- - Belgium.
- - "europe-west2"
- - London.
- - "us-central1"
- - Iowa.
- - "us-east1"
- - South Carolina.
- - "us-east4"
- - Northern Virginia.
- - "us-west1"
- - Oregon.
- --gcs-storage-class
- The storage class to use when storing objects in Google Cloud Storage.
- - Config: storage_class
- - Env Var: RCLONE_GCS_STORAGE_CLASS
- - Type: string
- - Default: ""
- - Examples:
- - ""
- - Default
- - "MULTI_REGIONAL"
- - Multi-regional storage class
- - "REGIONAL"
- - Regional storage class
- - "NEARLINE"
- - Nearline storage class
- - "COLDLINE"
- - Coldline storage class
- - "DURABLE_REDUCED_AVAILABILITY"
- - Durable reduced availability storage class
- Google Drive
- Paths are specified as drive:path
- Drive paths may be as deep as required, eg drive:directory/subdirectory.
- The initial setup for drive involves getting a token from Google drive
- which you need to do in your browser. rclone config walks you through
- it.
- Here is an example of how to make a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- No remotes found - make a new one
- n) New remote
- r) Rename remote
- c) Copy remote
- s) Set configuration password
- q) Quit config
- n/r/c/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- [snip]
- 10 / Google Drive
- \ "drive"
- [snip]
- Storage> drive
- Google Application Client Id - leave blank normally.
- client_id>
- Google Application Client Secret - leave blank normally.
- client_secret>
- Scope that rclone should use when requesting access from drive.
- Choose a number from below, or type in your own value
- 1 / Full access all files, excluding Application Data Folder.
- \ "drive"
- 2 / Read-only access to file metadata and file contents.
- \ "drive.readonly"
- / Access to files created by rclone only.
- 3 | These are visible in the drive website.
- | File authorization is revoked when the user deauthorizes the app.
- \ "drive.file"
- / Allows read and write access to the Application Data folder.
- 4 | This is not visible in the drive website.
- \ "drive.appfolder"
- / Allows read-only access to file metadata but
- 5 | does not allow any access to read or download file content.
- \ "drive.metadata.readonly"
- scope> 1
- ID of the root folder - leave blank normally. Fill in to access "Computers" folders. (see docs).
- root_folder_id>
- Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
- service_account_file>
- Remote config
- Use auto config?
- * Say Y if not sure
- * Say N if you are working on a remote or headless machine or Y didn't work
- y) Yes
- n) No
- y/n> y
- If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
- Log in and authorize rclone for access
- Waiting for code...
- Got code
- Configure this as a team drive?
- y) Yes
- n) No
- y/n> n
- --------------------
- [remote]
- client_id =
- client_secret =
- scope = drive
- root_folder_id =
- service_account_file =
- token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2014-03-16T13:57:58.955387075Z"}
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- Note that rclone runs a webserver on your local machine to collect the
- token as returned from Google if you use auto config mode. This only
- runs from the moment it opens your browser to the moment you get back
- the verification code. This is on http://127.0.0.1:53682/ and it
- may require you to unblock it temporarily if you are running a host
- firewall, or use manual mode.
- You can then use it like this,
- List directories in top level of your drive
- rclone lsd remote:
- List all the files in your drive
- rclone ls remote:
- To copy a local directory to a drive directory called backup
- rclone copy /home/source remote:backup
- Scopes
- Rclone allows you to select which scope you would like for rclone to
- use. This changes what type of token is granted to rclone. The scopes
- are defined here.
- The scopes are
- drive
- This is the default scope and allows full access to all files, except
- for the Application Data Folder (see below).
- Choose this one if you aren't sure.
- drive.readonly
- This allows read only access to all files. Files may be listed and
- downloaded but not uploaded, renamed or deleted.
- drive.file
- With this scope rclone can read/view/modify only those files and folders
- it creates.
- So if you uploaded files to drive via the web interface (or any other
- means) they will not be visible to rclone.
- This can be useful if you are using rclone to backup data and you want
- to be sure confidential data on your drive is not visible to rclone.
- Files created with this scope are visible in the web interface.
- drive.appfolder
- This gives rclone its own private area to store files. Rclone will not
- be able to see any other files on your drive and you won't be able to
- see rclone's files from the web interface either.
- drive.metadata.readonly
- This allows read only access to file names only. It does not allow
- rclone to download or upload data, or rename or delete files or
- directories.
- Root folder ID
- You can set the root_folder_id for rclone. This is the directory
- (identified by its Folder ID) that rclone considers to be the root of
- your drive.
- Normally you will leave this blank and rclone will determine the correct
- root to use itself.
- However you can set this to restrict rclone to a specific folder
- hierarchy or to access data within the "Computers" tab on the drive web
- interface (where files from Google's Backup and Sync desktop program
- go).
- In order to do this you will have to find the Folder ID of the directory
- you wish rclone to display. This will be the last segment of the URL
- when you open the relevant folder in the drive web interface.
- So if the folder you want rclone to use has a URL which looks like
- https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh
- in the browser, then you use 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh as the
- root_folder_id in the config.
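- A minimal sketch of the resulting config section (reusing the
- placeholder ID from above; your other settings will differ):
- [remote]
- type = drive
- scope = drive
- root_folder_id = 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh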
- NB folders under the "Computers" tab seem to be read only (drive gives a
- 500 error) when using rclone.
- There doesn't appear to be an API to discover the folder IDs of the
- "Computers" tab - please contact us if you know otherwise!
- Note also that rclone can't access any data under the "Backups" tab on
- the google drive web interface yet.
- Service Account support
- You can set up rclone with Google Drive in an unattended mode, i.e. not
- tied to a specific end-user Google account. This is useful when you want
- to synchronise files onto machines that don't have actively logged-in
- users, for example build machines.
- To use a Service Account instead of OAuth2 token flow, enter the path to
- your Service Account credentials at the service_account_file prompt
- during rclone config and rclone won't use the browser based
- authentication flow. If you'd rather stuff the contents of the
- credentials file into the rclone config file, you can set
- service_account_credentials with the actual contents of the file
- instead, or set the equivalent environment variable.
- Use case - Google Apps/G-suite account and individual Drive
- Let's say that you are the administrator of a Google Apps (old) or
- G-suite account. The goal is to store data on an individual's Drive
- account, who IS a member of the domain. We'll call the domain
- EXAMPLE.COM, and the user FOO@EXAMPLE.COM.
- There are a few steps we need to go through to accomplish this:
- 1. Create a service account for example.com
- - To create a service account and obtain its credentials, go to the
- Google Developer Console.
- - You must have a project - create one if you don't.
- - Then go to "IAM & admin" -> "Service Accounts".
- - Use the "Create Credentials" button. Fill in "Service account name"
- with something that identifies your client. "Role" can be empty.
- - Tick "Furnish a new private key" - select "Key type JSON".
- - Tick "Enable G Suite Domain-wide Delegation". This option makes
- "impersonation" possible, as documented here: Delegating domain-wide
- authority to the service account
- - These credentials are what rclone will use for authentication. If
- you ever need to remove access, press the "Delete service account
- key" button.
- 2. Allowing API access to example.com Google Drive
- - Go to example.com's admin console
- - Go into "Security" (or use the search bar)
- - Select "Show more" and then "Advanced settings"
- - Select "Manage API client access" in the "Authentication" section
- - In the "Client Name" field enter the service account's "Client ID" -
- this can be found in the Developer Console under "IAM & Admin" ->
- "Service Accounts", then "View Client ID" for the newly created
- service account. It is a ~21 character numerical string.
- - In the next field, "One or More API Scopes", enter
- https://www.googleapis.com/auth/drive to grant access to Google
- Drive specifically.
- 3. Configure rclone, assuming a new install
- rclone config
- n/s/q> n # New
- name>gdrive # Gdrive is an example name
- Storage> # Select the number shown for Google Drive
- client_id> # Can be left blank
- client_secret> # Can be left blank
- scope> # Select your scope, 1 for example
- root_folder_id> # Can be left blank
- service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
- y/n> # Auto config, y
- 4. Verify that it's working
- - rclone -v --drive-impersonate foo@example.com lsf gdrive:backup
- - The arguments do:
- - -v - verbose logging
- - --drive-impersonate foo@example.com - this is what does the
- magic, pretending to be user foo.
- - lsf - list files in a parsing friendly way
- - gdrive:backup - use the remote called gdrive, work in the folder
- named backup.
- Team drives
- If you want to configure the remote to point to a Google Team Drive then
- answer y to the question Configure this as a team drive?.
- This will fetch the list of Team Drives from google and allow you to
- configure which one you want to use. You can also type in a team drive
- ID if you prefer.
- For example:
- Configure this as a team drive?
- y) Yes
- n) No
- y/n> y
- Fetching team drive list...
- Choose a number from below, or type in your own value
- 1 / Rclone Test
- \ "xxxxxxxxxxxxxxxxxxxx"
- 2 / Rclone Test 2
- \ "yyyyyyyyyyyyyyyyyyyy"
- 3 / Rclone Test 3
- \ "zzzzzzzzzzzzzzzzzzzz"
- Enter a Team Drive ID> 1
- --------------------
- [remote]
- client_id =
- client_secret =
- token = {"AccessToken":"xxxx.x.xxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
- team_drive = xxxxxxxxxxxxxxxxxxxx
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- --fast-list
- This remote supports --fast-list which allows you to use fewer
- transactions in exchange for more memory. See the rclone docs for more
- details.
- It does this by combining multiple list calls into a single API request.
- This works by combining many '%s' in parents filters into one
- expression. To list the contents of directories a, b and c, the
- following requests will be sent by the regular List function:
- trashed=false and 'a' in parents
- trashed=false and 'b' in parents
- trashed=false and 'c' in parents
- These can now be combined into a single request:
- trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)
- The implementation of ListR will put up to 50 parents filters into one
- request. It will use the --checkers value to specify the number of
- requests to run in parallel.
- In tests, these batch requests were up to 20x faster than the regular
- method. Running the following command against different sized folders
- gives:
- rclone lsjson -vv -R --checkers=6 gdrive:folder
- small folder (220 directories, 700 files):
- - without --fast-list: 38s
- - with --fast-list: 10s
- large folder (10600 directories, 39000 files):
- - without --fast-list: 22:05 min
- - with --fast-list: 58s
- Modified time
- Google drive stores modification times accurate to 1 ms.
- Revisions
- Google drive stores revisions of files. When you upload a change to an
- existing file to google drive using rclone it will create a new revision
- of that file.
- Revisions follow the standard google policy which at time of writing was
- They are deleted after 30 days or 100 revisions (whichever comes
- first).
- - They do not count towards a user storage quota.
- Deleting files
- By default rclone will send all files to the trash when deleting files.
- If deleting them permanently is required then use the
- --drive-use-trash=false flag, or set the equivalent environment
- variable.
- Emptying trash
- If you wish to empty your trash you can use the rclone cleanup remote:
- command which will permanently delete all your trashed files. This
- command does not take any path arguments.
- Quota information
- To view your current quota you can use the rclone about remote: command
- which will display your usage limit (quota), the usage in Google Drive,
- the size of all files in the Trash and the space used by other Google
- services such as Gmail. This command does not take any path arguments.
- Import/Export of google documents
- Google documents can be exported from and uploaded to Google Drive.
- When rclone downloads a Google doc it chooses a format to download
- depending upon the --drive-export-formats setting. By default the export
- formats are docx,xlsx,pptx,svg which are a sensible default for an
- editable document.
- When choosing a format, rclone runs down the list provided in order and
- chooses the first file format the doc can be exported as from the list.
- If the file can't be exported to a format on the formats list, then
- rclone will choose a format from the default list.
- If you prefer an archive copy then you might use
- --drive-export-formats pdf, or if you prefer openoffice/libreoffice
- formats you might use --drive-export-formats ods,odt,odp.
- Note that rclone adds the extension to the google doc, so if it is
- called My Spreadsheet on google docs, it will be exported as
- My Spreadsheet.xlsx or My Spreadsheet.pdf etc.
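- For example, to keep PDF archive copies of your documents when copying
- them down (/home/archive is a hypothetical destination):
- rclone copy --drive-export-formats pdf remote:backup /home/archive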
- When importing files into Google Drive, rclone will convert all files
- with an extension in --drive-import-formats to their associated document
- type. rclone will not convert any files by default, since the conversion
- is a lossy process.
- The conversion must result in a file with the same extension when the
- --drive-export-formats rules are applied to the uploaded document.
- Here are some examples for allowed and prohibited conversions.
- export-formats import-formats Upload Ext Document Ext Allowed
- ---------------- ---------------- ------------ -------------- ---------
- odt odt odt odt Yes
- odt docx,odt odt odt Yes
- docx docx docx Yes
- odt odt docx No
- odt,docx docx,odt docx odt No
- docx,odt docx,odt docx docx Yes
- docx,odt docx,odt odt docx No
- This limitation can be disabled by specifying
- --drive-allow-import-name-change. When using this flag, rclone can
- convert multiple files types resulting in the same document type at
- once, eg with --drive-import-formats docx,odt,txt, all files having
- these extensions would result in a document represented as a docx file.
- This brings the additional risk of overwriting a document, if multiple
- files have the same stem. Many rclone operations will not handle this
- name change in any way. They assume an equal name when copying files and
- might copy the file again or delete it when the name changes.
- Here are the possible export extensions with their corresponding mime
- types. Most of these can also be used for importing, but there are more that
- are not listed here. Some of these additional ones might only be
- available when the operating system provides the correct MIME type
- entries.
- This list can be changed by Google Drive at any time and might not
- represent the currently available conversions.
- Extension Mime Type Description
- --------------------------------------------------------------------------- ------------------------------------------------------------------------------------------ -------------------------------------------------------------------------------------------------
- csv text/csv Standard CSV format for Spreadsheets
- docx application/vnd.openxmlformats-officedocument.wordprocessingml.document Microsoft Office Document
- epub application/epub+zip E-book format
- html text/html An HTML Document
- jpg image/jpeg A JPEG Image File
- json application/vnd.google-apps.script+json JSON Text Format
- odp application/vnd.oasis.opendocument.presentation Openoffice Presentation
- ods application/vnd.oasis.opendocument.spreadsheet Openoffice Spreadsheet
- ods application/x-vnd.oasis.opendocument.spreadsheet Openoffice Spreadsheet
- odt application/vnd.oasis.opendocument.text Openoffice Document
- pdf application/pdf Adobe PDF Format
- png image/png PNG Image Format
- pptx application/vnd.openxmlformats-officedocument.presentationml.presentation Microsoft Office Powerpoint
- rtf application/rtf Rich Text Format
- svg image/svg+xml Scalable Vector Graphics Format
- tsv text/tab-separated-values Standard TSV format for spreadsheets
- txt text/plain Plain Text
- xlsx application/vnd.openxmlformats-officedocument.spreadsheetml.sheet Microsoft Office Spreadsheet
- zip application/zip A ZIP file of HTML, images and CSS
- Google documents can also be exported as link files. These files will
- open a browser window for the Google Docs website of that document when
- opened. The link file extension has to be specified as a
- --drive-export-formats parameter. They will match all available Google
- Documents.
- Extension Description OS Support
- ----------- ----------------------------------------- ----------------
- desktop freedesktop.org specified desktop entry Linux
- link.html An HTML Document with a redirect All
- url INI style link file macOS, Windows
- webloc macOS specific XML format macOS
- Standard Options
- Here are the standard options specific to drive (Google Drive).
- --drive-client-id
- Google Application Client Id Leave blank normally.
- - Config: client_id
- - Env Var: RCLONE_DRIVE_CLIENT_ID
- - Type: string
- - Default: ""
- --drive-client-secret
- Google Application Client Secret Leave blank normally.
- - Config: client_secret
- - Env Var: RCLONE_DRIVE_CLIENT_SECRET
- - Type: string
- - Default: ""
- --drive-scope
- Scope that rclone should use when requesting access from drive.
- - Config: scope
- - Env Var: RCLONE_DRIVE_SCOPE
- - Type: string
- - Default: ""
- - Examples:
- - "drive"
- - Full access all files, excluding Application Data Folder.
- - "drive.readonly"
- - Read-only access to file metadata and file contents.
- - "drive.file"
- - Access to files created by rclone only.
- - These are visible in the drive website.
- - File authorization is revoked when the user deauthorizes the
- app.
- - "drive.appfolder"
- - Allows read and write access to the Application Data folder.
- - This is not visible in the drive website.
- - "drive.metadata.readonly"
- - Allows read-only access to file metadata but
- - does not allow any access to read or download file content.
- --drive-root-folder-id
- ID of the root folder Leave blank normally. Fill in to access
- "Computers" folders. (see docs).
- - Config: root_folder_id
- - Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
- - Type: string
- - Default: ""
- --drive-service-account-file
- Service Account Credentials JSON file path Leave blank normally. Needed
- only if you want to use SA instead of interactive login.
- - Config: service_account_file
- - Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
- - Type: string
- - Default: ""
- Advanced Options
- Here are the advanced options specific to drive (Google Drive).
- --drive-service-account-credentials
- Service Account Credentials JSON blob Leave blank normally. Needed only
- if you want to use SA instead of interactive login.
- - Config: service_account_credentials
- - Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
- - Type: string
- - Default: ""
- --drive-team-drive
- ID of the Team Drive
- - Config: team_drive
- - Env Var: RCLONE_DRIVE_TEAM_DRIVE
- - Type: string
- - Default: ""
- --drive-auth-owner-only
- Only consider files owned by the authenticated user.
- - Config: auth_owner_only
- - Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
- - Type: bool
- - Default: false
- --drive-use-trash
- Send files to the trash instead of deleting permanently. Defaults to
- true, namely sending files to the trash. Use --drive-use-trash=false to
- delete files permanently instead.
- - Config: use_trash
- - Env Var: RCLONE_DRIVE_USE_TRASH
- - Type: bool
- - Default: true
- --drive-skip-gdocs
- Skip google documents in all listings. If given, gdocs practically
- become invisible to rclone.
- - Config: skip_gdocs
- - Env Var: RCLONE_DRIVE_SKIP_GDOCS
- - Type: bool
- - Default: false
- --drive-shared-with-me
- Only show files that are shared with me.
- Instructs rclone to operate on your "Shared with me" folder (where
- Google Drive lets you access the files and folders others have shared
- with you).
- This works both with the "list" (lsd, lsl, etc) and the "copy" commands
- (copy, sync, etc), and with all other commands too.
- - Config: shared_with_me
- - Env Var: RCLONE_DRIVE_SHARED_WITH_ME
- - Type: bool
- - Default: false
- --drive-trashed-only
- Only show files that are in the trash. This will show trashed files in
- their original directory structure.
- - Config: trashed_only
- - Env Var: RCLONE_DRIVE_TRASHED_ONLY
- - Type: bool
- - Default: false
- --drive-formats
- Deprecated: see export_formats
- - Config: formats
- - Env Var: RCLONE_DRIVE_FORMATS
- - Type: string
- - Default: ""
- --drive-export-formats
- Comma separated list of preferred formats for downloading Google docs.
- - Config: export_formats
- - Env Var: RCLONE_DRIVE_EXPORT_FORMATS
- - Type: string
- - Default: "docx,xlsx,pptx,svg"
- --drive-import-formats
- Comma separated list of preferred formats for uploading Google docs.
- - Config: import_formats
- - Env Var: RCLONE_DRIVE_IMPORT_FORMATS
- - Type: string
- - Default: ""
- --drive-allow-import-name-change
- Allow the filetype to change when uploading Google docs (e.g. file.doc
- to file.docx). This will confuse sync and reupload every time.
- - Config: allow_import_name_change
- - Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
- - Type: bool
- - Default: false
- --drive-use-created-date
- Use file created date instead of modified date.
- Useful when downloading data and you want the creation date used in
- place of the last modified date.
- WARNING: This flag may have some unexpected consequences.
- When uploading to your drive all files will be overwritten unless they
- haven't been modified since their creation. And the inverse will occur
- while downloading. This side effect can be avoided by using the
- "--checksum" flag.
- This feature was implemented to retain the capture date of photos as
- recorded by google photos. You will first need to check the "Create a
- Google Photos folder" option in your google drive settings. You can
- then copy or move the photos locally, with the date the image was taken
- (created) set as the modification date.
- - Config: use_created_date
- - Env Var: RCLONE_DRIVE_USE_CREATED_DATE
- - Type: bool
- - Default: false
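- For example, a sketch of downloading photos with their capture dates,
- using the --checksum flag as suggested above (remote:photos is a
- hypothetical path):
- rclone copy -v --checksum --drive-use-created-date remote:photos /home/photos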
- --drive-list-chunk
- Size of listing chunk 100-1000. 0 to disable.
- - Config: list_chunk
- - Env Var: RCLONE_DRIVE_LIST_CHUNK
- - Type: int
- - Default: 1000
- --drive-impersonate
- Impersonate this user when using a service account.
- - Config: impersonate
- - Env Var: RCLONE_DRIVE_IMPERSONATE
- - Type: string
- - Default: ""
- --drive-alternate-export
- Use alternate export URLs for google documents export.
- If this option is set this instructs rclone to use an alternate set of
- export URLs for drive documents. Users have reported that the official
- export URLs can't export large documents, whereas these unofficial ones
- can.
- See rclone issue #2243 for background, this google drive issue and this
- helpful post.
- - Config: alternate_export
- - Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
- - Type: bool
- - Default: false
- --drive-upload-cutoff
- Cutoff for switching to chunked upload
- - Config: upload_cutoff
- - Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
- - Type: SizeSuffix
- - Default: 8M
- --drive-chunk-size
- Upload chunk size. Must be a power of 2 >= 256k.
- Making this larger will improve performance, but note that each chunk
- is buffered in memory, one per transfer.
- Reducing this will reduce memory usage but decrease performance.
- - Config: chunk_size
- - Env Var: RCLONE_DRIVE_CHUNK_SIZE
- - Type: SizeSuffix
- - Default: 8M
- --drive-acknowledge-abuse
- Set to allow files which return cannotDownloadAbusiveFile to be
- downloaded.
- If downloading a file returns the error "This file has been identified
- as malware or spam and cannot be downloaded" with the error code
- "cannotDownloadAbusiveFile" then supply this flag to rclone to indicate
- you acknowledge the risks of downloading the file and rclone will
- download it anyway.
- - Config: acknowledge_abuse
- - Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
- - Type: bool
- - Default: false
- --drive-keep-revision-forever
- Keep new head revision of each file forever.
- - Config: keep_revision_forever
- - Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
- - Type: bool
- - Default: false
- --drive-v2-download-min-size
- If objects are greater than this, use the drive v2 API to download.
- - Config: v2_download_min_size
- - Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
- - Type: SizeSuffix
- - Default: off
- Limitations
- Drive has quite a lot of rate limiting. This causes rclone to be limited
- to transferring about 2 files per second only. Individual files may be
- transferred much faster at 100s of MBytes/s but lots of small files can
- take a long time.
- Server side copies are also subject to a separate rate limit. If you see
- User rate limit exceeded errors, wait at least 24 hours and retry. You
- can disable server side copies with --disable copy to download and
- upload the files if you prefer.
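- For example, a sketch of a copy with server side copies disabled
- (directory names are hypothetical):
- rclone copy --disable copy remote:dir1 remote:dir2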
- Limitations of Google Docs
- Google docs will appear as size -1 in rclone ls and as size 0 in
- anything which uses the VFS layer, eg rclone mount, rclone serve.
- This is because rclone can't find out the size of the Google docs
- without downloading them.
- Google docs will transfer correctly with rclone sync, rclone copy etc as
- rclone knows to ignore the size when doing the transfer.
- However an unfortunate consequence of this is that you can't download
- Google docs using rclone mount - you will get a 0 sized file. If you try
- again the doc may gain its correct size and be downloadable.
- Duplicated files
- Sometimes, for no reason I've been able to track down, drive will
- duplicate a file that rclone uploads. Drive, unlike all the other remotes,
- can have duplicated files.
- Duplicated files cause problems with the syncing and you will see
- messages in the log about duplicates.
- Use rclone dedupe to fix duplicated files.
- Note that this isn't just a problem with rclone, even Google Photos on
- Android duplicates files on drive sometimes.
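- For example (dedupe is interactive by default; remote:path is wherever
- you saw the duplicates):
- rclone dedupe remote:path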
- Rclone appears to be re-copying files it shouldn't
- The most likely cause of this is the duplicated file issue above - run
- rclone dedupe and check your logs for duplicate object or directory
- messages.
- Making your own client_id
- When you use rclone with Google drive in its default configuration you
- are using rclone's client_id. This is shared between all the rclone
- users. There is a global rate limit on the number of queries per second
- that each client_id can do set by Google. rclone already has a high
- quota and I will continue to make sure it is high enough by contacting
- Google.
- However you might find you get better performance making your own
- client_id if you are a heavy user. Or you may not depending on exactly
- how Google have been raising rclone's rate limit.
- Here is how to create your own Google Drive client ID for rclone:
- 1. Log into the Google API Console with your Google account. It doesn't
- matter what Google account you use. (It need not be the same account
- as the Google Drive you want to access)
- 2. Select a project or create a new project.
- 3. Under "ENABLE APIS AND SERVICES" search for "Drive", and then
- enable the "Google Drive API".
- 4. Click "Credentials" in the left-side panel (not "Create
- credentials", which opens the wizard), then "Create credentials",
- then "OAuth client ID". It will prompt you to set the OAuth consent
- screen product name, if you haven't set one already.
- 5. Choose an application type of "other", and click "Create" (the
- default name is fine).
- 6. It will show you a client ID and client secret. Use these values in
- rclone config to add a new remote or edit an existing remote.
- (Thanks to @balazer on github for these instructions.)
- HTTP
- The HTTP remote is a read only remote for reading files from a webserver.
- The webserver should provide file listings which rclone will read and
- turn into a remote. This has been tested with common webservers such as
- Apache/Nginx/Caddy and will likely work with file listings from most web
- servers. (If it doesn't then please file an issue, or send a pull
- request!)
- Paths are specified as remote: or remote:path/to/dir.
- Here is an example of how to make a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / FTP Connection
- \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 8 / Google Drive
- \ "drive"
- 9 / Hubic
- \ "hubic"
- 10 / Local Disk
- \ "local"
- 11 / Microsoft OneDrive
- \ "onedrive"
- 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 13 / SSH/SFTP Connection
- \ "sftp"
- 14 / Yandex Disk
- \ "yandex"
- 15 / http Connection
- \ "http"
- Storage> http
- URL of http host to connect to
- Choose a number from below, or type in your own value
- 1 / Connect to example.com
- \ "https://example.com"
- url> https://beta.rclone.org
- Remote config
- --------------------
- [remote]
- url = https://beta.rclone.org
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- Current remotes:
- Name Type
- ==== ====
- remote http
- e) Edit existing remote
- n) New remote
- d) Delete remote
- r) Rename remote
- c) Copy remote
- s) Set configuration password
- q) Quit config
- e/n/d/r/c/s/q> q
- This remote is called remote and can now be used like this
- See all the top level directories
- rclone lsd remote:
- List the contents of a directory
- rclone ls remote:directory
- Sync the remote directory to /home/local/directory, deleting any excess
- files.
- rclone sync remote:directory /home/local/directory
- Read only
- This remote is read only - you can't upload files to an HTTP server.
- Modified time
- Most HTTP servers store time accurate to 1 second.
- Checksum
- No checksums are stored.
- Usage without a config file
- Since the http remote only has one config parameter it is easy to use
- without a config file:
- rclone lsd --http-url https://beta.rclone.org :http:
- Standard Options
- Here are the standard options specific to http (http Connection).
- --http-url
- URL of http host to connect to
- - Config: url
- - Env Var: RCLONE_HTTP_URL
- - Type: string
- - Default: ""
- - Examples:
- - "https://example.com"
- - Connect to example.com
- Hubic
- Paths are specified as remote:container (or remote: for the lsd
- command.) You may put subdirectories in too, eg
- remote:container/path/to/dir.
- The initial setup for Hubic involves getting a token from Hubic which
- you need to do in your browser. rclone config walks you through it.
- Here is an example of how to make a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- n) New remote
- s) Set configuration password
- n/s> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 7 / Google Drive
- \ "drive"
- 8 / Hubic
- \ "hubic"
- 9 / Local Disk
- \ "local"
- 10 / Microsoft OneDrive
- \ "onedrive"
- 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 12 / SSH/SFTP Connection
- \ "sftp"
- 13 / Yandex Disk
- \ "yandex"
- Storage> 8
- Hubic Client Id - leave blank normally.
- client_id>
- Hubic Client Secret - leave blank normally.
- client_secret>
- Remote config
- Use auto config?
- * Say Y if not sure
- * Say N if you are working on a remote or headless machine
- y) Yes
- n) No
- y/n> y
- If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
- Log in and authorize rclone for access
- Waiting for code...
- Got code
- --------------------
- [remote]
- client_id =
- client_secret =
- token = {"access_token":"XXXXXX"}
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- See the remote setup docs for how to set it up on a machine with no
- Internet browser available.
- Note that rclone runs a webserver on your local machine to collect the
- token as returned from Hubic. This only runs from the moment it opens
- your browser to the moment you get back the verification code. This is
- on http://127.0.0.1:53682/ and you may need to unblock it temporarily
- if you are running a host firewall.
- Once configured you can then use rclone like this,
- List containers in the top level of your Hubic
- rclone lsd remote:
- List all the files in your Hubic
- rclone ls remote:
- To copy a local directory to a Hubic directory called backup
- rclone copy /home/source remote:backup
- If you want the directory to be visible in the official Hubic browser,
- you need to copy your files to the default directory
- rclone copy /home/source remote:default/backup
- --fast-list
- This remote supports --fast-list which allows you to use fewer
- transactions in exchange for more memory. See the rclone docs for more
- details.
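- For example, to list all the files while using fewer transactions (a
- minimal sketch):
- rclone ls --fast-list remote: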
- Modified time
- The modified time is stored as metadata on the object as
- X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
- This is a defacto standard (used in the official python-swiftclient
- amongst others) for storing the modification time for an object.
- Note that Hubic wraps the Swift backend, so most of its properties are
- the same as the Swift backend's.
- Standard Options
- Here are the standard options specific to hubic (Hubic).
- --hubic-client-id
- Hubic Client Id Leave blank normally.
- - Config: client_id
- - Env Var: RCLONE_HUBIC_CLIENT_ID
- - Type: string
- - Default: ""
- --hubic-client-secret
- Hubic Client Secret Leave blank normally.
- - Config: client_secret
- - Env Var: RCLONE_HUBIC_CLIENT_SECRET
- - Type: string
- - Default: ""
- Advanced Options
- Here are the advanced options specific to hubic (Hubic).
- --hubic-chunk-size
- Above this size files will be chunked into a _segments container. The
- default for this is 5GB which is its maximum value.
- - Config: chunk_size
- - Env Var: RCLONE_HUBIC_CHUNK_SIZE
- - Type: SizeSuffix
- - Default: 5G
- Limitations
- This uses the normal OpenStack Swift mechanism to refresh the Swift API
- credentials and ignores the expires field returned by the Hubic API.
- The Swift API doesn't return a correct MD5SUM for segmented files
- (Dynamic or Static Large Objects) so rclone won't check or use the
- MD5SUM for these.
- Jottacloud
- Paths are specified as remote:path
- Paths may be as deep as required, eg remote:directory/subdirectory.
- To configure Jottacloud you will need to enter your username and
- password and select a mountpoint.
- Here is an example of how to make a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Enter a string value. Press Enter for the default ("").
- Choose a number from below, or type in your own value
- [snip]
- 13 / JottaCloud
- \ "jottacloud"
- [snip]
- Storage> jottacloud
- User Name
- Enter a string value. Press Enter for the default ("").
- user> user
- Password.
- y) Yes type in my own password
- g) Generate random password
- n) No leave this optional password blank
- y/g/n> y
- Enter the password:
- password:
- Confirm the password:
- password:
- The mountpoint to use.
- Enter a string value. Press Enter for the default ("").
- Choose a number from below, or type in your own value
- 1 / Will be synced by the official client.
- \ "Sync"
- 2 / Archive
- \ "Archive"
- mountpoint> Archive
- Remote config
- --------------------
- [remote]
- type = jottacloud
- user = user
- pass = *** ENCRYPTED ***
- mountpoint = Archive
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- Once configured you can then use rclone like this,
- List directories in top level of your Jottacloud
- rclone lsd remote:
- List all the files in your Jottacloud
- rclone ls remote:
- To copy a local directory to a Jottacloud directory called backup
- rclone copy /home/source remote:backup
- --fast-list
- This remote supports --fast-list which allows you to use fewer
- transactions in exchange for more memory. See the rclone docs for more
- details.
- Note that the implementation in Jottacloud always uses only a single API
- request to get the entire list, so for large folders this could lead to
- a long wait before the first results are shown.
- Modified time and hashes
- Jottacloud allows modification times to be set on objects accurate to 1
- second. These will be used to detect whether objects need syncing or
- not.
- Jottacloud supports MD5 type hashes, so you can use the --checksum flag.
- Note that Jottacloud requires the MD5 hash before upload so if the
- source does not have an MD5 checksum then the file will be cached
- temporarily on disk (wherever the TMPDIR environment variable points to)
- before it is uploaded. Small files will be cached in memory - see the
- --jottacloud-md5-memory-limit flag.
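- For example, to let files of up to 50M (an arbitrary illustrative
- value) have their MD5 calculated in memory rather than on disk:
- rclone copy /home/source remote:backup --jottacloud-md5-memory-limit 50M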
- Deleting files
- By default rclone will send all files to the trash when deleting files.
- Due to a lack of API documentation, emptying the trash is currently only
- possible via the Jottacloud website. If deleting permanently is required
- then use the --jottacloud-hard-delete flag, or set the equivalent
- environment variable.
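- For example, to permanently delete the files in a directory rather than
- sending them to the trash (a sketch; the path is illustrative):
- rclone delete remote:backup --jottacloud-hard-delete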
- Versions
- Jottacloud supports file versioning. When rclone uploads a changed file
- it creates a new version of it. Currently rclone only supports
- retrieving the current version, but older versions can be accessed via
- the Jottacloud website.
- Quota information
- To view your current quota you can use the rclone about remote: command
- which will display your usage limit (unless it is unlimited) and the
- current usage.
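- For example:
- rclone about remote: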
- Standard Options
- Here are the standard options specific to jottacloud (JottaCloud).
- --jottacloud-user
- User Name
- - Config: user
- - Env Var: RCLONE_JOTTACLOUD_USER
- - Type: string
- - Default: ""
- --jottacloud-pass
- Password.
- - Config: pass
- - Env Var: RCLONE_JOTTACLOUD_PASS
- - Type: string
- - Default: ""
- --jottacloud-mountpoint
- The mountpoint to use.
- - Config: mountpoint
- - Env Var: RCLONE_JOTTACLOUD_MOUNTPOINT
- - Type: string
- - Default: ""
- - Examples:
- - "Sync"
- - Will be synced by the official client.
- - "Archive"
- - Archive
- Advanced Options
- Here are the advanced options specific to jottacloud (JottaCloud).
- --jottacloud-md5-memory-limit
- Files bigger than this will be cached on disk to calculate the MD5 if
- required.
- - Config: md5_memory_limit
- - Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
- - Type: SizeSuffix
- - Default: 10M
- --jottacloud-hard-delete
- Delete files permanently rather than putting them into the trash.
- - Config: hard_delete
- - Env Var: RCLONE_JOTTACLOUD_HARD_DELETE
- - Type: bool
- - Default: false
- --jottacloud-unlink
- Remove an existing public link to a file/folder with the link command,
- rather than creating one. Default is false, meaning the link command
- will create or retrieve a public link.
- - Config: unlink
- - Env Var: RCLONE_JOTTACLOUD_UNLINK
- - Type: bool
- - Default: false
- Limitations
- Note that Jottacloud is case insensitive so you can't have a file called
- "Hello.doc" and one called "hello.doc".
- There are quite a few characters that can't be in Jottacloud file names.
- Rclone will map these names to and from an identical looking unicode
- equivalent. For example, if a file has a ? in it, it will be mapped to
- ？ instead.
- Jottacloud only supports filenames up to 255 characters in length.
- Troubleshooting
- Jottacloud exhibits some inconsistent behaviours regarding deleted files
- and folders which may cause Copy, Move and DirMove operations to
- previously deleted paths to fail. Emptying the trash should help in such
- cases.
- Mega
- Mega is a cloud storage and file hosting service known for its security
- feature where all files are encrypted locally before they are uploaded.
- This prevents anyone (including employees of Mega) from accessing the
- files without knowledge of the key used for encryption.
- This is an rclone backend for Mega which supports the file transfer
- features of Mega using the same client side encryption.
- Paths are specified as remote:path
- Paths may be as deep as required, eg remote:directory/subdirectory.
- Here is an example of how to make a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Alias for a existing remote
- \ "alias"
- [snip]
- 14 / Mega
- \ "mega"
- [snip]
- 23 / http Connection
- \ "http"
- Storage> mega
- User name
- user> you@example.com
- Password.
- y) Yes type in my own password
- g) Generate random password
- n) No leave this optional password blank
- y/g/n> y
- Enter the password:
- password:
- Confirm the password:
- password:
- Remote config
- --------------------
- [remote]
- type = mega
- user = you@example.com
- pass = *** ENCRYPTED ***
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- Once configured you can then use rclone like this,
- List directories in top level of your Mega
- rclone lsd remote:
- List all the files in your Mega
- rclone ls remote:
- To copy a local directory to a Mega directory called backup
- rclone copy /home/source remote:backup
- Modified time and hashes
- Mega does not support modification times or hashes yet.
- Duplicated files
- Mega can have two files with exactly the same name and path (unlike a
- normal file system).
- Duplicated files cause problems with the syncing and you will see
- messages in the log about duplicates.
- Use rclone dedupe to fix duplicated files.
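- For example, to find and fix duplicates under a directory interactively
- (a sketch; the path is illustrative):
- rclone dedupe remote:directory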
- Standard Options
- Here are the standard options specific to mega (Mega).
- --mega-user
- User name
- - Config: user
- - Env Var: RCLONE_MEGA_USER
- - Type: string
- - Default: ""
- --mega-pass
- Password.
- - Config: pass
- - Env Var: RCLONE_MEGA_PASS
- - Type: string
- - Default: ""
- Advanced Options
- Here are the advanced options specific to mega (Mega).
- --mega-debug
- Output more debug from Mega.
- If this flag is set (along with -vv) it will print further debugging
- information from the mega backend.
- - Config: debug
- - Env Var: RCLONE_MEGA_DEBUG
- - Type: bool
- - Default: false
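- For example, to list files with the extra Mega debugging enabled:
- rclone ls remote: -vv --mega-debug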
- --mega-hard-delete
- Delete files permanently rather than putting them into the trash.
- Normally the mega backend will put all deletions into the trash rather
- than permanently deleting them. If you specify this then rclone will
- permanently delete objects instead.
- - Config: hard_delete
- - Env Var: RCLONE_MEGA_HARD_DELETE
- - Type: bool
- - Default: false
- Limitations
- This backend uses the go-mega library, an open source Go library
- implementing the Mega API. There doesn't appear to be any documentation
- for the mega protocol beyond the mega C++ SDK source code, so there are
- likely quite a few errors still remaining in this library.
- Mega allows duplicate files which may confuse rclone.
- Microsoft Azure Blob Storage
- Paths are specified as remote:container (or remote: for the lsd
- command.) You may put subdirectories in too, eg
- remote:container/path/to/dir.
- Here is an example of making a Microsoft Azure Blob Storage
- configuration. For a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Box
- \ "box"
- 5 / Dropbox
- \ "dropbox"
- 6 / Encrypt/Decrypt a remote
- \ "crypt"
- 7 / FTP Connection
- \ "ftp"
- 8 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 9 / Google Drive
- \ "drive"
- 10 / Hubic
- \ "hubic"
- 11 / Local Disk
- \ "local"
- 12 / Microsoft Azure Blob Storage
- \ "azureblob"
- 13 / Microsoft OneDrive
- \ "onedrive"
- 14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 15 / SSH/SFTP Connection
- \ "sftp"
- 16 / Yandex Disk
- \ "yandex"
- 17 / http Connection
- \ "http"
- Storage> azureblob
- Storage Account Name
- account> account_name
- Storage Account Key
- key> base64encodedkey==
- Endpoint for the service - leave blank normally.
- endpoint>
- Remote config
- --------------------
- [remote]
- account = account_name
- key = base64encodedkey==
- endpoint =
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- See all containers
- rclone lsd remote:
- Make a new container
- rclone mkdir remote:container
- List the contents of a container
- rclone ls remote:container
- Sync /home/local/directory to the remote container, deleting any excess
- files in the container.
- rclone sync /home/local/directory remote:container
- --fast-list
- This remote supports --fast-list which allows you to use fewer
- transactions in exchange for more memory. See the rclone docs for more
- details.
- Modified time
- The modified time is stored as metadata on the object with the mtime
- key. It is stored using RFC3339 Format time with nanosecond precision.
- The metadata is supplied during directory listings so there is no
- overhead to using it.
- Hashes
- MD5 hashes are stored with blobs. However blobs that were uploaded in
- chunks only have an MD5 if the source remote was capable of MD5 hashes,
- eg the local disk.
- Authenticating with Azure Blob Storage
- Rclone has two ways of authenticating with Azure Blob Storage:
- Account and Key
- This is the most straightforward and least flexible way. Just fill in
- the account and key lines and leave the rest blank.
- SAS URL
- This can be an account level SAS URL or a container level SAS URL.
- To use it, leave account and key blank and fill in sas_url.
- Account level SAS URL or container level SAS URL can be obtained from
- Azure portal or Azure Storage Explorer. To get a container level SAS URL
- right click on a container in the Azure Blob explorer in the Azure
- portal.
- If you use a container level SAS URL, rclone operations are permitted
- only on that particular container, eg
- rclone ls azureblob:container or rclone ls azureblob:
- Since the container name is already in the SAS URL, you can leave it
- empty as well.
- However these will not work:
- rclone lsd azureblob:
- rclone ls azureblob:othercontainer
- This would be useful for temporarily allowing third parties access to a
- single container or putting credentials into an untrusted environment.
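- As a sketch (the SAS URL shown is a placeholder, not a real token), a
- container level SAS URL can also be supplied on the command line to an
- on-the-fly remote:
- rclone ls --azureblob-sas-url "https://ACCOUNT.blob.core.windows.net/container?..." :azureblob:container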
- Multipart uploads
- Rclone supports multipart uploads with Azure Blob storage. Files bigger
- than 256MB will be uploaded using chunked upload by default.
- The files will be uploaded in parallel in 4MB chunks (by default). Note
- that these chunks are buffered in memory and there may be up to
- --transfers of them being uploaded at once.
- Files can't be split into more than 50,000 chunks, so by default the
- largest file that can be uploaded with a 4MB chunk size is 195GB. Above
- this rclone will double the chunk size until it creates less than 50,000
- chunks. By default this will mean a maximum file size of 3.2TB can be
- uploaded. This can be raised to 5TB using --azureblob-chunk-size 100M.
- Note that rclone doesn't commit the block list until the end of the
- upload which means that there is a limit of 9.5TB of multipart uploads
- in progress as Azure won't allow more than that amount of uncommitted
- blocks.
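- For example, to raise the chunk size for a single very large upload (a
- sketch; largefile.bin is a hypothetical file):
- rclone copy largefile.bin remote:container --azureblob-chunk-size 100M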
- Standard Options
- Here are the standard options specific to azureblob (Microsoft Azure
- Blob Storage).
- --azureblob-account
- Storage Account Name (leave blank to use connection string or SAS URL)
- - Config: account
- - Env Var: RCLONE_AZUREBLOB_ACCOUNT
- - Type: string
- - Default: ""
- --azureblob-key
- Storage Account Key (leave blank to use connection string or SAS URL)
- - Config: key
- - Env Var: RCLONE_AZUREBLOB_KEY
- - Type: string
- - Default: ""
- --azureblob-sas-url
- SAS URL for container level access only (leave blank if using
- account/key or connection string)
- - Config: sas_url
- - Env Var: RCLONE_AZUREBLOB_SAS_URL
- - Type: string
- - Default: ""
- Advanced Options
- Here are the advanced options specific to azureblob (Microsoft Azure
- Blob Storage).
- --azureblob-endpoint
- Endpoint for the service Leave blank normally.
- - Config: endpoint
- - Env Var: RCLONE_AZUREBLOB_ENDPOINT
- - Type: string
- - Default: ""
- --azureblob-upload-cutoff
- Cutoff for switching to chunked upload (<= 256MB).
- - Config: upload_cutoff
- - Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
- - Type: SizeSuffix
- - Default: 256M
- --azureblob-chunk-size
- Upload chunk size (<= 100MB).
- Note that this is stored in memory and there may be up to "--transfers"
- chunks stored at once in memory.
- - Config: chunk_size
- - Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
- - Type: SizeSuffix
- - Default: 4M
- --azureblob-list-chunk
- Size of blob list.
- This sets the number of blobs requested in each listing chunk. The
- default is the maximum, 5000. "List blobs" requests are permitted 2
- minutes per megabyte to complete. If an operation is taking longer than
- 2 minutes per megabyte on average, it will time out. This option can be
- used to limit the number of blob items returned, to avoid the timeout.
- - Config: list_chunk
- - Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
- - Type: int
- - Default: 5000
- --azureblob-access-tier
- Access tier of blob: hot, cool or archive.
- Archived blobs can be restored by setting the access tier to hot or
- cool. Leave blank if you intend to use the default access tier, which is
- set at the account level.
- If there is no "access tier" specified, rclone doesn't apply any tier.
- rclone performs a "Set Tier" operation on blobs while uploading; if
- objects are not modified, specifying a new "access tier" will have no
- effect. If blobs are in the "archive tier" at the remote, data transfer
- operations from the remote will not be allowed. The user should first
- restore them by tiering the blobs to "Hot" or "Cool".
- - Config: access_tier
- - Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
- - Type: string
- - Default: ""
- Limitations
- MD5 sums are only uploaded with chunked files if the source has an MD5
- sum. This will always be the case for a local to azure copy.
- Microsoft OneDrive
- Paths are specified as remote:path
- Paths may be as deep as required, eg remote:directory/subdirectory.
- The initial setup for OneDrive involves getting a token from Microsoft
- which you need to do in your browser. rclone config walks you through
- it.
- Here is an example of how to make a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- e) Edit existing remote
- n) New remote
- d) Delete remote
- r) Rename remote
- c) Copy remote
- s) Set configuration password
- q) Quit config
- e/n/d/r/c/s/q> n
- name> remote
- Type of storage to configure.
- Enter a string value. Press Enter for the default ("").
- Choose a number from below, or type in your own value
- ...
- 17 / Microsoft OneDrive
- \ "onedrive"
- ...
- Storage> 17
- Microsoft App Client Id
- Leave blank normally.
- Enter a string value. Press Enter for the default ("").
- client_id>
- Microsoft App Client Secret
- Leave blank normally.
- Enter a string value. Press Enter for the default ("").
- client_secret>
- Edit advanced config? (y/n)
- y) Yes
- n) No
- y/n> n
- Remote config
- Use auto config?
- * Say Y if not sure
- * Say N if you are working on a remote or headless machine
- y) Yes
- n) No
- y/n> y
- If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
- Log in and authorize rclone for access
- Waiting for code...
- Got code
- Choose a number from below, or type in an existing value
- 1 / OneDrive Personal or Business
- \ "onedrive"
- 2 / Sharepoint site
- \ "sharepoint"
- 3 / Type in driveID
- \ "driveid"
- 4 / Type in SiteID
- \ "siteid"
- 5 / Search a Sharepoint site
- \ "search"
- Your choice> 1
- Found 1 drives, please select the one you want to use:
- 0: OneDrive (business) id=b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
- Chose drive to use:> 0
- Found drive 'root' of type 'business', URL: https://org-my.sharepoint.com/personal/you/Documents
- Is that okay?
- y) Yes
- n) No
- y/n> y
- --------------------
- [remote]
- type = onedrive
- token = {"access_token":"youraccesstoken","token_type":"Bearer","refresh_token":"yourrefreshtoken","expiry":"2018-08-26T22:39:52.486512262+08:00"}
- drive_id = b!Eqwertyuiopasdfghjklzxcvbnm-7mnbvcxzlkjhgfdsapoiuytrewqk
- drive_type = business
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- See the remote setup docs for how to set it up on a machine with no
- Internet browser available.
- Note that rclone runs a webserver on your local machine to collect the
- token as returned from Microsoft. This only runs from the moment it
- opens your browser to the moment you get back the verification code.
- This is on http://127.0.0.1:53682/ and you may need to unblock it
- temporarily if you are running a host firewall.
- Once configured you can then use rclone like this,
- List directories in top level of your OneDrive
- rclone lsd remote:
- List all the files in your OneDrive
- rclone ls remote:
- To copy a local directory to a OneDrive directory called backup
- rclone copy /home/source remote:backup
- Getting your own Client ID and Key
- rclone uses a pair of Client ID and Key shared by all rclone users when
- performing requests by default. If you are having problems with them
- (E.g., seeing a lot of throttling), you can get your own Client ID and
- Key by following the steps below:
- 1. Open https://apps.dev.microsoft.com/#/appList, then click Add an app
- (Choose Converged applications if applicable)
- 2. Enter a name for your app, and click continue. Copy and keep the
- Application Id under the app name for later use.
- 3. Under section Application Secrets, click Generate New Password. Copy
- and keep that password for later use.
- 4. Under section Platforms, click Add platform, then Web. Enter
- http://localhost:53682/ in Redirect URLs.
- 5. Under section Microsoft Graph Permissions, Add these
- delegated permissions: Files.Read, Files.ReadWrite, Files.Read.All,
- Files.ReadWrite.All, offline_access, User.Read.
- 6. Scroll to the bottom and click Save.
- Now the application is complete. Run rclone config to create or edit a
- OneDrive remote. Supply the app ID and password as Client ID and Secret,
- respectively. rclone will walk you through the remaining steps.
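- Alternatively, the Client ID and Secret can be supplied via the
- environment variables documented below rather than via rclone config (a
- sketch; both values are placeholders, and the remote's token must have
- been obtained with the same client ID):
- export RCLONE_ONEDRIVE_CLIENT_ID=your_application_id
- export RCLONE_ONEDRIVE_CLIENT_SECRET=your_application_password
- rclone lsd remote: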
- Modified time and hashes
- OneDrive allows modification times to be set on objects accurate to 1
- second. These will be used to detect whether objects need syncing or
- not.
- OneDrive personal supports SHA1 type hashes. OneDrive for business and
- Sharepoint Server support QuickXorHash.
- For all types of OneDrive you can use the --checksum flag.
- Deleting files
- Any files you delete with rclone will end up in the trash. Microsoft
- doesn't provide an API to permanently delete files, nor to empty the
- trash, so you will have to do that with one of Microsoft's apps or via
- the OneDrive website.
- Standard Options
- Here are the standard options specific to onedrive (Microsoft OneDrive).
- --onedrive-client-id
- Microsoft App Client Id Leave blank normally.
- - Config: client_id
- - Env Var: RCLONE_ONEDRIVE_CLIENT_ID
- - Type: string
- - Default: ""
- --onedrive-client-secret
- Microsoft App Client Secret Leave blank normally.
- - Config: client_secret
- - Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET
- - Type: string
- - Default: ""
- Advanced Options
- Here are the advanced options specific to onedrive (Microsoft OneDrive).
- --onedrive-chunk-size
- Chunk size to upload files with - must be multiple of 320k.
- Above this size files will be chunked - must be multiple of 320k. Note
- that the chunks will be buffered into memory.
- - Config: chunk_size
- - Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
- - Type: SizeSuffix
- - Default: 10M
- --onedrive-drive-id
- The ID of the drive to use
- - Config: drive_id
- - Env Var: RCLONE_ONEDRIVE_DRIVE_ID
- - Type: string
- - Default: ""
- --onedrive-drive-type
- The type of the drive ( personal | business | documentLibrary )
- - Config: drive_type
- - Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE
- - Type: string
- - Default: ""
- --onedrive-expose-onenote-files
- Set to make OneNote files show up in directory listings.
- By default rclone will hide OneNote files in directory listings because
- operations like "Open" and "Update" won't work on them. But this
- behaviour may also prevent you from deleting them. If you want to delete
- OneNote files or otherwise want them to show up in directory listings,
- set this option.
- - Config: expose_onenote_files
- - Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES
- - Type: bool
- - Default: false
- Limitations
- Note that OneDrive is case insensitive so you can't have a file called
- "Hello.doc" and one called "hello.doc".
- There are quite a few characters that can't be in OneDrive file names.
- These can't occur on Windows platforms, but on non-Windows platforms
- they are common. Rclone will map these names to and from an identical
- looking unicode equivalent. For example, if a file has a ? in it, it
- will be mapped to ？ instead.
- The largest allowed file size is 10GiB (10,737,418,240 bytes).
- Versioning issue
- Every change in OneDrive causes the service to create a new version.
- This counts against a user's quota. For example, changing the modification
- time of a file creates a second version, so the file is using twice the
- space.
- The copy command is the only rclone command affected by this, as we copy
- the file and then afterwards set the modification time to match the
- source file.
- User Weropol has found a method to disable versioning on OneDrive:
- 1. Open the settings menu by clicking on the gear symbol at the top of
- the OneDrive Business page.
- 2. Click Site settings.
- 3. Once on the Site settings page, navigate to Site Administration >
- Site libraries and lists.
- 4. Click Customize "Documents".
- 5. Click General Settings > Versioning Settings.
- 6. Under Document Version History select the option No versioning.
- Note: This will disable the creation of new file versions, but will
- not remove any previous versions. Your documents are safe.
- 7. Apply the changes by clicking OK.
- 8. Use rclone to upload or modify files. (I also use the
- --no-update-modtime flag)
- 9. Restore the versioning settings after using rclone. (Optional)
- Troubleshooting
- Error: access_denied
- Code: AADSTS65005
- Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.
- This means that rclone can't use the OneDrive for Business API with your
- account. You can't do much about it, maybe write an email to your
- admins.
- However, there are other ways to interact with your OneDrive account.
- Have a look at the webdav backend: https://rclone.org/webdav/#sharepoint
- OpenDrive
- Paths are specified as remote:path
- Paths may be as deep as required, eg remote:directory/subdirectory.
- Here is an example of how to make a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- n) New remote
- d) Delete remote
- q) Quit config
- e/n/d/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 7 / Google Drive
- \ "drive"
- 8 / Hubic
- \ "hubic"
- 9 / Local Disk
- \ "local"
- 10 / OpenDrive
- \ "opendrive"
- 11 / Microsoft OneDrive
- \ "onedrive"
- 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 13 / SSH/SFTP Connection
- \ "sftp"
- 14 / Yandex Disk
- \ "yandex"
- Storage> 10
- Username
- username>
- Password
- y) Yes type in my own password
- g) Generate random password
- y/g> y
- Enter the password:
- password:
- Confirm the password:
- password:
- --------------------
- [remote]
- username =
- password = *** ENCRYPTED ***
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- List directories in top level of your OpenDrive
- rclone lsd remote:
- List all the files in your OpenDrive
- rclone ls remote:
- To copy a local directory to an OpenDrive directory called backup
- rclone copy /home/source remote:backup
- Modified time and MD5SUMs
- OpenDrive allows modification times to be set on objects accurate to 1
- second. These will be used to detect whether objects need syncing or
- not.
- Standard Options
- Here are the standard options specific to opendrive (OpenDrive).
- --opendrive-username
- Username
- - Config: username
- - Env Var: RCLONE_OPENDRIVE_USERNAME
- - Type: string
- - Default: ""
- --opendrive-password
- Password.
- - Config: password
- - Env Var: RCLONE_OPENDRIVE_PASSWORD
- - Type: string
- - Default: ""
- Limitations
- Note that OpenDrive is case insensitive so you can't have a file called
- "Hello.doc" and one called "hello.doc".
- There are quite a few characters that can't be in OpenDrive file names.
- These can't occur on Windows platforms, but on non-Windows platforms
- they are common. Rclone will map these names to and from an identical
- looking unicode equivalent. For example, if a file has a ? in it, it
- will be mapped to ？ instead.
- QingStor
- Paths are specified as remote:bucket (or remote: for the lsd command.)
- You may put subdirectories in too, eg remote:bucket/path/to/dir.
- Here is an example of making a QingStor configuration. First run
- rclone config
- This will guide you through an interactive setup process.
- No remotes found - make a new one
- n) New remote
- r) Rename remote
- c) Copy remote
- s) Set configuration password
- q) Quit config
- n/r/c/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / FTP Connection
- \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 8 / Google Drive
- \ "drive"
- 9 / Hubic
- \ "hubic"
- 10 / Local Disk
- \ "local"
- 11 / Microsoft OneDrive
- \ "onedrive"
- 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 13 / QingStor Object Storage
- \ "qingstor"
- 14 / SSH/SFTP Connection
- \ "sftp"
- 15 / Yandex Disk
- \ "yandex"
- Storage> 13
- Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- Choose a number from below, or type in your own value
- 1 / Enter QingStor credentials in the next step
- \ "false"
- 2 / Get QingStor credentials from the environment (env vars or IAM)
- \ "true"
- env_auth> 1
- QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
- access_key_id> access_key
- QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
- secret_access_key> secret_key
- Enter a endpoint URL to connection QingStor API.
- Leave blank will use the default value "https://qingstor.com:443"
- endpoint>
- Zone connect to. Default is "pek3a".
- Choose a number from below, or type in your own value
- / The Beijing (China) Three Zone
- 1 | Needs location constraint pek3a.
- \ "pek3a"
- / The Shanghai (China) First Zone
- 2 | Needs location constraint sh1a.
- \ "sh1a"
- zone> 1
- Number of connnection retry.
- Leave blank will use the default value "3".
- connection_retries>
- Remote config
- --------------------
- [remote]
- env_auth = false
- access_key_id = access_key
- secret_access_key = secret_key
- endpoint =
- zone = pek3a
- connection_retries =
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- This remote is called remote and can now be used like this
- See all buckets
- rclone lsd remote:
- Make a new bucket
- rclone mkdir remote:bucket
- List the contents of a bucket
- rclone ls remote:bucket
- Sync /home/local/directory to the remote bucket, deleting any excess
- files in the bucket.
- rclone sync /home/local/directory remote:bucket
- --fast-list
- This remote supports --fast-list which allows you to use fewer
- transactions in exchange for more memory. See the rclone docs for more
- details.
- Multipart uploads
- rclone supports multipart uploads with QingStor which means that it can
- upload files bigger than 5GB. Note that files uploaded with multipart
- upload don't have an MD5SUM.
- Buckets and Zone
- With QingStor you can list buckets (rclone lsd) using any zone, but you
- can only access the content of a bucket from the zone it was created in.
- If you attempt to access a bucket from the wrong zone, you will get an
- error, incorrect zone, the bucket is not in 'XXX' zone.
- Authentication
- There are two ways to supply rclone with a set of QingStor credentials.
- In order of precedence:
- - Directly in the rclone configuration file (as configured by
- rclone config)
- - set access_key_id and secret_access_key
- - Runtime configuration:
- - set env_auth to true in the config file
- - Exporting the following environment variables before running rclone
- - Access Key ID: QS_ACCESS_KEY_ID or QS_ACCESS_KEY
- - Secret Access Key: QS_SECRET_ACCESS_KEY or QS_SECRET_KEY
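- For example, with a remote configured with env_auth = true (a sketch;
- both values are placeholders):
- export QS_ACCESS_KEY_ID=your_access_key
- export QS_SECRET_ACCESS_KEY=your_secret_key
- rclone lsd remote: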
- Standard Options
- Here are the standard options specific to qingstor (QingCloud Object
- Storage).
- --qingstor-env-auth
- Get QingStor credentials from runtime. Only applies if access_key_id and
- secret_access_key are blank.
- - Config: env_auth
- - Env Var: RCLONE_QINGSTOR_ENV_AUTH
- - Type: bool
- - Default: false
- - Examples:
- - "false"
- - Enter QingStor credentials in the next step
- - "true"
- - Get QingStor credentials from the environment (env vars or
- IAM)
- --qingstor-access-key-id
- QingStor Access Key ID Leave blank for anonymous access or runtime
- credentials.
- - Config: access_key_id
- - Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
- - Type: string
- - Default: ""
- --qingstor-secret-access-key
- QingStor Secret Access Key (password) Leave blank for anonymous access
- or runtime credentials.
- - Config: secret_access_key
- - Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
- - Type: string
- - Default: ""
- --qingstor-endpoint
- Enter an endpoint URL to connect to the QingStor API. Leave blank to use
- the default value "https://qingstor.com:443"
- - Config: endpoint
- - Env Var: RCLONE_QINGSTOR_ENDPOINT
- - Type: string
- - Default: ""
- --qingstor-zone
- Zone to connect to. Default is "pek3a".
- - Config: zone
- - Env Var: RCLONE_QINGSTOR_ZONE
- - Type: string
- - Default: ""
- - Examples:
- - "pek3a"
- - The Beijing (China) Three Zone
- - Needs location constraint pek3a.
- - "sh1a"
- - The Shanghai (China) First Zone
- - Needs location constraint sh1a.
- - "gd2a"
- - The Guangdong (China) Second Zone
- - Needs location constraint gd2a.
- Advanced Options
- Here are the advanced options specific to qingstor (QingCloud Object
- Storage).
- --qingstor-connection-retries
- Number of connection retries.
- - Config: connection_retries
- - Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
- - Type: int
- - Default: 3
- Swift
- Swift refers to OpenStack Object Storage. Commercial implementations
- include:
- - Rackspace Cloud Files
- - Memset Memstore
- - OVH Object Storage
- - Oracle Cloud Storage
- - IBM Bluemix Cloud ObjectStorage Swift
- Paths are specified as remote:container (or remote: for the lsd
- command.) You may put subdirectories in too, eg
- remote:container/path/to/dir.
- Here is an example of making a swift configuration. First run
- rclone config
- This will guide you through an interactive setup process.
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Box
- \ "box"
- 5 / Cache a remote
- \ "cache"
- 6 / Dropbox
- \ "dropbox"
- 7 / Encrypt/Decrypt a remote
- \ "crypt"
- 8 / FTP Connection
- \ "ftp"
- 9 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 10 / Google Drive
- \ "drive"
- 11 / Hubic
- \ "hubic"
- 12 / Local Disk
- \ "local"
- 13 / Microsoft Azure Blob Storage
- \ "azureblob"
- 14 / Microsoft OneDrive
- \ "onedrive"
- 15 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 16 / Pcloud
- \ "pcloud"
- 17 / QingCloud Object Storage
- \ "qingstor"
- 18 / SSH/SFTP Connection
- \ "sftp"
- 19 / Webdav
- \ "webdav"
- 20 / Yandex Disk
- \ "yandex"
- 21 / http Connection
- \ "http"
- Storage> swift
- Get swift credentials from environment variables in standard OpenStack form.
- Choose a number from below, or type in your own value
- 1 / Enter swift credentials in the next step
- \ "false"
- 2 / Get swift credentials from environment vars. Leave other fields blank if using this.
- \ "true"
- env_auth> true
- User name to log in (OS_USERNAME).
- user>
- API key or password (OS_PASSWORD).
- key>
- Authentication URL for server (OS_AUTH_URL).
- Choose a number from below, or type in your own value
- 1 / Rackspace US
- \ "https://auth.api.rackspacecloud.com/v1.0"
- 2 / Rackspace UK
- \ "https://lon.auth.api.rackspacecloud.com/v1.0"
- 3 / Rackspace v2
- \ "https://identity.api.rackspacecloud.com/v2.0"
- 4 / Memset Memstore UK
- \ "https://auth.storage.memset.com/v1.0"
- 5 / Memset Memstore UK v2
- \ "https://auth.storage.memset.com/v2.0"
- 6 / OVH
- \ "https://auth.cloud.ovh.net/v2.0"
- auth>
- User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- user_id>
- User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- domain>
- Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- tenant>
- Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- tenant_id>
- Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- tenant_domain>
- Region name - optional (OS_REGION_NAME)
- region>
- Storage URL - optional (OS_STORAGE_URL)
- storage_url>
- Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- auth_token>
- AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- auth_version>
- Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
- Choose a number from below, or type in your own value
- 1 / Public (default, choose this if not sure)
- \ "public"
- 2 / Internal (use internal service net)
- \ "internal"
- 3 / Admin
- \ "admin"
- endpoint_type>
- Remote config
- --------------------
- [remote]
- env_auth = true
- user =
- key =
- auth =
- user_id =
- domain =
- tenant =
- tenant_id =
- tenant_domain =
- region =
- storage_url =
- auth_token =
- auth_version =
- endpoint_type =
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- This remote is called remote and can now be used like this
- See all containers
- rclone lsd remote:
- Make a new container
- rclone mkdir remote:container
- List the contents of a container
- rclone ls remote:container
- Sync /home/local/directory to the remote container, deleting any excess
- files in the container.
- rclone sync /home/local/directory remote:container
- Configuration from an Openstack credentials file
- An OpenStack credentials file typically looks something like this
- (without the comments)
- export OS_AUTH_URL=https://a.provider.net/v2.0
- export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
- export OS_TENANT_NAME="1234567890123456"
- export OS_USERNAME="123abc567xy"
- echo "Please enter your OpenStack Password: "
- read -sr OS_PASSWORD_INPUT
- export OS_PASSWORD=$OS_PASSWORD_INPUT
- export OS_REGION_NAME="SBG1"
- if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
- The config file needs to look something like this where $OS_USERNAME
- represents the value of the OS_USERNAME variable - 123abc567xy in the
- example above.
- [remote]
- type = swift
- user = $OS_USERNAME
- key = $OS_PASSWORD
- auth = $OS_AUTH_URL
- tenant = $OS_TENANT_NAME
- Note that you may (or may not) need to set region too - try without
- first.
- Configuration from the environment
- If you prefer you can configure rclone to use swift using a standard set
- of OpenStack environment variables.
- When you run through the config, make sure you choose true for env_auth
- and leave everything else blank.
- rclone will then set any empty config parameters from the environment
- using standard OpenStack environment variables. There is a list of the
- variables in the docs for the swift library.
- Using an alternate authentication method
- If your OpenStack installation uses a non-standard authentication method
- that might not be yet supported by rclone or the underlying swift
- library, you can authenticate externally (e.g. manually calling the
- openstack commands to get a token). Then, you just need to pass the two
- configuration variables auth_token and storage_url. If they are both
- provided, the other variables are ignored. rclone will not try to
- authenticate but instead assume it is already authenticated and use
- these two variables to access the OpenStack installation.
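- A minimal sketch of such a config (both values are placeholders you
- would obtain from your own OpenStack tooling):
- [remote]
- type = swift
- auth_token = XXXXXXXXXXXXXXXXXXXX
- storage_url = https://swift.example.com/v1/AUTH_tenant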
- Using rclone without a config file
- You can use rclone with swift without a config file, if desired, like
- this:
- source openstack-credentials-file
- export RCLONE_CONFIG_MYREMOTE_TYPE=swift
- export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
- rclone lsd myremote:
- --fast-list
- This remote supports --fast-list which allows you to use fewer
- transactions in exchange for more memory. See the rclone docs for more
- details.
- --update and --use-server-modtime
- As noted below, the modified time is stored as metadata on the object.
- It is used by default for all operations that require checking the time
- a file was last updated. It allows rclone to treat the remote more like
- a true filesystem, but it is inefficient because it requires an extra
- API call to retrieve the metadata.
- For many operations, the time the object was last uploaded to the remote
- is sufficient to determine if it is "dirty". By using --update along
- with --use-server-modtime, you can avoid the extra API call and simply
- upload files whose local modtime is newer than the time it was last
- uploaded.
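- For example:
- rclone sync /home/local/directory remote:container --update --use-server-modtime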
- Standard Options
- Here are the standard options specific to swift (Openstack Swift
- (Rackspace Cloud Files, Memset Memstore, OVH)).
- --swift-env-auth
- Get swift credentials from environment variables in standard OpenStack
- form.
- - Config: env_auth
- - Env Var: RCLONE_SWIFT_ENV_AUTH
- - Type: bool
- - Default: false
- - Examples:
- - "false"
- - Enter swift credentials in the next step
- - "true"
- - Get swift credentials from environment vars. Leave other
- fields blank if using this.
- --swift-user
- User name to log in (OS_USERNAME).
- - Config: user
- - Env Var: RCLONE_SWIFT_USER
- - Type: string
- - Default: ""
- --swift-key
- API key or password (OS_PASSWORD).
- - Config: key
- - Env Var: RCLONE_SWIFT_KEY
- - Type: string
- - Default: ""
- --swift-auth
- Authentication URL for server (OS_AUTH_URL).
- - Config: auth
- - Env Var: RCLONE_SWIFT_AUTH
- - Type: string
- - Default: ""
- - Examples:
- - "https://auth.api.rackspacecloud.com/v1.0"
- - Rackspace US
- - "https://lon.auth.api.rackspacecloud.com/v1.0"
- - Rackspace UK
- - "https://identity.api.rackspacecloud.com/v2.0"
- - Rackspace v2
- - "https://auth.storage.memset.com/v1.0"
- - Memset Memstore UK
- - "https://auth.storage.memset.com/v2.0"
- - Memset Memstore UK v2
- - "https://auth.cloud.ovh.net/v2.0"
- - OVH
- --swift-user-id
- User ID to log in - optional - most swift systems use user and leave
- this blank (v3 auth) (OS_USER_ID).
- - Config: user_id
- - Env Var: RCLONE_SWIFT_USER_ID
- - Type: string
- - Default: ""
- --swift-domain
- User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- - Config: domain
- - Env Var: RCLONE_SWIFT_DOMAIN
- - Type: string
- - Default: ""
- --swift-tenant
- Tenant name - optional for v1 auth, this or tenant_id required otherwise
- (OS_TENANT_NAME or OS_PROJECT_NAME)
- - Config: tenant
- - Env Var: RCLONE_SWIFT_TENANT
- - Type: string
- - Default: ""
- --swift-tenant-id
- Tenant ID - optional for v1 auth, this or tenant required otherwise
- (OS_TENANT_ID)
- - Config: tenant_id
- - Env Var: RCLONE_SWIFT_TENANT_ID
- - Type: string
- - Default: ""
- --swift-tenant-domain
- Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- - Config: tenant_domain
- - Env Var: RCLONE_SWIFT_TENANT_DOMAIN
- - Type: string
- - Default: ""
- --swift-region
- Region name - optional (OS_REGION_NAME)
- - Config: region
- - Env Var: RCLONE_SWIFT_REGION
- - Type: string
- - Default: ""
- --swift-storage-url
- Storage URL - optional (OS_STORAGE_URL)
- - Config: storage_url
- - Env Var: RCLONE_SWIFT_STORAGE_URL
- - Type: string
- - Default: ""
- --swift-auth-token
- Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- - Config: auth_token
- - Env Var: RCLONE_SWIFT_AUTH_TOKEN
- - Type: string
- - Default: ""
- --swift-auth-version
- AuthVersion - optional - set to (1,2,3) if your auth URL has no version
- (ST_AUTH_VERSION)
- - Config: auth_version
- - Env Var: RCLONE_SWIFT_AUTH_VERSION
- - Type: int
- - Default: 0
- --swift-endpoint-type
- Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
- - Config: endpoint_type
- - Env Var: RCLONE_SWIFT_ENDPOINT_TYPE
- - Type: string
- - Default: "public"
- - Examples:
- - "public"
- - Public (default, choose this if not sure)
- - "internal"
- - Internal (use internal service net)
- - "admin"
- - Admin
- --swift-storage-policy
- The storage policy to use when creating a new container
- This applies the specified storage policy when creating a new container.
- The policy cannot be changed afterwards. The allowed configuration
- values and their meaning depend on your Swift storage provider.
- - Config: storage_policy
- - Env Var: RCLONE_SWIFT_STORAGE_POLICY
- - Type: string
- - Default: ""
- - Examples:
- - ""
- - Default
- - "pcs"
- - OVH Public Cloud Storage
- - "pca"
- - OVH Public Cloud Archive
- Advanced Options
- Here are the advanced options specific to swift (Openstack Swift
- (Rackspace Cloud Files, Memset Memstore, OVH)).
- --swift-chunk-size
- Above this size files will be chunked into a _segments container. The
- default for this is 5GB which is its maximum value.
- - Config: chunk_size
- - Env Var: RCLONE_SWIFT_CHUNK_SIZE
- - Type: SizeSuffix
- - Default: 5G
- Modified time
- The modified time is stored as metadata on the object as
- X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
- This is a defacto standard (used in the official python-swiftclient
- amongst others) for storing the modification time for an object.
- Limitations
- The Swift API doesn't return a correct MD5SUM for segmented files
- (Dynamic or Static Large Objects) so rclone won't check or use the
- MD5SUM for these.
- Troubleshooting
- Rclone gives Failed to create file system for "remote:": Bad Request
- Due to an oddity of the underlying swift library, it gives a "Bad
- Request" error rather than a more sensible error when the authentication
- fails for Swift.
- So this most likely means your username / password is wrong. You can
- investigate further with the --dump-bodies flag.
- This may also be caused by specifying the region when you shouldn't have
- (eg OVH).
- Rclone gives Failed to create file system: Response didn't have storage url and auth token
- This is most likely caused by forgetting to specify your tenant when
- setting up a swift remote.
- pCloud
- Paths are specified as remote:path
- Paths may be as deep as required, eg remote:directory/subdirectory.
- The initial setup for pCloud involves getting a token from pCloud which
- you need to do in your browser. rclone config walks you through it.
- Here is an example of how to make a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Box
- \ "box"
- 5 / Dropbox
- \ "dropbox"
- 6 / Encrypt/Decrypt a remote
- \ "crypt"
- 7 / FTP Connection
- \ "ftp"
- 8 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 9 / Google Drive
- \ "drive"
- 10 / Hubic
- \ "hubic"
- 11 / Local Disk
- \ "local"
- 12 / Microsoft Azure Blob Storage
- \ "azureblob"
- 13 / Microsoft OneDrive
- \ "onedrive"
- 14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 15 / Pcloud
- \ "pcloud"
- 16 / QingCloud Object Storage
- \ "qingstor"
- 17 / SSH/SFTP Connection
- \ "sftp"
- 18 / Yandex Disk
- \ "yandex"
- 19 / http Connection
- \ "http"
- Storage> pcloud
- Pcloud App Client Id - leave blank normally.
- client_id>
- Pcloud App Client Secret - leave blank normally.
- client_secret>
- Remote config
- Use auto config?
- * Say Y if not sure
- * Say N if you are working on a remote or headless machine
- y) Yes
- n) No
- y/n> y
- If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
- Log in and authorize rclone for access
- Waiting for code...
- Got code
- --------------------
- [remote]
- client_id =
- client_secret =
- token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- See the remote setup docs for how to set it up on a machine with no
- Internet browser available.
- Note that rclone runs a webserver on your local machine to collect the
- token as returned from pCloud. This only runs from the moment it opens
- your browser to the moment you get back the verification code. This is
- on http://127.0.0.1:53682/ and you may need to unblock it temporarily
- if you are running a host firewall.
- Once configured you can then use rclone like this,
- List directories in top level of your pCloud
- rclone lsd remote:
- List all the files in your pCloud
- rclone ls remote:
- To copy a local directory to a pCloud directory called backup
- rclone copy /home/source remote:backup
- Modified time and hashes
- pCloud allows modification times to be set on objects accurate to 1
- second. These will be used to detect whether objects need syncing or
- not. In order to set a modification time pCloud requires the object to
- be re-uploaded.
- pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum
- flag.
- Deleting files
- Deleted files will be moved to the trash. Your subscription level will
- determine how long items stay in the trash. rclone cleanup can be used
- to empty the trash.
- Standard Options
- Here are the standard options specific to pcloud (Pcloud).
- --pcloud-client-id
- Pcloud App Client Id Leave blank normally.
- - Config: client_id
- - Env Var: RCLONE_PCLOUD_CLIENT_ID
- - Type: string
- - Default: ""
- --pcloud-client-secret
- Pcloud App Client Secret Leave blank normally.
- - Config: client_secret
- - Env Var: RCLONE_PCLOUD_CLIENT_SECRET
- - Type: string
- - Default: ""
- SFTP
- SFTP is the Secure (or SSH) File Transfer Protocol.
- SFTP runs over SSH v2 and is installed as standard with most modern SSH
- installations.
- Paths are specified as remote:path. If the path does not begin with a /
- it is relative to the home directory of the user. An empty path remote:
- refers to the user's home directory.
- Note that some SFTP servers will need the leading / - Synology is a good
- example of this.
- Here is an example of making an SFTP configuration. First run
- rclone config
- This will guide you through an interactive setup process.
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / FTP Connection
- \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 8 / Google Drive
- \ "drive"
- 9 / Hubic
- \ "hubic"
- 10 / Local Disk
- \ "local"
- 11 / Microsoft OneDrive
- \ "onedrive"
- 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 13 / SSH/SFTP Connection
- \ "sftp"
- 14 / Yandex Disk
- \ "yandex"
- 15 / http Connection
- \ "http"
- Storage> sftp
- SSH host to connect to
- Choose a number from below, or type in your own value
- 1 / Connect to example.com
- \ "example.com"
- host> example.com
- SSH username, leave blank for current username, ncw
- user> sftpuser
- SSH port, leave blank to use default (22)
- port>
- SSH password, leave blank to use ssh-agent.
- y) Yes type in my own password
- g) Generate random password
- n) No leave this optional password blank
- y/g/n> n
- Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
- key_file>
- Remote config
- --------------------
- [remote]
- host = example.com
- user = sftpuser
- port =
- pass =
- key_file =
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- This remote is called remote and can now be used like this:
- See all directories in the home directory
- rclone lsd remote:
- Make a new directory
- rclone mkdir remote:path/to/directory
- List the contents of a directory
- rclone ls remote:path/to/directory
- Sync /home/local/directory to the remote directory, deleting any excess
- files in the directory.
- rclone sync /home/local/directory remote:directory
- SSH Authentication
- The SFTP remote supports three authentication methods:
- - Password
- - Key file
- - ssh-agent
- Key files should be unencrypted PEM-encoded private key files. For
- instance /home/$USER/.ssh/id_rsa.
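- If you need to create such a key, something like the following should
- work (a sketch; the output path is arbitrary, -N "" produces an
- unencrypted key, and the result must be PEM-encoded as noted above):
- ssh-keygen -t rsa -b 4096 -N "" -f /home/$USER/.ssh/rclone_key
- Then set key_file to /home/$USER/.ssh/rclone_key in your config.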
- If you don't specify pass or key_file then rclone will attempt to
- contact an ssh-agent.
- If you set the --sftp-ask-password option, rclone will prompt for a
- password when one is needed and none has been configured.
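- For example (an illustrative invocation against a remote with no
- stored password):
- rclone --sftp-ask-password lsd remote: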
- ssh-agent on macOS
- Note that there seem to be various problems with using an ssh-agent on
- macOS due to recent changes in the OS. The most effective work-around
- seems to be to start an ssh-agent in each session, eg
- eval `ssh-agent -s` && ssh-add -A
- And then at the end of the session
- eval `ssh-agent -k`
- These commands can be used in scripts of course.
- Modified time
- Modified times are stored on the server to 1 second precision.
- Modified times are used in syncing and are fully supported.
- Some SFTP servers disable setting/modifying the file modification time
- after upload (for example, certain configurations of ProFTPd with
- mod_sftp). If you are using one of these servers, you can set the option
- set_modtime = false in your rclone backend configuration to disable this
- behaviour.
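- A minimal config sketch for such a server (the host name is a
- placeholder):
- [remote]
- type = sftp
- host = example.com
- set_modtime = false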
- Standard Options
- Here are the standard options specific to sftp (SSH/SFTP Connection).
- --sftp-host
- SSH host to connect to
- - Config: host
- - Env Var: RCLONE_SFTP_HOST
- - Type: string
- - Default: ""
- - Examples:
- - "example.com"
- - Connect to example.com
- --sftp-user
- SSH username, leave blank for current username, ncw
- - Config: user
- - Env Var: RCLONE_SFTP_USER
- - Type: string
- - Default: ""
- --sftp-port
- SSH port, leave blank to use default (22)
- - Config: port
- - Env Var: RCLONE_SFTP_PORT
- - Type: string
- - Default: ""
- --sftp-pass
- SSH password, leave blank to use ssh-agent.
- - Config: pass
- - Env Var: RCLONE_SFTP_PASS
- - Type: string
- - Default: ""
- --sftp-key-file
- Path to unencrypted PEM-encoded private key file, leave blank to use
- ssh-agent.
- - Config: key_file
- - Env Var: RCLONE_SFTP_KEY_FILE
- - Type: string
- - Default: ""
- --sftp-use-insecure-cipher
- Enable the use of the aes128-cbc cipher. This cipher is insecure and may
- allow plaintext data to be recovered by an attacker.
- - Config: use_insecure_cipher
- - Env Var: RCLONE_SFTP_USE_INSECURE_CIPHER
- - Type: bool
- - Default: false
- - Examples:
- - "false"
- - Use default Cipher list.
- - "true"
- - Enables the use of the aes128-cbc cipher.
- --sftp-disable-hashcheck
- Disable the execution of SSH commands to determine if remote file
- hashing is available. Leave blank or set to false to enable hashing
- (recommended), set to true to disable hashing.
- - Config: disable_hashcheck
- - Env Var: RCLONE_SFTP_DISABLE_HASHCHECK
- - Type: bool
- - Default: false
- Advanced Options
- Here are the advanced options specific to sftp (SSH/SFTP Connection).
- --sftp-ask-password
- Allow asking for SFTP password when needed.
- - Config: ask_password
- - Env Var: RCLONE_SFTP_ASK_PASSWORD
- - Type: bool
- - Default: false
- --sftp-path-override
- Override path used by SSH connection.
- This allows checksum calculation when SFTP and SSH paths are different.
- This issue affects among others Synology NAS boxes.
- Shared folders can be found in directories representing volumes
- rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory
- Home directory can be found in a shared folder called "home"
- rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory
- - Config: path_override
- - Env Var: RCLONE_SFTP_PATH_OVERRIDE
- - Type: string
- - Default: ""
- --sftp-set-modtime
- Set the modified time on the remote if set.
- - Config: set_modtime
- - Env Var: RCLONE_SFTP_SET_MODTIME
- - Type: bool
- - Default: true
- Limitations
- SFTP supports checksums if the same login has shell access and md5sum or
- sha1sum as well as echo are in the remote's PATH. This remote
- checksumming (file hashing) is recommended and enabled by default.
- Disabling the checksumming may be required if you are connecting to SFTP
- servers which are not under your control, and to which the execution of
- remote commands is prohibited. Set the configuration option
- disable_hashcheck to true to disable checksumming.
- Note that on some SFTP servers (eg Synology) the paths are different
- for SSH and SFTP so the hashes can't be calculated properly. For these
- servers, setting disable_hashcheck is a good idea.
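- For example (the remote name is a placeholder):
- rclone --sftp-disable-hashcheck lsd mynas: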
- The only ssh agent supported under Windows is Putty's pageant.
- The Go SSH library disables the use of the aes128-cbc cipher by default,
- due to security concerns. This can be re-enabled on a per-connection
- basis by setting the use_insecure_cipher setting in the configuration
- file to true. Further details on the insecurity of this cipher can be
- found in this paper (http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).
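- A config sketch with the cipher re-enabled (only do this if you have
- no alternative; the host name is a placeholder):
- [remote]
- type = sftp
- host = example.com
- use_insecure_cipher = true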
- SFTP isn't supported under plan9 until this issue is fixed.
- Note that since SFTP isn't HTTP based the following flags don't work
- with it: --dump-headers, --dump-bodies, --dump-auth
- Note that --timeout isn't supported (but --contimeout is).
- Union
- The union remote provides a unification similar to UnionFS using other
- remotes.
- Paths may be as deep as required or a local path, eg
- remote:directory/subdirectory or /directory/subdirectory.
- During the initial setup with rclone config you will specify the target
- remotes as a space separated list. The target remotes can be either
- local paths or other remotes.
- The order of the remotes is important as it defines which remotes take
- precedence over others if there are files with the same name in the same
- logical path. The last remote is the topmost remote and replaces files
- with the same name from previous remotes.
- Only the last remote is used to write to and delete from; all other
- remotes are read-only.
- Subfolders can be used in a target remote. Assume a union remote named
- backup with the remotes mydrive:private/backup mydrive2:/backup.
- Invoking rclone mkdir backup:desktop is exactly the same as invoking
- rclone mkdir mydrive2:/backup/desktop.
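- The config entry for that example would look something like this (a
- sketch, assuming mydrive and mydrive2 are already configured):
- [backup]
- type = union
- remotes = mydrive:private/backup mydrive2:/backup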
- There will be no special handling of paths containing .. segments.
- Invoking rclone mkdir backup:../desktop is exactly the same as invoking
- rclone mkdir mydrive2:/backup/../desktop.
- Here is an example of how to make a union called remote for local
- folders. First run:
- rclone config
- This will guide you through an interactive setup process:
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Alias for a existing remote
- \ "alias"
- 2 / Amazon Drive
- \ "amazon cloud drive"
- 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
- \ "s3"
- 4 / Backblaze B2
- \ "b2"
- 5 / Box
- \ "box"
- 6 / Builds a stackable unification remote, which can appear to merge the contents of several remotes
- \ "union"
- 7 / Cache a remote
- \ "cache"
- 8 / Dropbox
- \ "dropbox"
- 9 / Encrypt/Decrypt a remote
- \ "crypt"
- 10 / FTP Connection
- \ "ftp"
- 11 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 12 / Google Drive
- \ "drive"
- 13 / Hubic
- \ "hubic"
- 14 / JottaCloud
- \ "jottacloud"
- 15 / Local Disk
- \ "local"
- 16 / Mega
- \ "mega"
- 17 / Microsoft Azure Blob Storage
- \ "azureblob"
- 18 / Microsoft OneDrive
- \ "onedrive"
- 19 / OpenDrive
- \ "opendrive"
- 20 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 21 / Pcloud
- \ "pcloud"
- 22 / QingCloud Object Storage
- \ "qingstor"
- 23 / SSH/SFTP Connection
- \ "sftp"
- 24 / Webdav
- \ "webdav"
- 25 / Yandex Disk
- \ "yandex"
- 26 / http Connection
- \ "http"
- Storage> union
- List of space separated remotes.
- Can be 'remotea:test/dir remoteb:', '"remotea:test/space dir" remoteb:', etc.
- The last remote is used to write to.
- Enter a string value. Press Enter for the default ("").
- remotes>
- Remote config
- --------------------
- [remote]
- type = union
- remotes = C:\dir1 C:\dir2 C:\dir3
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- Current remotes:
- Name Type
- ==== ====
- remote union
- e) Edit existing remote
- n) New remote
- d) Delete remote
- r) Rename remote
- c) Copy remote
- s) Set configuration password
- q) Quit config
- e/n/d/r/c/s/q> q
- Once configured you can then use rclone like this,
- List directories in top level in C:\dir1, C:\dir2 and C:\dir3
- rclone lsd remote:
- List all the files in C:\dir1, C:\dir2 and C:\dir3
- rclone ls remote:
- Copy another local directory to the union directory called source, which
- will be placed into C:\dir3
- rclone copy C:\source remote:source
- Standard Options
- Here are the standard options specific to union (A stackable unification
- remote, which can appear to merge the contents of several remotes).
- --union-remotes
- List of space separated remotes. Can be 'remotea:test/dir remoteb:',
- '"remotea:test/space dir" remoteb:', etc. The last remote is used to
- write to.
- - Config: remotes
- - Env Var: RCLONE_UNION_REMOTES
- - Type: string
- - Default: ""
- WebDAV
- Paths are specified as remote:path
- Paths may be as deep as required, eg remote:directory/subdirectory.
- To configure the WebDAV remote you will need to have a URL for it, and a
- username and password. If you know what kind of system you are
- connecting to then rclone can enable extra features.
- Here is an example of how to make a remote called remote. First run:
- rclone config
- This will guide you through an interactive setup process:
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- q) Quit config
- n/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- [snip]
- 22 / Webdav
- \ "webdav"
- [snip]
- Storage> webdav
- URL of http host to connect to
- Choose a number from below, or type in your own value
- 1 / Connect to example.com
- \ "https://example.com"
- url> https://example.com/remote.php/webdav/
- Name of the Webdav site/service/software you are using
- Choose a number from below, or type in your own value
- 1 / Nextcloud
- \ "nextcloud"
- 2 / Owncloud
- \ "owncloud"
- 3 / Sharepoint
- \ "sharepoint"
- 4 / Other site/service or software
- \ "other"
- vendor> 1
- User name
- user> user
- Password.
- y) Yes type in my own password
- g) Generate random password
- n) No leave this optional password blank
- y/g/n> y
- Enter the password:
- password:
- Confirm the password:
- password:
- Bearer token instead of user/pass (eg a Macaroon)
- bearer_token>
- Remote config
- --------------------
- [remote]
- type = webdav
- url = https://example.com/remote.php/webdav/
- vendor = nextcloud
- user = user
- pass = *** ENCRYPTED ***
- bearer_token =
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- Once configured you can then use rclone like this,
- List directories in top level of your WebDAV
- rclone lsd remote:
- List all the files in your WebDAV
- rclone ls remote:
- To copy a local directory to a WebDAV directory called backup
- rclone copy /home/source remote:backup
- Modified time and hashes
- Plain WebDAV does not support modified times. However when used with
- Owncloud or Nextcloud rclone will support modified times.
- Hashes are not supported.
- Standard Options
- Here are the standard options specific to webdav (Webdav).
- --webdav-url
- URL of http host to connect to
- - Config: url
- - Env Var: RCLONE_WEBDAV_URL
- - Type: string
- - Default: ""
- - Examples:
- - "https://example.com"
- - Connect to example.com
- --webdav-vendor
- Name of the Webdav site/service/software you are using
- - Config: vendor
- - Env Var: RCLONE_WEBDAV_VENDOR
- - Type: string
- - Default: ""
- - Examples:
- - "nextcloud"
- - Nextcloud
- - "owncloud"
- - Owncloud
- - "sharepoint"
- - Sharepoint
- - "other"
- - Other site/service or software
- --webdav-user
- User name
- - Config: user
- - Env Var: RCLONE_WEBDAV_USER
- - Type: string
- - Default: ""
- --webdav-pass
- Password.
- - Config: pass
- - Env Var: RCLONE_WEBDAV_PASS
- - Type: string
- - Default: ""
- --webdav-bearer-token
- Bearer token instead of user/pass (eg a Macaroon)
- - Config: bearer_token
- - Env Var: RCLONE_WEBDAV_BEARER_TOKEN
- - Type: string
- - Default: ""
- Provider notes
- See below for notes on specific providers.
- Owncloud
- Click on the settings cog in the bottom right of the page and this will
- show the WebDAV URL that rclone needs in the config step. It will look
- something like https://example.com/remote.php/webdav/.
- Owncloud supports modified times using the X-OC-Mtime header.
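- A finished Owncloud config entry will look something like this (the
- URL and credentials are placeholders):
- [owncloud]
- type = webdav
- url = https://example.com/remote.php/webdav/
- vendor = owncloud
- user = YourUserName
- pass = encryptedpassword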
- Nextcloud
- This is configured in an identical way to Owncloud. Note that Nextcloud
- does not support streaming of files (rcat) whereas Owncloud does. This
- may be fixed in the future.
- Put.io
- put.io can be accessed in a read only way using webdav.
- Configure the url as https://webdav.put.io and use your normal account
- username and password for user and pass. Set the vendor to other.
- Your config file should end up looking like this:
- [putio]
- type = webdav
- url = https://webdav.put.io
- vendor = other
- user = YourUserName
- pass = encryptedpassword
- If you are using put.io with rclone mount then use the --read-only flag
- to signal to the OS that it can't write to the mount.
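- For example (the mount point is a placeholder):
- rclone mount --read-only putio: /mnt/putio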
- For more help see the put.io webdav docs.
- Sharepoint
- Rclone can be used with Sharepoint provided by OneDrive for Business
- or Office365 Education accounts. This feature is only needed for a few
- of these accounts, mostly Office365 Education ones. These accounts are
- sometimes not verified by the domain owner (github#1975).
- This means that these accounts can't be added using the official API
- (other accounts should work with the "onedrive" option). However, it is
- possible to access them using webdav.
- To use a sharepoint remote with rclone, add it like this: First, you
- need to get your remote's URL:
- - Go here to open your OneDrive or to sign in
- - Now take a look at your address bar, the URL should look like this:
- https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/_layouts/15/onedrive.aspx
- You'll only need this URL up to the email address. After that, you'll
- most likely want to add "/Documents". That subdirectory contains the
- actual data stored on your OneDrive.
- Add the remote to rclone like this: Configure the url as
- https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
- and use your normal account email and password for user and pass. If you
- have 2FA enabled, you have to generate an app password. Set the vendor
- to sharepoint.
- Your config file should look like this:
- [sharepoint]
- type = webdav
- url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
- vendor = sharepoint
- user = YourEmailAddress
- pass = encryptedpassword
- dCache
- dCache is a storage system with WebDAV doors that support, besides
- basic and x509, authentication with Macaroons (bearer tokens).
- Configure as normal using the other type. Don't enter a username or
- password, instead enter your Macaroon as the bearer_token.
- The config will end up looking something like this.
- [dcache]
- type = webdav
- url = https://dcache...
- vendor = other
- user =
- pass =
- bearer_token = your-macaroon
- There is a script that obtains a Macaroon from a dCache WebDAV endpoint,
- and creates an rclone config file.
- Yandex Disk
- Yandex Disk is a cloud storage solution created by Yandex.
- Yandex paths may be as deep as required, eg
- remote:directory/subdirectory.
- Here is an example of making a yandex configuration. First run
- rclone config
- This will guide you through an interactive setup process:
- No remotes found - make a new one
- n) New remote
- s) Set configuration password
- n/s> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 7 / Google Drive
- \ "drive"
- 8 / Hubic
- \ "hubic"
- 9 / Local Disk
- \ "local"
- 10 / Microsoft OneDrive
- \ "onedrive"
- 11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 12 / SSH/SFTP Connection
- \ "sftp"
- 13 / Yandex Disk
- \ "yandex"
- Storage> 13
- Yandex Client Id - leave blank normally.
- client_id>
- Yandex Client Secret - leave blank normally.
- client_secret>
- Remote config
- Use auto config?
- * Say Y if not sure
- * Say N if you are working on a remote or headless machine
- y) Yes
- n) No
- y/n> y
- If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
- Log in and authorize rclone for access
- Waiting for code...
- Got code
- --------------------
- [remote]
- client_id =
- client_secret =
- token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
- See the remote setup docs for how to set it up on a machine with no
- Internet browser available.
- Note that rclone runs a webserver on your local machine to collect the
- token as returned from Yandex Disk. This only runs from the moment it
- opens your browser to the moment you get back the verification code.
- This is on http://127.0.0.1:53682/ and it may require you to
- unblock it temporarily if you are running a host firewall.
- Once configured you can then use rclone like this,
- See top level directories
- rclone lsd remote:
- Make a new directory
- rclone mkdir remote:directory
- List the contents of a directory
- rclone ls remote:directory
- Sync /home/local/directory to the remote path, deleting any excess files
- in the path.
- rclone sync /home/local/directory remote:directory
- --fast-list
- This remote supports --fast-list which allows you to use fewer
- transactions in exchange for more memory. See the rclone docs for more
- details.
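- For example:
- rclone ls --fast-list remote: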
- Modified time
- Modified times are supported and are stored accurate to 1 ns in custom
- metadata called rclone_modified in RFC3339 with nanoseconds format.
- MD5 checksums
- MD5 checksums are natively supported by Yandex Disk.
- Emptying Trash
- If you wish to empty your trash you can use the rclone cleanup remote:
- command which will permanently delete all your trashed files. This
- command does not take any path arguments.
- Standard Options
- Here are the standard options specific to yandex (Yandex Disk).
- --yandex-client-id
- Yandex Client Id. Leave blank normally.
- - Config: client_id
- - Env Var: RCLONE_YANDEX_CLIENT_ID
- - Type: string
- - Default: ""
- --yandex-client-secret
- Yandex Client Secret. Leave blank normally.
- - Config: client_secret
- - Env Var: RCLONE_YANDEX_CLIENT_SECRET
- - Type: string
- - Default: ""
- Local Filesystem
- Local paths are specified as normal filesystem paths, eg
- /path/to/wherever, so
- rclone sync /home/source /tmp/destination
- Will sync /home/source to /tmp/destination
- These can be configured in the config file for consistency's sake, but
- it is probably easier not to.
- Modified time
- Rclone reads and writes the modified time using an accuracy determined
- by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1
- second on OS X.
- Filenames
- Filenames are expected to be encoded in UTF-8 on disk. This is the
- normal case for Windows and OS X.
- There is a bit more uncertainty in the Linux world, but new
- distributions will have UTF-8 encoded file names. If you are using an
- old Linux filesystem with non UTF-8 file names (eg latin1) then you can
- use the convmv tool to convert the filesystem to UTF-8. This tool is
- available in most distributions' package managers.
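- For example, to convert a tree of latin1 file names to UTF-8 (a
- sketch; convmv does a dry run by default, --notest performs the
- rename):
- convmv -f latin1 -t utf8 -r --notest /path/to/files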
- If an invalid (non-UTF8) filename is read, the invalid characters will
- be replaced with the unicode replacement character, '�'. rclone will
- emit a debug message in this case (use -v to see), eg
- Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
- Long paths on Windows
- Rclone handles long paths automatically, by converting all paths to long
- UNC paths which allows paths up to 32,767 characters.
- This is why you will see that your paths, for instance c:\files, are
- converted to the UNC path \\?\c:\files in the output, and \\server\share
- is converted to \\?\UNC\server\share.
- However, in rare cases this may cause problems with buggy file system
- drivers like EncFS. To disable UNC conversion globally, add this to your
- .rclone.conf file:
- [local]
- nounc = true
- If you want to selectively disable UNC, you can add it to a separate
- entry like this:
- [nounc]
- type = local
- nounc = true
- And use rclone like this:
- rclone copy c:\src nounc:z:\dst
- This will use UNC paths on c:\src but not on z:\dst. Of course this will
- cause problems if the absolute path length of a file exceeds 258
- characters on z, so only use this option if you have to.
- Symlinks / Junction points
- Normally rclone will ignore symlinks or junction points (which behave
- like symlinks under Windows).
- If you supply --copy-links or -L then rclone will follow the symlink and
- copy the pointed to file or directory.
- This flag applies to all commands.
- For example, supposing you have a directory structure like this
- $ tree /tmp/a
- /tmp/a
- ├── b -> ../b
- ├── expected -> ../expected
- ├── one
- └── two
- └── three
- Then you can see the difference with and without the flag like this
- $ rclone ls /tmp/a
- 6 one
- 6 two/three
- and
- $ rclone -L ls /tmp/a
- 4174 expected
- 6 one
- 6 two/three
- 6 b/two
- 6 b/one
- Restricting filesystems with --one-file-system
- Normally rclone will recurse through filesystems as mounted.
- However if you set --one-file-system or -x this tells rclone to stay in
- the filesystem specified by the root and not to recurse into different
- file systems.
- For example if you have a directory hierarchy like this
- root
- ├── disk1 - disk1 mounted on the root
- │ └── file3 - stored on disk1
- ├── disk2 - disk2 mounted on the root
- │ └── file4 - stored on disk2
- ├── file1 - stored on the root disk
- └── file2 - stored on the root disk
- Using rclone --one-file-system copy root remote: will only copy file1
- and file2. Eg
- $ rclone -q --one-file-system ls root
- 0 file1
- 0 file2
- $ rclone -q ls root
- 0 disk1/file3
- 0 disk2/file4
- 0 file1
- 0 file2
- NB Rclone (like most unix tools such as du, rsync and tar) treats a bind
- mount to the same device as being on the same filesystem.
- NB This flag is only available on Unix based systems. On systems where
- it isn't supported (eg Windows) it will be ignored.
- Standard Options
- Here are the standard options specific to local (Local Disk).
- --local-nounc
- Disable UNC (long path names) conversion on Windows
- - Config: nounc
- - Env Var: RCLONE_LOCAL_NOUNC
- - Type: string
- - Default: ""
- - Examples:
- - "true"
- - Disables long file names
- Advanced Options
- Here are the advanced options specific to local (Local Disk).
- --copy-links
- Follow symlinks and copy the pointed to item.
- - Config: copy_links
- - Env Var: RCLONE_LOCAL_COPY_LINKS
- - Type: bool
- - Default: false
- --skip-links
- Don't warn about skipped symlinks. This flag disables warning messages
- on skipped symlinks or junction points, as you explicitly acknowledge
- that they should be skipped.
- - Config: skip_links
- - Env Var: RCLONE_LOCAL_SKIP_LINKS
- - Type: bool
- - Default: false
- --local-no-unicode-normalization
- Don't apply unicode normalization to paths and filenames (Deprecated)
- This flag is deprecated now. Rclone no longer normalizes unicode file
- names, but it compares them with unicode normalization in the sync
- routine instead.
- - Config: no_unicode_normalization
- - Env Var: RCLONE_LOCAL_NO_UNICODE_NORMALIZATION
- - Type: bool
- - Default: false
- --local-no-check-updated
- Don't check to see if the files change during upload
- Normally rclone checks the size and modification time of files as they
- are being uploaded and aborts with a message which starts "can't copy -
- source file is being updated" if the file changes during upload.
- However on some file systems this modification time check may fail (eg
- Glusterfs #2206) so this check can be disabled with this flag.
- - Config: no_check_updated
- - Env Var: RCLONE_LOCAL_NO_CHECK_UPDATED
- - Type: bool
- - Default: false
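- For example (an illustrative copy from a Glusterfs mount; the paths
- are placeholders):
- rclone copy --local-no-check-updated /mnt/gluster/dir remote:dir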
- --one-file-system
- Don't cross filesystem boundaries (unix/macOS only).
- - Config: one_file_system
- - Env Var: RCLONE_LOCAL_ONE_FILE_SYSTEM
- - Type: bool
- - Default: false
- CHANGELOG
- v1.44 - 2018-10-15
- - New commands
- - serve ftp: Add ftp server (Antoine GIRARD)
- - settier: perform storage tier changes on supported remotes
- (sandeepkru)
- - New Features
- - Reworked command line help
- - Make default help less verbose (Nick Craig-Wood)
- - Split flags up into global and backend flags (Nick
- Craig-Wood)
- - Implement specialised help for flags and backends (Nick
- Craig-Wood)
- - Show URL of backend help page when starting config (Nick
- Craig-Wood)
- - stats: Long names now split in center (Joanna Marek)
- - Add --log-format flag for more control over log output (dcpu)
- - rc: Add support for OPTIONS and basic CORS (frenos)
- - stats: show FatalErrors and NoRetryErrors in stats (Cédric
- Connes)
- - Bug Fixes
- - Fix -P not ending with a new line (Nick Craig-Wood)
- - config: don't create default config dir when user supplies
- --config (albertony)
- - Don't print non-ASCII characters with --progress on windows
- (Nick Craig-Wood)
- - Correct logs for excluded items (ssaqua)
- - Mount
- - Remove EXPERIMENTAL tags (Nick Craig-Wood)
- - VFS
- - Fix race condition detected by serve ftp tests (Nick Craig-Wood)
- - Add vfs/poll-interval rc command (Fabian Möller)
- - Enable rename for nearly all remotes using server side Move or
- Copy (Nick Craig-Wood)
- - Reduce directory cache cleared by poll-interval (Fabian Möller)
- - Remove EXPERIMENTAL tags (Nick Craig-Wood)
- - Local
- - Skip bad symlinks in dir listing with -L enabled (Cédric Connes)
- - Preallocate files on Windows to reduce fragmentation (Nick
- Craig-Wood)
- - Preallocate files on linux with fallocate(2) (Nick Craig-Wood)
- - Cache
- - Add cache/fetch rc function (Fabian Möller)
- - Fix worker scale down (Fabian Möller)
- - Improve performance by not sending info requests for cached
- chunks (dcpu)
- - Fix error return value of cache/fetch rc method (Fabian Möller)
- - Documentation fix for cache-chunk-total-size (Anagh Kumar
- Baranwal)
- - Preserve leading / in wrapped remote path (Fabian Möller)
- - Add plex_insecure option to skip certificate validation (Fabian
- Möller)
- - Remove entries that no longer exist in the source (dcpu)
- - Crypt
- - Preserve leading / in wrapped remote path (Fabian Möller)
- - Alias
- - Fix handling of Windows network paths (Nick Craig-Wood)
- - Azure Blob
- - Add --azureblob-list-chunk parameter (Santiago Rodríguez)
- - Implemented settier command support on azureblob remote.
- (sandeepkru)
- - Work around SDK bug which causes errors for chunk-sized files
- (Nick Craig-Wood)
- - Box
- - Implement link sharing. (Sebastian Bünger)
- - Drive
- - Add --drive-import-formats - google docs can now be imported
- (Fabian Möller)
- - Rewrite mime type and extension handling (Fabian Möller)
- - Add document links (Fabian Möller)
- - Add support for multipart document extensions (Fabian
- Möller)
- - Add support for apps-script to json export (Fabian Möller)
- - Fix escaped chars in documents during list (Fabian Möller)
- - Add --drive-v2-download-min-size a workaround for slow downloads
- (Fabian Möller)
- - Improve directory notifications in ChangeNotify (Fabian Möller)
- - When listing team drives in config, continue on failure (Nick
- Craig-Wood)
- - FTP
- - Add a small pause after failed upload before deleting file (Nick
- Craig-Wood)
- - Google Cloud Storage
- - Fix service_account_file being ignored (Fabian Möller)
- - Jottacloud
- - Minor improvement in quota info (omit if unlimited) (albertony)
- - Add --fast-list support (albertony)
- - Add permanent delete support: --jottacloud-hard-delete
- (albertony)
- - Add link sharing support (albertony)
- - Fix handling of reserved characters. (Sebastian Bünger)
- - Fix socket leak on Object.Remove (Nick Craig-Wood)
- - Onedrive
- - Rework to support Microsoft Graph (Cnly)
- - NB this will require re-authenticating the remote
- - Removed upload cutoff and always do session uploads (Oliver
- Heyme)
- - Use single-part upload for empty files (Cnly)
- - Fix new fields not saved when editing old config (Alex Chen)
- - Fix sometimes special chars in filenames not replaced (Alex
- Chen)
- - Ignore OneNote files by default (Alex Chen)
- - Add link sharing support (jackyzy823)
- - S3
- - Use custom pacer, to retry operations when reasonable (Craig
- Miskell)
- Use configured server-side-encryption and storage class options
- when calling CopyObject() (Paul Kohout)
- - Make --s3-v2-auth flag (Nick Craig-Wood)
- - Fix v2 auth on files with spaces (Nick Craig-Wood)
- - Union
- - Implement union backend which reads from multiple backends
- (Felix Brucker)
- - Implement optional interfaces (Move, DirMove, Copy etc) (Nick
- Craig-Wood)
- - Fix ChangeNotify to support multiple remotes (Fabian Möller)
- - Fix --backup-dir on union backend (Nick Craig-Wood)
- - WebDAV
- - Add another time format (Nick Craig-Wood)
- - Add a small pause after failed upload before deleting file (Nick
- Craig-Wood)
- - Add workaround for missing mtime (buergi)
- - Sharepoint: Renew cookies after 12hrs (Henning Surmeier)
- - Yandex
- - Remove redundant nil checks (teresy)
- v1.43.1 - 2018-09-07
- Point release to fix hubic and azureblob backends.
- - Bug Fixes
- - ncdu: Return error instead of log.Fatal in Show (Fabian Möller)
- - cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood)
- - docs: Tidy website display (Anagh Kumar Baranwal)
- - Azure Blob:
- - Fix multi-part uploads. (sandeepkru)
- - Hubic
- - Fix uploads (Nick Craig-Wood)
- - Retry auth fetching if it fails to make hubic more reliable
- (Nick Craig-Wood)
- v1.43 - 2018-09-01
- - New backends
- - Jottacloud (Sebastian Bünger)
- - New commands
- - copyurl: copies a URL to a remote (Denis)
- - New Features
- - Reworked config for backends (Nick Craig-Wood)
- - All backend config can now be supplied by command line, env
- var or config file
- - Advanced section in the config wizard for the optional items
- - A large step towards rclone backends being usable in other
- go software
- - Allow on the fly remotes with :backend: syntax
- - Stats revamp
- - Add --progress/-P flag to show interactive progress (Nick
- Craig-Wood)
- - Show the total progress of the sync in the stats (Nick
- Craig-Wood)
- - Add --stats-one-line flag for single line stats (Nick
- Craig-Wood)
- - Added weekday schedule into --bwlimit (Mateusz)
- - lsjson: Add option to show the original object IDs (Fabian
- Möller)
- - serve webdav: Make Content-Type without reading the file and add
- --etag-hash (Nick Craig-Wood)
- - build
- - Build macOS with native compiler (Nick Craig-Wood)
- - Update to use go1.11 for the build (Nick Craig-Wood)
- - rc
- - Added core/stats to return the stats (reddi1)
- - version --check: Prints the current release and beta versions
- (Nick Craig-Wood)
- - Bug Fixes
- - accounting
- - Fix time to completion estimates (Nick Craig-Wood)
- - Fix moving average speed for file stats (Nick Craig-Wood)
- - config: Fix error reading password from piped input (Nick
- Craig-Wood)
- - move: Fix --delete-empty-src-dirs flag to delete all empty dirs
- on move (ishuah)
- - Mount
- - Implement --daemon-timeout flag for OSXFUSE (Nick Craig-Wood)
- - Fix mount --daemon not working with encrypted config (Alex Chen)
- - Clip the number of blocks to 2^32-1 on macOS - fixes borg backup
- (Nick Craig-Wood)
- - VFS
- - Enable vfs-read-chunk-size by default (Fabian Möller)
- - Add the vfs/refresh rc command (Fabian Möller)
- - Add non recursive mode to vfs/refresh rc command (Fabian Möller)
- - Try to seek buffer on read only files (Fabian Möller)
- - Local
- - Fix crash when deprecated --local-no-unicode-normalization is
- supplied (Nick Craig-Wood)
- - Fix mkdir error when trying to copy files to the root of a drive
- on windows (Nick Craig-Wood)
- - Cache
- - Fix nil pointer deref when using lsjson on cached directory
- (Nick Craig-Wood)
- - Fix nil pointer deref for occasional crash on playback (Nick
- Craig-Wood)
- - Crypt
- - Fix accounting when checking hashes on upload (Nick Craig-Wood)
- - Amazon Cloud Drive
- - Make very clear in the docs that rclone has no ACD keys (Nick
- Craig-Wood)
- - Azure Blob
- - Add connection string and SAS URL auth (Nick Craig-Wood)
- - List the container to see if it exists (Nick Craig-Wood)
- - Port new Azure Blob Storage SDK (sandeepkru)
- Added blob tier support, tiering between Hot, Cool and Archive.
- (sandeepkru)
- - Remove leading / from paths (Nick Craig-Wood)
- - B2
- - Support Application Keys (Nick Craig-Wood)
- - Remove leading / from paths (Nick Craig-Wood)
- - Box
- - Fix upload of > 2GB files on 32 bit platforms (Nick Craig-Wood)
- Make --box-commit-retries flag default to 100 to fix large
- uploads (Nick Craig-Wood)
- - Drive
- - Add --drive-keep-revision-forever flag (lewapm)
- - Handle gdocs when filtering file names in list (Fabian Möller)
- - Support using --fast-list for large speedups (Fabian Möller)
- - FTP
- - Fix Put mkParentDir failed: 521 for BunnyCDN (Nick Craig-Wood)
- - Google Cloud Storage
- - Fix index out of range error with --fast-list (Nick Craig-Wood)
- - Jottacloud
- - Fix MD5 error check (Oliver Heyme)
- - Handle empty time values (Martin Polden)
- - Calculate missing MD5s (Oliver Heyme)
- - Docs, fixes and tests for MD5 calculation (Nick Craig-Wood)
- - Add optional MimeTyper interface. (Sebastian Bünger)
- - Implement optional About interface (for df support). (Sebastian
- Bünger)
- - Mega
- - Wait for events instead of arbitrary sleeping (Nick Craig-Wood)
- - Add --mega-hard-delete flag (Nick Craig-Wood)
- - Fix failed logins with upper case chars in email (Nick
- Craig-Wood)
- - Onedrive
- - Shared folder support (Yoni Jah)
- - Implement DirMove (Cnly)
- - Fix rmdir sometimes deleting directories with contents (Nick
- Craig-Wood)
- - Pcloud
- - Delete half uploaded files on upload error (Nick Craig-Wood)
- - Qingstor
- - Remove leading / from paths (Nick Craig-Wood)
- - S3
- - Fix index out of range error with --fast-list (Nick Craig-Wood)
- - Add --s3-force-path-style (Nick Craig-Wood)
- - Add support for KMS Key ID (bsteiss)
- - Remove leading / from paths (Nick Craig-Wood)
- - Swift
- - Add storage_policy (Ruben Vandamme)
- Make it so just storage_url or auth_token can be overridden (Nick
- Craig-Wood)
- Fix server side copy bug for unusual file names (Nick Craig-Wood)
- - Remove leading / from paths (Nick Craig-Wood)
- - WebDAV
- - Ensure we call MKCOL with a URL with a trailing / for QNAP
- interop (Nick Craig-Wood)
- - If root ends with / then don't check if it is a file (Nick
- Craig-Wood)
- - Don't accept redirects when reading metadata (Nick Craig-Wood)
- - Add bearer token (Macaroon) support for dCache (Nick Craig-Wood)
- - Document dCache and Macaroons (Onno Zweers)
- - Sharepoint recursion with different depth (Henning)
- - Attempt to remove failed uploads (Nick Craig-Wood)
- - Yandex
- - Fix listing/deleting files in the root (Nick Craig-Wood)
- v1.42 - 2018-06-16
- - New backends
- - OpenDrive (Oliver Heyme, Jakub Karlicek, ncw)
- - New commands
- - deletefile command (Filip Bartodziej)
- - New Features
- - copy, move: Copy single files directly, don't use --files-from
- work-around
- - this makes them much more efficient
- - Implement --max-transfer flag to quit transferring at a limit
- - make exit code 8 for --max-transfer exceeded
- - copy: copy empty source directories to destination (Ishuah
- Kariuki)
- - check: Add --one-way flag (Kasper Byrdal Nielsen)
- - Add siginfo handler for macOS for ctrl-T stats (kubatasiemski)
- - rc
- - add core/gc to run a garbage collection on demand
- - enable go profiling by default on the --rc port
- - return error from remote on failure
- - lsf
- - Add --absolute flag to add a leading / onto path names
- - Add --csv flag for compliant CSV output
- - Add 'm' format specifier to show the MimeType
- - Implement 'i' format for showing object ID
- - lsjson
- - Add MimeType to the output
- - Add ID field to output to show Object ID
- - Add --retries-sleep flag (Benjamin Joseph Dag)
- - Oauth tidy up web page and error handling (Henning Surmeier)
- - Bug Fixes
- - Password prompt output with --log-file fixed for unix (Filip
- Bartodziej)
- - Calculate ModifyWindow each time on the fly to fix various
- problems (Stefan Breunig)
- - Mount
- - Only print "File.rename error" if there actually is an error
- (Stefan Breunig)
- - Delay rename if file has open writers instead of failing
- outright (Stefan Breunig)
- - Ensure atexit gets run on interrupt
- - macOS enhancements
- - Make --noappledouble --noapplexattr
- - Add --volname flag and remove special chars from it
- - Make Get/List/Set/Remove xattr return ENOSYS for efficiency
- - Make --daemon work for macOS without CGO
- - VFS
- - Add --vfs-read-chunk-size and --vfs-read-chunk-size-limit
- (Fabian Möller)
- - Fix ChangeNotify for new or changed folders (Fabian Möller)
- - Local
- - Fix symlink/junction point directory handling under Windows
- - NB you will need to add -L to your command line to copy
- files with reparse points
- - Cache
- - Add non cached dirs on notifications (Remus Bunduc)
- - Allow root to be expired from rc (Remus Bunduc)
- - Clean remaining empty folders from temp upload path (Remus
- Bunduc)
- - Cache lists using batch writes (Remus Bunduc)
- - Use secure websockets for HTTPS Plex addresses (John Clayton)
- - Reconnect plex websocket on failures (Remus Bunduc)
- - Fix panic when running without plex configs (Remus Bunduc)
- - Fix root folder caching (Remus Bunduc)
- - Crypt
- - Check the crypted hash of files when uploading for extra data
- security
- - Dropbox
- - Make Dropbox for business folders accessible using an initial /
- in the path
- - Google Cloud Storage
- - Low level retry all operations if necessary
- - Google Drive
- - Add --drive-acknowledge-abuse to download flagged files
- - Add --drive-alternate-export to fix large doc export
- - Don't attempt to choose Team Drives when using rclone config
- create
- - Fix change list polling with team drives
- - Fix ChangeNotify for folders (Fabian Möller)
- - Fix about (and df on a mount) for team drives
- - Onedrive
- - Errorhandler for onedrive for business requests (Henning
- Surmeier)
- - S3
- - Adjust upload concurrency with --s3-upload-concurrency
- (themylogin)
- - Fix --s3-chunk-size which was always using the minimum
- - SFTP
- - Add --ssh-path-override flag (Piotr Oleszczyk)
- - Fix slow downloads for long latency connections
- - Webdav
- - Add workarounds for biz.mail.ru
- - Ignore Reason-Phrase in status line to fix 4shared (Rodrigo)
- - Better error message generation
- v1.41 - 2018-04-28
- - New backends
- - Mega support added
- - Webdav now supports SharePoint cookie authentication (hensur)
- - New commands
- - link: create public link to files and folders (Stefan Breunig)
- - about: gets quota info from a remote (a-roussos, ncw)
- - hashsum: a generic tool for any hash to produce md5sum like
- output
- - New Features
- - lsd: Add -R flag and fix and update docs for all ls commands
- - ncdu: added a "refresh" key - CTRL-L (Keith Goldfarb)
- - serve restic: Add append-only mode (Steve Kriss)
- - serve restic: Disallow overwriting files in append-only mode
- (Alexander Neumann)
- - serve restic: Print actual listener address (Matt Holt)
- - size: Add --json flag (Matthew Holt)
- - sync: implement --ignore-errors (Mateusz Pabian)
- - dedupe: Add dedupe largest functionality (Richard Yang)
- - fs: Extend SizeSuffix to include TB and PB for rclone about
- - fs: add --dump goroutines and --dump openfiles for debugging
- - rc: implement core/memstats to print internal memory usage info
- - rc: new call rc/pid (Michael P. Dubner)
- - Compile
- - Drop support for go1.6
- - Release
- - Fix make tarball (Chih-Hsuan Yen)
- - Bug Fixes
- - filter: fix --min-age and --max-age together check
- - fs: limit MaxIdleConns and MaxIdleConnsPerHost in transport
- - lsd,lsf: make sure all times we output are in local time
- - rc: fix setting bwlimit to unlimited
- - rc: take note of the --rc-addr flag too as per the docs
- - Mount
- - Use About to return the correct disk total/used/free (eg in df)
- - Set --attr-timeout default to 1s - fixes:
- - rclone using too much memory
- - rclone not serving files to samba
- - excessive time listing directories
- - Fix df -i (upstream fix)
- - VFS
- - Filter files . and .. from directory listing
- - Only make the VFS cache if --vfs-cache-mode > Off
- - Local
- - Add --local-no-check-updated to disable updated file checks
- - Retry remove on Windows sharing violation error
- - Cache
- - Flush the memory cache after close
- - Purge file data on notification
- - Always forget parent dir for notifications
- - Integrate with Plex websocket
- - Add rc cache/stats (seuffert)
- - Add info log on notification
- - Box
- - Fix failure reading large directories - parse file/directory
- size as float
- - Dropbox
- - Fix crypt+obfuscate on dropbox
- - Fix repeatedly uploading the same files
- - FTP
- - Work around strange response from box FTP server
- - More workarounds for FTP servers to fix mkParentDir error
- - Fix no error on listing non-existent directory
- - Google Cloud Storage
- - Add service_account_credentials (Matt Holt)
- - Detect bucket presence by listing it - minimises permissions
- needed
- - Ignore zero length directory markers
- - Google Drive
- - Add service_account_credentials (Matt Holt)
- - Fix directory move leaving a hardlinked directory behind
- - Return proper google errors when Opening files
- - When initialized with a filepath, optional features used
- incorrect root path (Stefan Breunig)
- - HTTP
- - Fix sync for servers which don't return Content-Length in HEAD
- - Onedrive
- - Add QuickXorHash support for OneDrive for business
- - Fix socket leak in multipart session upload
- - S3
- - Look in S3 named profile files for credentials
- - Add --s3-disable-checksum to disable checksum uploading (Chris
- Redekop)
- - Hierarchical configuration support (Giri Badanahatti)
- - Add in config for all the supported S3 providers
- - Add One Zone Infrequent Access storage class (Craig Rachel)
- - Add --use-server-modtime support (Peter Baumgartner)
- - Add --s3-chunk-size option to control multipart uploads
- - Ignore zero length directory markers
- - SFTP
- - Update docs to match code, fix typos and clarify
- disable_hashcheck prompt (Michael G. Noll)
- - Update docs with Synology quirks
- - Fail soft with a debug on hash failure
- - Swift
- - Add --use-server-modtime support (Peter Baumgartner)
- - Webdav
- - Support SharePoint cookie authentication (hensur)
- - Strip leading and trailing / off root
- v1.40 - 2018-03-19
- - New backends
- - Alias backend to create aliases for existing remote names
- (Fabian Möller)
- - New commands
- - lsf: list for parsing purposes (Jakub Tasiemski)
- - by default this is a simple non recursive list of files and
- directories
- - it can be configured to add more info in an easy to parse
- way
- - serve restic: for serving a remote as a Restic REST endpoint
- - This enables restic to use any backends that rclone can
- access
- - Thanks Alexander Neumann for help, patches and review
- - rc: enable the remote control of a running rclone
- - The running rclone must be started with --rc and related
- flags.
- - Currently there is support for bwlimit, and flushing for
- mount and cache.
- - New Features
- - --max-delete flag to add a delete threshold (Bjørn Erik
- Pedersen)
- - All backends now support RangeOption for ranged Open
- - cat: Use RangeOption for limited fetches to make more
- efficient
- - cryptcheck: make reading of nonce more efficient with
- RangeOption
- - serve http/webdav/restic
- - support SSL/TLS
- - add --user --pass and --htpasswd for authentication
- - copy/move: detect file size change during copy/move and abort
- transfer (ishuah)
- - cryptdecode: added option to return encrypted file names.
- (ishuah)
- - lsjson: add --encrypted to show encrypted name (Jakub Tasiemski)
- - Add --stats-file-name-length to specify the printed file name
- length for stats (Will Gunn)
- - Compile
- - Code base was shuffled and factored
- - backends moved into a backend directory
- - large packages split up
- - See the CONTRIBUTING.md doc for info as to what lives where
- now
- - Update to using go1.10 as the default go version
- - Implement daily full integration tests
- - Release
- - Include a source tarball and sign it and the binaries
- - Sign the git tags as part of the release process
- - Add .deb and .rpm packages as part of the build
- - Make a beta release for all branches on the main repo (but not
- pull requests)
- - Bug Fixes
- - config: fixes errors on non existing config by loading config
- file only on first access
- - config: retry saving the config after failure (Mateusz)
- - sync: when using --backup-dir don't delete files if we can't set
- their modtime
- - this fixes odd behaviour with Dropbox and --backup-dir
- - fshttp: fix idle timeouts for HTTP connections
- - serve http: fix serving files with : in - fixes
- - Fix --exclude-if-present to ignore directories which it doesn't
- have permission for (Iakov Davydov)
- - Make accounting work properly with crypt and b2
- - remove --no-traverse flag because it is obsolete
- - Mount
- - Add --attr-timeout flag to control attribute caching in kernel
- - this now defaults to 0 which is correct but less efficient
- - see the mount docs for more info
- - Add --daemon flag to allow mount to run in the background
- (ishuah)
- - Fix: Return ENOSYS rather than EIO on attempted link
- - This fixes FileZilla accessing an rclone mount served over
- sftp.
- - Fix setting modtime twice
- - Mount tests now run on CI for Linux (mount & cmount)/Mac/Windows
- - Many bugs fixed in the VFS layer - see below
- - VFS
- - Many fixes for --vfs-cache-mode writes and above
- - Update cached copy if we know it has changed (fixes stale
- data)
- - Clean path names before using them in the cache
- - Disable cache cleaner if --vfs-cache-poll-interval=0
- - Fill and clean the cache immediately on startup
- - Fix Windows opening every file when it stats the file
- - Fix applying modtime for an open Write Handle
- - Fix creation of files when truncating
- - Write 0 bytes when flushing unwritten handles to avoid race
- conditions in FUSE
- - Downgrade "poll-interval is not supported" message to Info
- - Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC
- - Local
- - Downgrade "invalid cross-device link: trying copy" to debug
- - Make DirMove return fs.ErrorCantDirMove to allow fallback to
- Copy for cross device
- - Fix race conditions updating the hashes
- - Cache
- - Add support for polling - cache will update when remote changes
- on supported backends
- - Reduce log level for Plex api
- - Fix dir cache issue
- - Implement --cache-db-wait-time flag
- - Improve efficiency with RangeOption and RangeSeek
- - Fix dirmove with temp fs enabled
- - Notify vfs when using temp fs
- - Offline uploading
- - Remote control support for path flushing
- - Amazon cloud drive
- - Rclone no longer has any working keys - disable integration
- tests
- - Implement DirChangeNotify to notify cache/vfs/mount of changes
- - Azureblob
- Don't check for bucket/container presence if listing was OK
- - this makes rclone do one less request per invocation
- - Improve accounting for chunked uploads
- - Backblaze B2
- Don't check for bucket/container presence if listing was OK
- - this makes rclone do one less request per invocation
- - Box
- - Improve accounting for chunked uploads
- - Dropbox
- - Fix custom oauth client parameters
- - Google Cloud Storage
- Don't check for bucket/container presence if listing was OK
- - this makes rclone do one less request per invocation
- - Google Drive
- - Migrate to api v3 (Fabian Möller)
- - Add scope configuration and root folder selection
- - Add --drive-impersonate for service accounts
- - thanks to everyone who tested, explored and contributed docs
- - Add --drive-use-created-date to use created date as modified
- date (nbuchanan)
- - Request the export formats only when required
- - This makes rclone quicker when there are no google docs
- - Fix finding paths with latin1 chars (a workaround for a drive
- bug)
- - Fix copying of a single Google doc file
- - Fix --drive-auth-owner-only to look in all directories
- - HTTP
- - Fix handling of directories with & in
- - Onedrive
- - Removed upload cutoff and always do session uploads
- - this stops the creation of multiple versions on business
- onedrive
- - Overwrite object size value with real size when reading file.
- (Victor)
- - this fixes oddities when onedrive misreports the size of
- images
- - Pcloud
- - Remove unused chunked upload flag and code
- - Qingstor
- Don't check for bucket/container presence if listing was OK
- - this makes rclone do one less request per invocation
- - S3
- - Support hashes for multipart files (Chris Redekop)
- - Initial support for IBM COS (S3) (Giri Badanahatti)
- - Update docs to discourage use of v2 auth with CEPH and others
- Don't check for bucket/container presence if listing was OK
- - this makes rclone do one less request per invocation
- - Fix server side copy and set modtime on files with + in
- - SFTP
- - Add option to disable remote hash check command execution (Jon
- Fautley)
- - Add --sftp-ask-password flag to prompt for password when needed
- (Leo R. Lundgren)
- - Add set_modtime configuration option
- - Fix following of symlinks
- - Fix reading config file outside of Fs setup
- - Fix reading $USER in username fallback not $HOME
- - Fix running under crontab - Use correct OS way of reading
- username
- - Swift
- - Fix refresh of authentication token
- - in v1.39 a bug was introduced which ignored new tokens -
- this fixes it
- - Fix extra HEAD transaction when uploading a new file
- Don't check for bucket/container presence if listing was OK
- - this makes rclone do one less request per invocation
- - Webdav
- - Add new time formats to support mydrive.ch and others
- v1.39 - 2017-12-23
- - New backends
- - WebDAV
- - tested with nextcloud, owncloud, put.io and others!
- - Pcloud
- - cache - wraps a cache around other backends (Remus Bunduc)
- - useful in combination with mount
- - NB this feature is in beta so use with care
- - New commands
- - serve command with subcommands:
- - serve webdav: this implements a webdav server for any rclone
- remote.
- - serve http: command to serve a remote over HTTP
- - config: add sub commands for full config file management
- - create/delete/dump/edit/file/password/providers/show/update
- - touch: to create or update the timestamp of a file (Jakub
- Tasiemski)
- - New Features
- - curl install for rclone (Filip Bartodziej)
- - --stats now shows percentage, size, rate and ETA in condensed
- form (Ishuah Kariuki)
- - --exclude-if-present to exclude a directory if a file is present
- (Iakov Davydov)
- - rmdirs: add --leave-root flag (lewpam)
- - move: add --delete-empty-src-dirs flag to remove dirs after move
- (Ishuah Kariuki)
- - Add --dump flag, introduce --dump requests, responses and remove
- --dump-auth, --dump-filters
- - Obscure X-Auth-Token: from headers when dumping too
- - Document and implement exit codes for different failure modes
- (Ishuah Kariuki)
- - Compile
- - Bug Fixes
- - Retry lots more different types of errors to make multipart
- transfers more reliable
- - Save the config before asking for a token, fixes disappearing
- oauth config
- - Warn the user if --include and --exclude are used together
- (Ernest Borowski)
- - Fix duplicate files (eg on Google drive) causing spurious copies
- - Allow trailing and leading whitespace for passwords (Jason Rose)
- - ncdu: fix crashes on empty directories
- - rcat: fix goroutine leak
- - moveto/copyto: Fix to allow copying to the same name
- - Mount
- - --vfs-cache mode to make writes into mounts more reliable.
- - this requires caching files on the disk (see --cache-dir)
- - As this is a new feature, use with care
- - Use sdnotify to signal systemd the mount is ready (Fabian
- Möller)
- - Check if directory is not empty before mounting (Ernest
- Borowski)
- - Local
- - Add error message for cross file system moves
- - Fix equality check for times
- - Dropbox
- - Rework multipart upload
- - buffer the chunks when uploading large files so they can be
- retried
- - change default chunk size to 48MB now we are buffering them
- in memory
- - retry every error after the first chunk is done successfully
- - Fix error when renaming directories
- - Swift
- - Fix crash on bad authentication
- - Google Drive
- - Add service account support (Tim Cooijmans)
- - S3
- - Make it work properly with Digital Ocean Spaces (Andrew
- Starr-Bochicchio)
- - Fix crash if a bad listing is received
- - Add support for ECS task IAM roles (David Minor)
- - Backblaze B2
- - Fix multipart upload retries
- - Fix --hard-delete to make it work 100% of the time
- - Swift
- - Allow authentication with storage URL and auth key (Giovanni
- Pizzi)
- - Add new fields for swift configuration to support IBM Bluemix
- Swift (Pierre Carlson)
- - Add OS_TENANT_ID and OS_USER_ID to config
- - Allow configs with user id instead of user name
- - Check if swift segments container exists before creating (John
- Leach)
- - Fix memory leak in swift transfers (upstream fix)
- - SFTP
- - Add option to enable the use of aes128-cbc cipher (Jon Fautley)
- - Amazon cloud drive
- - Fix download of large files failing with "Only one auth
- mechanism allowed"
- - crypt
- - Option to encrypt directory names or leave them intact
- - Implement DirChangeNotify (Fabian Möller)
- - onedrive
- - Add option to choose resourceURL during setup of OneDrive
- Business account if more than one is available for user
- v1.38 - 2017-09-30
- - New backends
- - Azure Blob Storage (thanks Andrei Dragomir)
- - Box
- - Onedrive for Business (thanks Oliver Heyme)
- - QingStor from QingCloud (thanks wuyu)
- - New commands
- - rcat - read from standard input and stream upload
- - tree - shows a nicely formatted recursive listing
- - cryptdecode - decode crypted file names (thanks ishuah)
- - config show - print the config file
- - config file - print the config file location
- - New Features
- - Empty directories are deleted on sync
- - dedupe - implement merging of duplicate directories
- - check and cryptcheck made more consistent and use less memory
- - cleanup for remaining remotes (thanks ishuah)
- - --immutable for ensuring that files don't change (thanks Jacob
- McNamee)
- - --user-agent option (thanks Alex McGrath Kraak)
- - --disable flag to disable optional features
- - --bind flag for choosing the local addr on outgoing connections
- - Support for zsh auto-completion (thanks bpicode)
- - Stop normalizing file names but do a normalized compare in sync
- - Compile
- - Update to using go1.9 as the default go version
- - Remove snapd build due to maintenance problems
- - Bug Fixes
- - Improve retriable error detection which makes multipart uploads
- better
- - Make check obey --ignore-size
- - Fix bwlimit toggle in conjunction with schedules (thanks
- cbruegg)
- - config ensures newly written config is on the same mount
- - Local
- - Revert to copy when moving file across file system boundaries
- - --skip-links to suppress symlink warnings (thanks Zhiming Wang)
- - Mount
- - Re-use rcat internals to support uploads from all remotes
- - Dropbox
- - Fix "entry doesn't belong in directory" error
- - Stop using deprecated API methods
- - Swift
- - Fix server side copy to empty container with --fast-list
- - Google Drive
- - Change the default for --drive-use-trash to true
- - S3
- - Set session token when using STS (thanks Girish Ramakrishnan)
- - Glacier docs and error messages (thanks Jan Varho)
- - Read 1000 (not 1024) items in dir listings to fix Wasabi
- - Backblaze B2
- - Fix SHA1 mismatch when downloading files with no SHA1
- - Calculate missing hashes on the fly instead of spooling
- - --b2-hard-delete to permanently delete (not hide) files (thanks
- John Papandriopoulos)
- - Hubic
- - Fix creating containers - no longer have to use the default
- container
- - Swift
- - Optionally configure from a standard set of OpenStack
- environment vars
- - Add endpoint_type config
- - Google Cloud Storage
- - Fix bucket creation to work with limited permission users
- - SFTP
- - Implement connection pooling for multiple ssh connections
- - Limit new connections per second
- - Add support for MD5 and SHA1 hashes where available (thanks
- Christian Brüggemann)
- - HTTP
- - Fix URL encoding issues
- - Fix directories with : in their names
- - Fix panic with URL encoded content
- v1.37 - 2017-07-22
- - New backends
- - FTP - thanks to Antonio Messina
- - HTTP - thanks to Vasiliy Tolstov
- - New commands
- - rclone ncdu - for exploring a remote with a text based user
- interface.
- - rclone lsjson - for listing with a machine readable output
- - rclone dbhashsum - to show Dropbox style hashes of files (local
- or Dropbox)
- - New Features
- - Implement --fast-list flag
- - This allows remotes to list recursively if they can
- - This uses less transactions (important if you pay for them)
- - This may or may not be quicker
- - This will use more memory as it has to hold the listing in
- memory
- - --old-sync-method deprecated - the remaining uses are
- covered by --fast-list
- - This involved a major re-write of all the listing code
- - Add --tpslimit and --tpslimit-burst to limit transactions per
- second
- - this is useful in conjunction with rclone mount to limit
- external apps
- - Add --stats-log-level so can see --stats without -v
- - Print password prompts to stderr - Hraban Luyat
- - Warn about duplicate files when syncing
- - Oauth improvements
- - allow auth_url and token_url to be set in the config file
- - Print redirection URI if using own credentials.
- - Don't Mkdir at the start of sync to save transactions
- - Compile
- - Update build to go1.8.3
- - Require go1.6 for building rclone
- - Compile 386 builds with "GO386=387" for maximum compatibility
- - Bug Fixes
- - Fix menu selection when no remotes
- - Config saving reworked to not kill the file if disk gets full
- - Don't delete remote if name does not change while renaming
- - moveto, copyto: report transfers and checks as per move and copy
- - Local
- - Add --local-no-unicode-normalization flag - Bob Potter
- - Mount
- - Now supported on Windows using cgofuse and WinFsp - thanks to
- Bill Zissimopoulos for much help
- - Compare checksums on upload/download via FUSE
- - Unmount when program ends with SIGINT (Ctrl+C) or SIGTERM -
- Jérôme Vizcaino
- - On read only open of file, make open pending until first read
- - Make --read-only reject modify operations
- - Implement ModTime via FUSE for remotes that support it
- - Allow modTime to be changed even before all writers are closed
- - Fix panic on renames
- - Fix hang on errored upload
- - Crypt
- - Report the name:root as specified by the user
- - Add an "obfuscate" option for filename encryption - Stephen
- Harris
- - Amazon Drive
- - Fix initialization order for token renewer
- - Remove revoked credentials, allow oauth proxy config and update
- docs
- - B2
- - Reduce minimum chunk size to 5MB
- - Drive
- - Add team drive support
- - Reduce bandwidth by adding fields for partial responses - Martin
- Kristensen
- - Implement --drive-shared-with-me flag to view shared with me
- files - Danny Tsai
- - Add --drive-trashed-only to read only the files in the trash
- - Remove obsolete --drive-full-list
- - Add missing seek to start on retries of chunked uploads
- - Fix stats accounting for upload
- - Convert / in names to a unicode equivalent (／)
- - Poll for Google Drive changes when mounted
- - OneDrive
- - Fix the uploading of files with spaces
- - Fix initialization order for token renewer
- - Display speeds accurately when uploading - Yoni Jah
- - Swap to using http://localhost:53682/ as redirect URL - Michael
- Ledin
- - Retry on token expired error, reset upload body on retry - Yoni
- Jah
- - Google Cloud Storage
- - Add ability to specify location and storage class via config and
- command line - thanks gdm85
- - Create container if necessary on server side copy
- - Increase directory listing chunk to 1000 to increase performance
- - Obtain a refresh token for GCS - Steven Lu
- - Yandex
- - Fix the name reported in log messages (was empty)
- - Correct error return for listing empty directory
- - Dropbox
- - Rewritten to use the v2 API
- - Now supports ModTime
- - Can only be set by uploading the file again
- - If you uploaded with an old rclone, rclone may upload
- everything again
- - Use --size-only or --checksum to avoid this
- - Now supports the Dropbox content hashing scheme
- - Now supports low level retries
- - S3
- - Work around eventual consistency in bucket creation
- - Create container if necessary on server side copy
- - Add us-east-2 (Ohio) and eu-west-2 (London) S3 regions - Zahiar
- Ahmed
- - Swift, Hubic
- - Fix zero length directory markers showing in the subdirectory
- listing
- - this caused lots of duplicate transfers
- - Fix paged directory listings
- - this caused duplicate directory errors
- - Create container if necessary on server side copy
- - Increase directory listing chunk to 1000 to increase performance
- - Make sensible error if the user forgets the container
- - SFTP
- - Add support for using ssh key files
- - Fix under Windows
- - Fix ssh agent on Windows
- - Adapt to latest version of library - Igor Kharin
- v1.36 - 2017-03-18
- - New Features
- - SFTP remote (Jack Schmidt)
- - Re-implement sync routine to work a directory at a time reducing
- memory usage
- - Logging revamped to be more in line with rsync - now much quieter
- - -v only shows transfers
- - -vv is for full debug
- - --syslog to log to syslog on capable platforms
- - Implement --backup-dir and --suffix
- - Implement --track-renames (initial implementation by Bjørn Erik
- Pedersen)
- - Add time-based bandwidth limits (Lukas Loesche)
- - rclone cryptcheck: checks integrity of crypt remotes
- - Allow all config file variables and options to be set from
- environment variables
- - Add --buffer-size parameter to control buffer size for copy
- - Make --delete-after the default
- - Add --ignore-checksum flag (fixed by Hisham Zarka)
- - rclone check: Add --download flag to check all the data, not
- just hashes
- - rclone cat: add --head, --tail, --offset, --count and --discard
- - rclone config: when choosing from a list, allow the value to be
- entered too
- - rclone config: allow rename and copy of remotes
- - rclone obscure: for generating encrypted passwords for rclone's
- config (T.C. Ferguson)
- - Comply with XDG Base Directory specification (Dario Giovannetti)
- - this moves the default location of the config file in a
- backwards compatible way
- - Release changes
- - Ubuntu snap support (Dedsec1)
- - Compile with go 1.8
- - MIPS/Linux big and little endian support
- - Bug Fixes
- - Fix copyto copying things to the wrong place if the destination
- dir didn't exist
- - Fix parsing of remotes in moveto and copyto
- - Fix --delete-before deleting files on copy
- - Fix --files-from with an empty file copying everything
- - Fix sync: don't update mod times if --dry-run set
- - Fix MimeType propagation
- - Fix filters to add ** rules to directory rules
- - Local
- - Implement -L, --copy-links flag to allow rclone to follow
- symlinks
- - Open files in write only mode so rclone can write to an rclone
- mount
- - Fix unnormalised unicode causing problems reading directories
- - Fix interaction between -x flag and --max-depth
- - Mount
- - Implement proper directory handling (mkdir, rmdir, renaming)
- - Make include and exclude filters apply to mount
- - Implement read and write async buffers - control with
- --buffer-size
- - Fix fsync on for directories
- - Fix retry on network failure when reading off crypt
- - Crypt
- - Add --crypt-show-mapping to show encrypted file mapping
- - Fix crypt writer getting stuck in a loop
- - IMPORTANT this bug had the potential to cause data
- corruption when
- - reading data from a network based remote and
- - writing to a crypt on Google Drive
- - Use the cryptcheck command to validate your data if you are
- concerned
- - If syncing two crypt remotes, sync the unencrypted remote
- - Amazon Drive
- - Fix panics on Move (rename)
- - Fix panic on token expiry
- - B2
- - Fix inconsistent listings and rclone check
- - Fix uploading empty files with go1.8
- - Constrain memory usage when doing multipart uploads
- - Fix upload url not being refreshed properly
- - Drive
- - Fix Rmdir on directories with trashed files
- - Fix "Ignoring unknown object" when downloading
- - Add --drive-list-chunk
- - Add --drive-skip-gdocs (Károly Oláh)
- - OneDrive
- - Implement Move
- - Fix Copy
- - Fix overwrite detection in Copy
- - Fix waitForJob to parse errors correctly
- - Use token renewer to stop auth errors on long uploads
- - Fix uploading empty files with go1.8
- - Google Cloud Storage
- - Fix depth 1 directory listings
- - Yandex
- - Fix single level directory listing
- - Dropbox
- - Normalise the case for single level directory listings
- - Fix depth 1 listing
- - S3
- - Added ca-central-1 region (Jon Yergatian)
- v1.35 - 2017-01-02
- - New Features
- - moveto and copyto commands for choosing a destination name on
- copy/move
- - rmdirs command to recursively delete empty directories
- - Allow repeated --include/--exclude/--filter options
- - Only show transfer stats on commands which transfer stuff
- - show stats on any command using the --stats flag
- - Allow overlapping directories in move when server side dir move
- is supported
- - Add --stats-unit option - thanks Scott McGillivray
- - Bug Fixes
- - Fix the config file being overwritten when two rclones are
- running
- - Make rclone lsd obey the filters properly
- - Fix compilation on mips
- - Fix not transferring files that don't differ in size
- - Fix panic on nil retry/fatal error
- - Mount
- - Retry reads on error - should help with reliability a lot
- - Report the modification times for directories from the remote
- - Add bandwidth accounting and limiting (fixes --bwlimit)
- - If --stats provided will show stats and which files are
- transferring
- - Support R/W files if truncate is set.
- - Implement statfs interface so df works
- - Note that write is now supported on Amazon Drive
- - Report number of blocks in a file - thanks Stefan Breunig
- - Crypt
- - Prevent the user pointing crypt at itself
- - Fix failed to authenticate decrypted block errors
- - these will now return the underlying unexpected EOF instead
- - Amazon Drive
- - Add support for server side move and directory move - thanks
- Stefan Breunig
- - Fix nil pointer deref on size attribute
- - B2
- - Use new prefix and delimiter parameters in directory listings
- - This makes --max-depth 1 dir listings as used in mount much
- faster
- - Reauth the account while doing uploads too - should help with
- token expiry
- - Drive
- - Make DirMove more efficient and complain about moving the root
- - Create destination directory on Move()
- v1.34 - 2016-11-06
- - New Features
- - Stop single file and --files-from operations iterating through
- the source bucket.
- - Stop removing failed upload to cloud storage remotes
- - Make ContentType be preserved for cloud to cloud copies
- - Add support to toggle bandwidth limits via SIGUSR2 - thanks
- Marco Paganini
- - rclone check shows count of hashes that couldn't be checked
- - rclone listremotes command
- - Support linux/arm64 build - thanks Fredrik Fornwall
- - Remove Authorization: lines from --dump-headers output
- - Bug Fixes
- - Ignore files with control characters in the names
- - Fix rclone move command
- - Delete src files which already existed in dst
- - Fix deletion of src file when dst file older
- - Fix rclone check on crypted file systems
- - Make failed uploads not count as "Transferred"
- - Make sure high level retries show with -q
- - Use a vendor directory with godep for repeatable builds
- - rclone mount - FUSE
- - Implement FUSE mount options
- - --no-modtime, --debug-fuse, --read-only, --allow-non-empty,
- --allow-root, --allow-other
- - --default-permissions, --write-back-cache, --max-read-ahead,
- --umask, --uid, --gid
- - Add --dir-cache-time to control caching of directory entries
- - Implement seek for files opened for read (useful for video
- players)
- - with --no-seek flag to disable
- - Fix crash on 32 bit ARM (alignment of 64 bit counter)
- - ...and many more internal fixes and improvements!
- - Crypt
- - Don't show encrypted password in configurator to stop confusion
- - Amazon Drive
- - New wait for upload option --acd-upload-wait-per-gb
- - upload timeouts scale by file size and can be disabled
- - Add 502 Bad Gateway to list of errors we retry
- - Fix overwriting a file with a zero length file
- - Fix ACD file size warning limit - thanks Felix Bünemann
- - Local
- - Unix: implement -x/--one-file-system to stay on a single file
- system
- - thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana
- - Windows: ignore the symlink bit on files
- - Windows: Ignore directory based junction points
- - B2
- - Make sure each upload has at least one upload slot - fixes
- strange upload stats
- - Fix uploads when using crypt
- - Fix download of large files (sha1 mismatch)
- - Return error when we try to create a bucket which someone else
- owns
- - Update B2 docs with Data usage, and Crypt section - thanks
- Tomasz Mazur
- - S3
- - Command line and config file support for
- - Setting/overriding ACL - thanks Radek Šenfeld
- - Setting storage class - thanks Asko Tamm
- - Drive
- - Make exponential backoff work exactly as per Google
- specification
- - add .epub, .odp and .tsv as export formats.
- - Swift
- - Don't read metadata for directory marker objects
- v1.33 - 2016-08-24
- - New Features
- - Implement encryption
- - data encrypted in NACL secretbox format
- - with optional file name encryption
- - New commands
- - rclone mount - implements FUSE mounting of remotes
- (EXPERIMENTAL)
- - works on Linux, FreeBSD and OS X (need testers for the
- last 2!)
- - rclone cat - outputs remote file or files to the terminal
- - rclone genautocomplete - command to make a bash completion
- script for rclone
- - Editing a remote using rclone config now goes through the wizard
- - Compile with go 1.7 - this fixes rclone on macOS Sierra and on
- 386 processors
- - Use cobra for sub commands and docs generation
- - drive
- - Document how to make your own client_id
- - s3
- - User-configurable Amazon S3 ACL (thanks Radek Šenfeld)
- - b2
- - Fix stats accounting for upload - no more jumping to 100% done
- - On cleanup delete hide marker if it is the current file
- - New B2 API endpoint (thanks Per Cederberg)
- - Set maximum backoff to 5 Minutes
- - onedrive
- - Fix URL escaping in file names - eg uploading files with + in
- them.
- - amazon cloud drive
- - Fix token expiry during large uploads
- - Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
- - local
- - Fix filenames with invalid UTF-8 not being uploaded
- - Fix problem with some UTF-8 characters on OS X
- v1.32 - 2016-07-13
- - Backblaze B2
- - Fix upload of large files not in root
- v1.31 - 2016-07-13
- - New Features
- - Reduce memory on sync by about 50%
- - Implement --no-traverse flag to stop copy traversing the
- destination remote.
- - This can be used to reduce memory usage down to the smallest
- possible.
- - Useful to copy a small number of files into a large
- destination folder.
- - Implement cleanup command for emptying trash / removing old
- versions of files
- - Currently B2 only
- - Single file handling improved
- - Now copied with --files-from
- - Automatically sets --no-traverse when copying a single file
- - Info on using installing with ansible - thanks Stefan Weichinger
- - Implement --no-update-modtime flag to stop rclone fixing the
- remote modified times.
- - Bug Fixes
- - Fix move command - stop it running for overlapping Fses - this
- was causing data loss.
- - Local
- - Fix incomplete hashes - this was causing problems for B2.
- - Amazon Drive
- - Rename Amazon Cloud Drive to Amazon Drive - no changes to config
- file needed.
- - Swift
- - Add support for non-default project domain - thanks Antonio
- Messina.
- - S3
- - Add instructions on how to use rclone with minio.
- - Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.
- - Skip setting the modified time for objects > 5GB as it isn't
- possible.
- - Backblaze B2
- - Add --b2-versions flag so old versions can be listed and
- retrieved.
- - Treat 403 errors (eg cap exceeded) as fatal.
- - Implement cleanup command for deleting old file versions.
- - Make error handling compliant with B2 integrations notes.
- - Fix handling of token expiry.
- - Implement --b2-test-mode to set X-Bz-Test-Mode header.
- - Set cutoff for chunked upload to 200MB as per B2 guidelines.
- - Make upload multi-threaded.
- - Dropbox
- - Don't retry 461 errors.
- v1.30 - 2016-06-18
- - New Features
- - Directory listing code reworked for more features and better
- error reporting (thanks to Klaus Post for help). This enables
- - Directory include filtering for efficiency
- - --max-depth parameter
- - Better error reporting
- - More to come
- - Retry more errors
- - Add --ignore-size flag - for uploading images to onedrive
- - Log -v output to stdout by default
- - Display the transfer stats in more human readable form
- - Make 0 size files specifiable with --max-size 0b
- - Add b suffix so we can specify bytes in --bwlimit, --min-size
- etc
- - Use "password:" instead of "password>" prompt - thanks Klaus
- Post and Leigh Klotz
- - Bug Fixes
- - Fix retry doing one too many retries
- - Local
- - Fix problems with OS X and UTF-8 characters
- - Amazon Drive
- - Check a file exists before uploading to help with 408 Conflict
- errors
- - Reauth on 401 errors - this has been causing a lot of problems
- - Work around spurious 403 errors
- - Restart directory listings on error
- - Google Drive
- - Check a file exists before uploading to help with duplicates
- - Fix retry of multipart uploads
- - Backblaze B2
- - Implement large file uploading
- - S3
- - Add AES256 server-side encryption - thanks Justin R. Wilson
- - Google Cloud Storage
- - Make sure we don't use conflicting content types on upload
- - Add service account support - thanks Michal Witkowski
- - Swift
- - Add auth version parameter
- - Add domain option for openstack (v3 auth) - thanks Fabian Ruff
- v1.29 - 2016-04-18
- - New Features
- - Implement -I, --ignore-times for unconditional upload
- - Improve dedupe command
- - Now removes identical copies without asking
- - Now obeys --dry-run
- - Implement --dedupe-mode for non interactive running
- - --dedupe-mode interactive - interactive, the default.
- - --dedupe-mode skip - removes identical files then skips
- anything left.
- - --dedupe-mode first - removes identical files then keeps
- the first one.
- - --dedupe-mode newest - removes identical files then
- keeps the newest one.
- - --dedupe-mode oldest - removes identical files then
- keeps the oldest one.
- - --dedupe-mode rename - removes identical files then
- renames the rest to be different.
- - Bug fixes
- - Make rclone check obey the --size-only flag.
- - Use "application/octet-stream" if discovered mime type is
- invalid.
- - Fix missing "quit" option when there are no remotes.
- - Google Drive
- - Increase default chunk size to 8 MB - increases upload speed of
- big files
- - Speed up directory listings and make more reliable
- - Add missing retries for Move and DirMove - increases reliability
- - Preserve mime type on file update
- - Backblaze B2
- - Enable mod time syncing
- - This means that B2 will now check modification times
- - It will upload new files to update the modification times
- - (there isn't an API to just set the mod time.)
- - If you want the old behaviour use --size-only.
- - Update API to new version
- - Fix parsing of mod time when not in metadata
- - Swift/Hubic
- - Don't return an MD5SUM for static large objects
- - S3
- - Fix uploading files bigger than 50GB
- v1.28 - 2016-03-01
- - New Features
- - Configuration file encryption - thanks Klaus Post
- - Improve rclone config adding more help and making it easier to
- understand
- - Implement -u/--update so creation times can be used on all
- remotes
- - Implement --low-level-retries flag
- - Optionally disable gzip compression on downloads with
- --no-gzip-encoding
- - Bug fixes
- - Don't make directories if --dry-run set
- - Fix and document the move command
- - Fix redirecting stderr on unix-like OSes when using --log-file
- - Fix delete command to wait until all finished - fixes missing
- deletes.
- - Backblaze B2
- - Use one upload URL per goroutine - fixes "more than one upload
- using auth token" errors
- - Add pacing, retries and reauthentication - fixes token expiry
- problems
- - Upload without using a temporary file from local (and remotes
- which support SHA1)
- - Fix reading metadata for all files when it shouldn't have been
- - Drive
- - Fix listing drive documents at root
- - Disable copy and move for Google docs
- - Swift
- - Fix uploading of chunked files with non ASCII characters
- - Allow setting of storage_url in the config - thanks Xavier Lucas
- - S3
- - Allow IAM role and credentials from environment variables -
- thanks Brian Stengaard
- - Allow low privilege users to use S3 (check if directory exists
- during Mkdir) - thanks Jakub Gedeon
- - Amazon Drive
- - Retry on more things to make directory listings more reliable
- v1.27 - 2016-01-31
- - New Features
- - Easier headless configuration with rclone authorize
- - Add support for multiple hash types - we now check SHA1 as well
- as MD5 hashes.
- - delete command which does obey the filters (unlike purge)
- - dedupe command to deduplicate a remote. Useful with Google
- Drive.
- - Add --ignore-existing flag to skip all files that exist on
- destination.
- - Add --delete-before, --delete-during, --delete-after flags.
- - Add --memprofile flag to debug memory use.
- - Warn the user about files with same name but different case
- - Make --include rules add their implicit exclude * at the end of
- the filter list
- - Deprecate compiling with go1.3
- - Amazon Drive
- - Fix download of files > 10 GB
- - Fix directory traversal ("Next token is expired") for large
- directory listings
- - Remove 409 conflict from error codes we will retry - stops very
- long pauses
- - Backblaze B2
- - SHA1 hashes now checked by rclone core
- - Drive
- - Add --drive-auth-owner-only to only consider files owned by the
- user - thanks Björn Harrtell
- - Export Google documents
- - Dropbox
- - Make file exclusion error controllable with -q
- - Swift
- - Fix upload from unprivileged user.
- - S3
- - Fix updating of mod times of files with + in.
- - Local
- - Add local file system option to disable UNC on Windows.
- v1.26 - 2016-01-02
- - New Features
- - Yandex storage backend - thank you Dmitry Burdeev ("dibu")
- - Implement Backblaze B2 storage backend
- - Add --min-age and --max-age flags - thank you Adriano Aurélio
- Meirelles
- - Make ls/lsl/md5sum/size/check obey includes and excludes
- - Fixes
- - Fix crash in http logging
- - Upload releases to github too
- - Swift
- - Fix sync for chunked files
- - OneDrive
- - Re-enable server side copy
- - Don't mask HTTP error codes with JSON decode error
- - S3
- - Fix corrupting Content-Type on mod time update (thanks Joseph
- Spurrier)
- v1.25 - 2015-11-14
- - New features
- - Implement Hubic storage system
- - Fixes
- - Fix deletion of some excluded files without --delete-excluded
- - This could have deleted files unexpectedly on sync
- - Always check first with --dry-run!
- - Swift
- - Stop SetModTime losing metadata (eg X-Object-Manifest)
- - This could have caused data loss for files > 5GB in size
- - Use ContentType from Object to avoid lookups in listings
- - OneDrive
- - disable server side copy as it seems to be broken at Microsoft
- v1.24 - 2015-11-07
- - New features
- - Add support for Microsoft OneDrive
- - Add --no-check-certificate option to disable server certificate
- verification
- - Add async readahead buffer for faster transfer of big files
- - Fixes
- - Allow spaces in remotes and check remote names for validity at
- creation time
- - Allow '&' and disallow ':' in Windows filenames.
- - Swift
- - Ignore directory marker objects where appropriate - allows
- working with Hubic
- - Don't delete the container if fs wasn't at root
- - S3
- - Don't delete the bucket if fs wasn't at root
- - Google Cloud Storage
- - Don't delete the bucket if fs wasn't at root
- v1.23 - 2015-10-03
- - New features
- - Implement rclone size for measuring remotes
- - Fixes
- - Fix headless config for drive and gcs
- - Tell the user they should try again if the webserver method
- failed
- - Improve output of --dump-headers
- - S3
- - Allow anonymous access to public buckets
- - Swift
- - Stop chunked operations logging "Failed to read info: Object Not
- Found"
- - Use Content-Length on uploads for extra reliability
- v1.22 - 2015-09-28
- - Implement rsync like include and exclude flags
- - swift
- - Support files > 5GB - thanks Sergey Tolmachev
- v1.21 - 2015-09-22
- - New features
- - Display individual transfer progress
- - Make lsl output times in localtime
- - Fixes
- - Fix allowing user to override credentials again in Drive, GCS
- and ACD
- - Amazon Drive
- - Implement compliant pacing scheme
- - Google Drive
- - Make directory reads concurrent for increased speed.
- v1.20 - 2015-09-15
- - New features
- - Amazon Drive support
- - Oauth support redone - fix many bugs and improve usability
- - Use "golang.org/x/oauth2" as oauth libary of choice
- - Improve oauth usability for smoother initial signup
- - drive, googlecloudstorage: optionally use auto config for
- the oauth token
- - Implement --dump-headers and --dump-bodies debug flags
- - Show multiple matched commands if abbreviation too short
- - Implement server side move where possible
- - local
- - Always use UNC paths internally on Windows - fixes a lot of bugs
- - dropbox
- - force use of our custom transport which makes timeouts work
- - Thanks to Klaus Post for lots of help with this release
- v1.19 - 2015-08-28
- - New features
- - Server side copies for s3/swift/drive/dropbox/gcs
- - Move command - uses server side copies if it can
- - Implement --retries flag - tries 3 times by default
- - Build for plan9/amd64 and solaris/amd64 too
- - Fixes
- - Make a current version download with a fixed URL for scripting
- - Ignore rmdir in limited fs rather than throwing error
- - dropbox
- - Increase chunk size to improve upload speeds massively
- - Issue an error message when trying to upload bad file name
- v1.18 - 2015-08-17
- - drive
- - Add --drive-use-trash flag so rclone trashes instead of deletes
- - Add "Forbidden to download" message for files with no
- downloadURL
- - dropbox
- - Remove datastore
- - This was deprecated and it caused a lot of problems
- - Modification times and MD5SUMs no longer stored
- - Fix uploading files > 2GB
- - s3
- - use official AWS SDK from github.com/aws/aws-sdk-go
- - NB will most likely require you to delete and recreate remote
- - enable multipart upload which enables files > 5GB
- - tested with Ceph / RadosGW / S3 emulation
- - many thanks to Sam Liston and Brian Haymore at the Utah Center
- for High Performance Computing for a Ceph test account
- - misc
- - Show errors when reading the config file
- - Do not print stats in quiet mode - thanks Leonid Shalupov
- - Add FAQ
- - Fix created directories not obeying umask
- - Linux installation instructions - thanks Shimon Doodkin
- v1.17 - 2015-06-14
- - dropbox: fix case insensitivity issues - thanks Leonid Shalupov
- v1.16 - 2015-06-09
- - Fix uploading big files which was causing timeouts or panics
- - Don't check md5sum after download with --size-only
- v1.15 - 2015-06-06
- - Add --checksum flag to only discard transfers by MD5SUM - thanks
- Alex Couper
- - Implement --size-only flag to sync on size not checksum & modtime
- - Expand docs and remove duplicated information
- - Document rclone's limitations with directories
- - dropbox: update docs about case insensitivity
- v1.14 - 2015-05-21
- - local: fix encoding of non utf-8 file names - fixes a duplicate file
- problem
- - drive: docs about rate limiting
- - google cloud storage: Fix compile after API change in
- "google.golang.org/api/storage/v1"
- v1.13 - 2015-05-10
- - Revise documentation (especially sync)
- - Implement --timeout and --conntimeout
- - s3: ignore etags from multipart uploads which aren't md5sums
- v1.12 - 2015-03-15
- - drive: Use chunked upload for files above a certain size
- - drive: add --drive-chunk-size and --drive-upload-cutoff parameters
- - drive: switch to insert from update when a failed copy deletes the
- upload
- - core: Log duplicate files if they are detected
- v1.11 - 2015-03-04
- - swift: add region parameter
- - drive: fix crash on failed to update remote mtime
- - In remote paths, change native directory separators to /
- - Add synchronization to ls/lsl/lsd output to stop corruptions
- - Ensure all stats/log messages to go stderr
- - Add --log-file flag to log everything (including panics) to file
- - Make it possible to disable stats printing with --stats=0
- - Implement --bwlimit to limit data transfer bandwidth
- v1.10 - 2015-02-12
- - s3: list an unlimited number of items
- - Fix getting stuck in the configurator
- v1.09 - 2015-02-07
- - windows: Stop drive letters (eg C:) getting mixed up with remotes
- (eg drive:)
- - local: Fix directory separators on Windows
- - drive: fix rate limit exceeded errors
- v1.08 - 2015-02-04
- - drive: fix subdirectory listing to not list entire drive
- - drive: Fix SetModTime
- - dropbox: adapt code to recent library changes
- v1.07 - 2014-12-23
- - google cloud storage: fix memory leak
- v1.06 - 2014-12-12
- - Fix "Couldn't find home directory" on OSX
- - swift: Add tenant parameter
- - Use new location of Google API packages
- v1.05 - 2014-08-09
- - Improved tests and consequently lots of minor fixes
- - core: Fix race detected by go race detector
- - core: Fixes after running errcheck
- - drive: reset root directory on Rmdir and Purge
- - fs: Document that Purger returns error on empty directory, test and
- fix
- - google cloud storage: fix ListDir on subdirectory
- - google cloud storage: re-read metadata in SetModTime
- - s3: make reading metadata more reliable to work around eventual
- consistency problems
- - s3: strip trailing / from ListDir()
- - swift: return directories without / in ListDir
- v1.04 - 2014-07-21
- - google cloud storage: Fix crash on Update
- v1.03 - 2014-07-20
- - swift, s3, dropbox: fix updated files being marked as corrupted
- - Make compile with go 1.1 again
- v1.02 - 2014-07-19
- - Implement Dropbox remote
- - Implement Google Cloud Storage remote
- - Verify Md5sums and Sizes after copies
- - Remove times from "ls" command - lists sizes only
- - Add add "lsl" - lists times and sizes
- - Add "md5sum" command
- v1.01 - 2014-07-04
- - drive: fix transfer of big files using up lots of memory
- v1.00 - 2014-07-03
- - drive: fix whole second dates
- v0.99 - 2014-06-26
- - Fix --dry-run not working
- - Make compatible with go 1.1
- v0.98 - 2014-05-30
- - s3: Treat missing Content-Length as 0 for some ceph installations
- - rclonetest: add file with a space in
- v0.97 - 2014-05-05
- - Implement copying of single files
- - s3 & swift: support paths inside containers/buckets
- v0.96 - 2014-04-24
- - drive: Fix multiple files of same name being created
- - drive: Use o.Update and fs.Put to optimise transfers
- - Add version number, -V and --version
- v0.95 - 2014-03-28
- - rclone.org: website, docs and graphics
- - drive: fix path parsing
- v0.94 - 2014-03-27
- - Change remote format one last time
- - GNU style flags
- v0.93 - 2014-03-16
- - drive: store token in config file
- - cross compile other versions
- - set strict permissions on config file
- v0.92 - 2014-03-15
- - Config fixes and --config option
- v0.91 - 2014-03-15
- - Make config file
- v0.90 - 2013-06-27
- - Project named rclone
- v0.00 - 2012-11-18
- - Project started
- Bugs and Limitations
- Empty directories are left behind / not created
- With remotes that have a concept of directory, eg Local and Drive, empty
- directories may be left behind, or not created when one was expected.
- This is because rclone doesn't have a concept of a directory - it only
- works on objects. Most of the object storage systems can't actually
- store a directory so there is nowhere for rclone to store anything about
- directories.
- You can work round this to some extent with the purge command which
- will delete everything under the path, INCLUDING empty directories.
- This may be fixed at some point in Issue #100
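- As a minimal sketch of the purge workaround above (the remote path is
- illustrative, and purge deletes everything under it, so check with
- --dry-run first):
- rclone purge --dry-run remote:path/to/dir
- rclone purge remote:path/to/dir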
- Directory timestamps aren't preserved
- For the same reason as the above, rclone doesn't have a concept of a
- directory - it only works on objects, therefore it can't preserve the
- timestamps of directories.
- Frequently Asked Questions
- Do all cloud storage systems support all rclone commands?
- Yes, they do. All the rclone commands (eg sync, copy etc) will work on
- all the remote storage systems.
- Can I copy the config from one machine to another?
- Sure! Rclone stores all of its config in a single file. If you want to
- find this file, the simplest way is to run rclone -h and look at the
- help for the --config flag which will tell you where it is.
- See the remote setup docs for more info.
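- As a minimal sketch (the hostname is illustrative, and the config
- path may differ on your system - use whatever rclone config file
- prints):
- rclone config file
- scp ~/.config/rclone/rclone.conf otherhost:~/.config/rclone/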
- How do I configure rclone on a remote / headless box with no browser?
- This has now been documented in its own remote setup page.
- Can rclone sync directly from drive to s3?
- Rclone can sync between two remote cloud storage systems just fine.
- Note that it effectively downloads the file and uploads it again, so the
- node running rclone would need to have lots of bandwidth.
- The syncs would be incremental (on a file by file basis).
- Eg
- rclone sync drive:Folder s3:bucket
- Using rclone from multiple locations at the same time
- You can use rclone from multiple places at the same time if you choose
- a different subdirectory for the output, eg
- Server A> rclone sync /tmp/whatever remote:ServerA
- Server B> rclone sync /tmp/whatever remote:ServerB
- If you sync to the same directory then you should use rclone copy
- otherwise the two rclones may delete each other's files, eg
- Server A> rclone copy /tmp/whatever remote:Backup
- Server B> rclone copy /tmp/whatever remote:Backup
- The file names you upload from Server A and Server B should be different
- in this case, otherwise some file systems (eg Drive) may make
- duplicates.
- Why doesn't rclone support partial transfers / binary diffs like rsync?
- Rclone stores each file you transfer as a native object on the remote
- cloud storage system. This means that you can see the files you upload
- as expected using alternative access methods (eg using the Google Drive
- web interface). There is a 1:1 mapping between files on your hard disk
- and objects created in the cloud storage system.
- No cloud storage system I've come across yet supports partially
- uploading an object. You can't take an existing object and change
- some bytes in the middle of it.
- It would be possible to make a sync system which stored binary diffs
- instead of whole objects like rclone does, but that would break the 1:1
- mapping of files on your hard disk to objects in the remote cloud
- storage system.
- All the cloud storage systems support partial downloads of content, so
- it would be possible to make partial downloads work. However to make
- this work efficiently this would require storing a significant amount of
- metadata, which breaks the desired 1:1 mapping of files to objects.
- Can rclone do bi-directional sync?
- No, not at present. rclone only does uni-directional sync from A -> B.
- It may do in the future though since it has all the primitives - it just
- requires writing the algorithm to do it.
- Can I use rclone with an HTTP proxy?
- Yes. rclone will use the environment variables HTTP_PROXY, HTTPS_PROXY
- and NO_PROXY, similar to cURL and other programs.
- HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
- The environment values may be either a complete URL or a "host[:port]",
- in which case the "http" scheme is assumed.
- NO_PROXY allows you to disable the proxy for specific hosts. Hosts
- must be comma separated, and can contain domains or parts. For instance
- "foo.com" also matches "bar.foo.com".
- Rclone gives x509: failed to load system roots and no roots provided error
- This means that rclone can't find the SSL root certificates. Likely you
- are running rclone on a NAS with a cut-down Linux OS, or possibly on
- Solaris.
- Rclone (via the Go runtime) tries to load the root certificates from
- these places on Linux.
- "/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
- "/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL
- "/etc/ssl/ca-bundle.pem", // OpenSUSE
- "/etc/pki/tls/cacert.pem", // OpenELEC
- So doing something like this should fix the problem. It also sets the
- time which is important for SSL to work properly.
- mkdir -p /etc/ssl/certs/
- curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
- ntpclient -s -h pool.ntp.org
- The two environment variables SSL_CERT_FILE and SSL_CERT_DIR, mentioned
- in the x509 package, provide an additional way to supply the SSL root
- certificates.
- Note that you may need to add the --insecure option to the curl command
- line if it doesn't work without.
- curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
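- As a minimal sketch, pointing SSL_CERT_FILE at the bundle downloaded
- above for a single command:
- SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt rclone lsd remote: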
- Rclone gives Failed to load config file: function not implemented error
- Likely this means that you are running rclone on a Linux kernel
- version not supported by the Go runtime, ie earlier than 2.6.23.
- See the system requirements section in the go install docs for full
- details.
- All my uploaded docx/xlsx/pptx files appear as archive/zip
- This is caused by uploading these files from a Windows computer which
- hasn't got the Microsoft Office suite installed. The easiest way to
- fix this is to install the Word viewer and the Microsoft Office
- Compatibility Pack for Word, Excel, and PowerPoint 2007 and later
- versions' file formats.
- tcp lookup some.domain.com no such host
- This happens when rclone cannot resolve a domain. Please check that your
- DNS setup is generally working, e.g.
- # both should print a long list of possible IP addresses
- dig www.googleapis.com # resolve using your default DNS
- dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server
- If you are using systemd-resolved (default on Arch Linux), ensure it is
- at version 233 or higher. Previous releases contain a bug which causes
- not all domains to be resolved properly.
- Additionally, the GODEBUG=netdns= environment variable can be used to
- influence which resolver Go uses, which can also resolve certain DNS
- issues. See the name resolution section in the go docs.
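- For example, to force the pure Go resolver for a single command
- (netdns=cgo forces the cgo resolver instead):
- GODEBUG=netdns=go rclone lsd remote: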
- License
- This is free software under the terms of the MIT license (check the
- COPYING file included with the source code).
- Copyright (C) 2012 by Nick Craig-Wood https://www.craig-wood.com/nick/
- Permission is hereby granted, free of charge, to any person obtaining a copy
- of this software and associated documentation files (the "Software"), to deal
- in the Software without restriction, including without limitation the rights
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- copies of the Software, and to permit persons to whom the Software is
- furnished to do so, subject to the following conditions:
- The above copyright notice and this permission notice shall be included in
- all copies or substantial portions of the Software.
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
- THE SOFTWARE.
- Authors
- - Nick Craig-Wood nick@craig-wood.com
- Contributors
- - Alex Couper amcouper@gmail.com
- - Leonid Shalupov leonid@shalupov.com shalupov@diverse.org.ru
- - Shimon Doodkin helpmepro1@gmail.com
- - Colin Nicholson colin@colinn.com
- - Klaus Post klauspost@gmail.com
- - Sergey Tolmachev tolsi.ru@gmail.com
- - Adriano Aurélio Meirelles adriano@atinge.com
- - C. Bess cbess@users.noreply.github.com
- - Dmitry Burdeev dibu28@gmail.com
- - Joseph Spurrier github@josephspurrier.com
- - Björn Harrtell bjorn@wololo.org
- - Xavier Lucas xavier.lucas@corp.ovh.com
- - Werner Beroux werner@beroux.com
- - Brian Stengaard brian@stengaard.eu
- - Jakub Gedeon jgedeon@sofi.com
- - Jim Tittsler jwt@onjapan.net
- - Michal Witkowski michal@improbable.io
- - Fabian Ruff fabian.ruff@sap.com
- - Leigh Klotz klotz@quixey.com
- - Romain Lapray lapray.romain@gmail.com
- - Justin R. Wilson jrw972@gmail.com
- - Antonio Messina antonio.s.messina@gmail.com
- - Stefan G. Weichinger office@oops.co.at
- - Per Cederberg cederberg@gmail.com
- - Radek Šenfeld rush@logic.cz
- - Fredrik Fornwall fredrik@fornwall.net
- - Asko Tamm asko@deekit.net
- - xor-zz xor@gstocco.com
- - Tomasz Mazur tmazur90@gmail.com
- - Marco Paganini paganini@paganini.net
- - Felix Bünemann buenemann@louis.info
- - Durval Menezes jmrclone@durval.com
- - Luiz Carlos Rumbelsperger Viana maxd13_luiz_carlos@hotmail.com
- - Stefan Breunig stefan-github@yrden.de
- - Alishan Ladhani ali-l@users.noreply.github.com
- - 0xJAKE 0xJAKE@users.noreply.github.com
- - Thibault Molleman thibaultmol@users.noreply.github.com
- - Scott McGillivray scott.mcgillivray@gmail.com
- - Bjørn Erik Pedersen bjorn.erik.pedersen@gmail.com
- - Lukas Loesche lukas@mesosphere.io
- - emyarod allllaboutyou@gmail.com
- - T.C. Ferguson tcf909@gmail.com
- - Brandur brandur@mutelight.org
- - Dario Giovannetti dev@dariogiovannetti.net
- - Károly Oláh okaresz@aol.com
- - Jon Yergatian jon@macfanatic.ca
- - Jack Schmidt github@mowsey.org
- - Dedsec1 Dedsec1@users.noreply.github.com
- - Hisham Zarka hzarka@gmail.com
- - Jérôme Vizcaino jerome.vizcaino@gmail.com
- - Mike Tesch mjt6129@rit.edu
- - Marvin Watson marvwatson@users.noreply.github.com
- - Danny Tsai danny8376@gmail.com
- - Yoni Jah yonjah+git@gmail.com yonjah+github@gmail.com
- - Stephen Harris github@spuddy.org
- - Ihor Dvoretskyi ihor.dvoretskyi@gmail.com
- - Jon Craton jncraton@gmail.com
- - Hraban Luyat hraban@0brg.net
- - Michael Ledin mledin89@gmail.com
- - Martin Kristensen me@azgul.com
- - Too Much IO toomuchio@users.noreply.github.com
- - Anisse Astier anisse@astier.eu
- - Zahiar Ahmed zahiar@live.com
- - Igor Kharin igorkharin@gmail.com
- - Bill Zissimopoulos billziss@navimatics.com
- - Bob Potter bobby.potter@gmail.com
- - Steven Lu tacticalazn@gmail.com
- - Sjur Fredriksen sjurtf@ifi.uio.no
- - Ruwbin hubus12345@gmail.com
- - Fabian Möller fabianm88@gmail.com f.moeller@nynex.de
- - Edward Q. Bridges github@eqbridges.com
- - Vasiliy Tolstov v.tolstov@selfip.ru
- - Harshavardhana harsha@minio.io
- - sainaen sainaen@gmail.com
- - gdm85 gdm85@users.noreply.github.com
- - Yaroslav Halchenko debian@onerussian.com
- - John Papandriopoulos jpap@users.noreply.github.com
- - Zhiming Wang zmwangx@gmail.com
- - Andy Pilate cubox@cubox.me
- - Oliver Heyme olihey@googlemail.com olihey@users.noreply.github.com
- de8olihe@lego.com
- - wuyu wuyu@yunify.com
- - Andrei Dragomir adragomi@adobe.com
- - Christian Brüggemann mail@cbruegg.com
- - Alex McGrath Kraak amkdude@gmail.com
- - bpicode bjoern.pirnay@googlemail.com
- - Daniel Jagszent daniel@jagszent.de
- - Josiah White thegenius2009@gmail.com
- - Ishuah Kariuki kariuki@ishuah.com ishuah91@gmail.com
- - Jan Varho jan@varho.org
- - Girish Ramakrishnan girish@cloudron.io
- - LingMan LingMan@users.noreply.github.com
- - Jacob McNamee jacobmcnamee@gmail.com
- - jersou jertux@gmail.com
- - thierry thierry@substantiel.fr
- - Simon Leinen simon.leinen@gmail.com ubuntu@s3-test.novalocal
- - Dan Dascalescu ddascalescu+github@gmail.com
- - Jason Rose jason@jro.io
- - Andrew Starr-Bochicchio a.starr.b@gmail.com
- - John Leach john@johnleach.co.uk
- - Corban Raun craun@instructure.com
- - Pierre Carlson mpcarl@us.ibm.com
- - Ernest Borowski er.borowski@gmail.com
- - Remus Bunduc remus.bunduc@gmail.com
- - Iakov Davydov iakov.davydov@unil.ch dav05.gith@myths.ru
- - Jakub Tasiemski tasiemski@gmail.com
- - David Minor dminor@saymedia.com
- - Tim Cooijmans cooijmans.tim@gmail.com
- - Laurence liuxy6@gmail.com
- - Giovanni Pizzi gio.piz@gmail.com
- - Filip Bartodziej filipbartodziej@gmail.com
- - Jon Fautley jon@dead.li
- - lewapm 32110057+lewapm@users.noreply.github.com
- - Yassine Imounachen yassine256@gmail.com
- - Chris Redekop chris-redekop@users.noreply.github.com
- chris.redekop@gmail.com
- - Jon Fautley jon@adenoid.appstal.co.uk
- - Will Gunn WillGunn@users.noreply.github.com
- - Lucas Bremgartner lucas@bremis.ch
- - Jody Frankowski jody.frankowski@gmail.com
- - Andreas Roussos arouss1980@gmail.com
- - nbuchanan nbuchanan@utah.gov
- - Durval Menezes rclone@durval.com
- - Victor vb-github@viblo.se
- - Mateusz pabian.mateusz@gmail.com
- - Daniel Loader spicypixel@gmail.com
- - David0rk davidork@gmail.com
- - Alexander Neumann alexander@bumpern.de
- - Giri Badanahatti gbadanahatti@us.ibm.com@Giris-MacBook-Pro.local
- - Leo R. Lundgren leo@finalresort.org
- - wolfv wolfv6@users.noreply.github.com
- - Dave Pedu dave@davepedu.com
- - Stefan Lindblom lindblom@spotify.com
- - seuffert oliver@seuffert.biz
- - gbadanahatti 37121690+gbadanahatti@users.noreply.github.com
- - Keith Goldfarb barkofdelight@gmail.com
- - Steve Kriss steve@heptio.com
- - Chih-Hsuan Yen yan12125@gmail.com
- - Alexander Neumann fd0@users.noreply.github.com
- - Matt Holt mholt@users.noreply.github.com
- - Eri Bastos bastos.eri@gmail.com
- - Michael P. Dubner pywebmail@list.ru
- - Antoine GIRARD sapk@users.noreply.github.com
- - Mateusz Piotrowski mpp302@gmail.com
- - Animosity022 animosity22@users.noreply.github.com
- - Peter Baumgartner pete@lincolnloop.com
- - Craig Rachel craig@craigrachel.com
- - Michael G. Noll miguno@users.noreply.github.com
- - hensur me@hensur.de
- - Oliver Heyme de8olihe@lego.com
- - Richard Yang richard@yenforyang.com
- - Piotr Oleszczyk piotr.oleszczyk@gmail.com
- - Rodrigo rodarima@gmail.com
- - NoLooseEnds NoLooseEnds@users.noreply.github.com
- - Jakub Karlicek jakub@karlicek.me
- - John Clayton john@codemonkeylabs.com
- - Kasper Byrdal Nielsen byrdal76@gmail.com
- - Benjamin Joseph Dag bjdag1234@users.noreply.github.com
- - themylogin themylogin@gmail.com
- - Onno Zweers onno.zweers@surfsara.nl
- - Jasper Lievisse Adriaanse jasper@humppa.nl
- - sandeepkru sandeep.ummadi@gmail.com
- sandeepkru@users.noreply.github.com
- - HerrH atomtigerzoo@users.noreply.github.com
- - Andrew 4030760+sparkyman215@users.noreply.github.com
- - dan smith XX1011@gmail.com
- - Oleg Kovalov iamolegkovalov@gmail.com
- - Ruben Vandamme github-com-00ff86@vandamme.email
- - Cnly minecnly@gmail.com
- - Andres Alvarez 1671935+kir4h@users.noreply.github.com
- - reddi1 xreddi@gmail.com
- - Matt Tucker matthewtckr@gmail.com
- - Sebastian Bünger buengese@gmail.com
- - Martin Polden mpolden@mpolden.no
- - Alex Chen Cnly@users.noreply.github.com
- - Denis deniskovpen@gmail.com
- - bsteiss 35940619+bsteiss@users.noreply.github.com
- - Cédric Connes cedric.connes@gmail.com
- - Dr. Tobias Quathamer toddy15@users.noreply.github.com
- - dcpu 42736967+dcpu@users.noreply.github.com
- - Sheldon Rupp me@shel.io
- - albertony 12441419+albertony@users.noreply.github.com
- - cron410 cron410@gmail.com
- - Anagh Kumar Baranwal anaghk.dos@gmail.com
- - Felix Brucker felix@felixbrucker.com
- - Santiago Rodríguez scollazo@users.noreply.github.com
- - Craig Miskell craig.miskell@fluxfederation.com
- - Antoine GIRARD sapk@sapk.fr
- - Joanna Marek joanna.marek@u2i.com
- - frenos frenos@users.noreply.github.com
- - ssaqua ssaqua@users.noreply.github.com
- - xnaas me@xnaas.info
- - Frantisek Fuka fuka@fuxoft.cz
- - Paul Kohout pauljkohout@yahoo.com
- - dcpu 43330287+dcpu@users.noreply.github.com
- - jackyzy823 jackyzy823@gmail.com
- - David Haguenauer ml@kurokatta.org
- - teresy hi.teresy@gmail.com
- - buergi patbuergi@gmx.de
- CONTACT THE RCLONE PROJECT
- Forum
- Forum for general discussions and questions:
- - https://forum.rclone.org
- GitHub project
- The project website is at:
- - https://github.com/ncw/rclone
- There you can file bug reports, ask for help or contribute pull
- requests.
- Google+
- Rclone has a Google+ page to which announcements are posted
- - Google+ page for general comments
- Twitter
- You can also follow me on Twitter for rclone announcements
- - [@njcw](https://twitter.com/njcw)
- Email
- Or if all else fails or you want to ask something private or
- confidential email Nick Craig-Wood