# Ri Xu Online

> Change the life in a geek way

### Using Go Dynamic and Static Shared Libraries in C/C++ Programs

In this post, I build a dynamic and a static shared library from Go source code and use them for spreadsheet decryption. A shared object/library is a binary file with a dynamic/static loading table for functions.

Go source code:

```go
// decrypt.go
package main

import "C"

import (
	"fmt"

	"github.com/xuri/excelize/v2"
)

//export decryption
func decryption(file *C.char, passwd *C.char) {
	f, err := excelize.OpenFile(C.GoString(file), excelize.Options{Password: C.GoString(passwd)})
	if err != nil {
		return
	}
	defer func() {
		if err := f.Close(); err != nil {
			fmt.Println(err)
		}
	}()
	if err := f.SaveAs("decrypted workbook.xlsx", excelize.Options{}); err != nil {
		fmt.Println(err)
	}
}

func main() {
}
```

#### Build shared libraries

Build a dynamic shared library:

```bash
go build -o libdecrypt.so -buildmode=c-shared decrypt.go
```

It will generate two files: `libdecrypt.so` and `libdecrypt.h`.

Build a static shared library:

```bash
go build -o libdecrypt.a -buildmode=c-archive decrypt.go
```

It will generate two files: `libdecrypt.a` and `libdecrypt.h`.

#### Using the dynamic shared library in a C++ program

```cpp
// decrypt.cpp
#include <stdio.h>
#include <unistd.h>
#include "libdecrypt.h"

int main() {
    decryption((char*)"encryptAES.xlsx", (char*)"password");
}
```

Build an executable file:

```bash
gcc -v decrypt.cpp -o decrypt ./libdecrypt.so
```

By executing the following command, we get the decrypted spreadsheet `decrypted workbook.xlsx`:

```bash
./decrypt
```

#### Using the static shared library in a C program

```c
// decrypt.c
#include "libdecrypt.h"

int main() {
    decryption("encryptAES.xlsx", "password");
}
```

Build an executable file:

```bash
gcc -o decrypt decrypt.c libdecrypt.a -lpthread
```

By executing `./decrypt`, we likewise get the decrypted spreadsheet.

That covers how to build dynamic and static shared libraries (shared objects) in Go, and how to use them from C and C++.
The open-source project excelize-py demonstrates this approach in a more complex scenario, with complex custom data structures.

### 5th Anniversary of the Excelize Open Source

Since 2016, Excelize (github.com/xuri/excelize) has become a popular choice for processing spreadsheet documents, among cloud-native developers and especially among Go language developers. Known for its professional support and solid capabilities, Excelize is widely used by large Internet companies, SME customers, and startups. Now, for the 5th anniversary of the Excelize open source project, 16 versions have been released, and more than 100 contributors have participated in code contributions. I would like to thank all those who support the development of the project and its contributors, and I'm very happy to be able to help anyone in need. For me personally, creating and maintaining an open-source project requires not only the ability to write code, write documentation, and manage a community; more important is keeping up an enthusiastic, happy, energetic, and efficient state of output.

Excelize Star History

Excelize Contributors

In the future, I'll continue to work with developers in the community to optimize functions, capabilities, and performance, including to:

- Make more APIs concurrency-safe
- Keep optimizing performance for generating or reading worksheets with huge amounts of data
- Support more formula functions in the calculation engine
- Support complex conditional format settings, data prediction, etc.
- Support more chart style settings
- PivotTable, PivotChart, and Slicer
- Number format parsing
- Workbook encryption
- Support more APIs in the streaming reading and writing mode

At last, welcome to contribute to Excelize.

### Discover vulnerabilities across a codebase with CodeQL on localhost

CodeQL is the analysis engine used by developers to automate security checks, and by security researchers to perform variant analysis.
In CodeQL, code is treated like data. Security vulnerabilities, bugs, and other errors are modeled as queries that can be executed against databases extracted from code. You can run the standard CodeQL queries, written by GitHub researchers and community contributors, or write your own to use in custom analyses. Queries that find potential bugs highlight the result directly in the source file.

In this post, I'll make an example in the Go language. First, download the CodeQL command-line tools from the release page, extract them into `$GOPATH/src/github.com/github/codeql-cli-binaries`, and export this path in `~/.zprofile`:

```bash
# CodeQL
export PATH="$PATH:$GOPATH/src/github.com/github/codeql-cli-binaries"
```

Check out the Git repositories github.com/github/codeql and github.com/github/codeql-go into the `$GOPATH/src/github.com/github` directory. Run the following command in the codebase; it will generate a CSV format report:

```bash
CODEQL_EXTRACTOR_GO_BUILD_TRACING=on codeql database create go-database --language=go && \
codeql database analyze go-database $GOPATH/src/github.com/github/codeql-go/ql/src/codeql-suites/go-lgtm.qls --format=csv --output=result.csv
```

### Rust Report Card - Get a badge for your Rust open source project

A web application that generates a report on the quality of an open-source Rust project. This project is inspired by and based on the Go Report Card, and uses rust-clippy for its measures. To get a report on your own project, use the hosted version of this code running at https://rust-reportcard.xuri.me. Note that the Rust Clippy driver puts a high CPU load on the server while running, so the latency of this server may be high, or requests may time out when several checks run in parallel; please be patient and come back to check the report later. In fact, this server only has 4 GB of free disk space and a 1-core CPU, and currently I have to run a scheduled job to clean the code cache and toolchains every half hour.
| Score | Grade |
|-------|-------|
| > 90  | A+    |
| > 80  | A     |
| > 70  | B     |
| > 60  | C     |
| > 50  | D     |
| > 40  | E     |
| <= 40 | F     |

### Wiener Philharmoniker Vienna New Year's Concert 2020

The 2020 Vienna Philharmonic New Year's Concert took place on January 1, 2020, under the baton of Andris Nelsons in the Musikverein in Vienna. Andris Nelsons is Music Director of the Boston Symphony Orchestra and Gewandhauskapellmeister of the Gewandhausorchester Leipzig. Since 2010, Andris Nelsons has been a musical partner of the Vienna Philharmonic. The New Year's Concert 2020 represented Nelsons' first time conducting this prestigious international concert event. The 2020 New Year's Concert was broadcast in over 90 countries and followed by as many as 40 million television viewers around the world.

#### Andris Nelsons

Born in Riga in 1978 into a family of musicians, Andris Nelsons began his career as a trumpeter in the Latvian National Opera Orchestra before studying conducting. He was Music Director of the City of Birmingham Symphony Orchestra from 2008 to 2015, Principal Conductor of the Nordwestdeutsche Philharmonie in Herford, Germany from 2006 to 2009, and Music Director of the Latvian National Opera from 2003 to 2007.

#### Program

- Carl Michael Ziehrer: Overture to "Die Landstreicher (The Vagabonds)"
- Josef Strauss: Liebesgrüße (Greetings of Love), Waltz, op. 56
- Josef Strauss: Liechtenstein March, op. 36
- Johann Strauss Jr.: Blumenfest (Flower Festival), Polka, op. 111
- Johann Strauss Jr.: Wo die Citronen blüh'n (Where the Lemons Blossom), Waltz, op. 364
- Eduard Strauss: Knall und Fall (Without Delay), Fast Polka, op. 132

-- Pause --

- Franz von Suppé: Light Cavalry Overture
- Josef Strauss: Cupido, Polka française, op. 81
- Johann Strauss Jr.: Seid umschlungen, Millionen (Be Embraced, You Millions), Waltz, op. 443
- Eduard Strauss: Eisblume (Ice Flower), Polka mazurka, op. 55 (Arr. W. Dörner)
- Josef Hellmesberger, Jr.: Gavotte
- Hans Christian Lumbye: Postillon Galop, op. 16/2 (Arr. W. Dörner)
- Ludwig van Beethoven: Twelve Contredanses, WoO 14 (Excerpts)
- Johann Strauss Jr.: Freuet euch des Lebens (Enjoy Life), Waltz, op. 340
- Johann Strauss Jr.: Tritsch-Tratsch, Fast Polka, op. 214
- Josef Strauss: Dynamiden, Waltz, op. 173

magnet:?xt=urn:btih:44725535742D374032136A75126B81AC1FF0D2C3

### Talks at Beijing Gopher Meetup

I gave a talk, "International Spreadsheet Format Standard Implementation in Go Language", at the Beijing Gopher Meetup on November 30th, 2019.

Excelize is a library written in pure Go providing a set of functions that allow you to write to and read from XLSX files, based on the ECMA-376, ISO/IEC 29500 international spreadsheet document format standard. It supports reading and writing XLSX files generated by Microsoft Excel™ 2007 and later. Excelize can be used in various reporting systems, was recognized as the most valuable open source project on Open Source China in 2018, and is now the most popular spreadsheet document library for the Go language. This talk explores the international document format standards, introduces the story behind creating this library, and shares practices of implementing international spreadsheet document format standards in the Go language. The following is a summary of the content of the talk.

#### Prologue

Good afternoon, my name is Ri Xu. I'll talk about international spreadsheet format standard implementation in the Go language today. Everyone is familiar with spreadsheets. The classic spreadsheets represented by Excel applications have been used in various industries. According to estimates from relevant research institutions, the number of Office documents in the world reached 40 billion in 2007, and it grows by billions each year.
#### Why use Excelize

Spreadsheet documents are an important carrier of data and are used in many fields. As developers, in some cases we need to manipulate Excel documents through programs: opening and reading the content of an existing Excel document, creating new Excel documents, generating new Excel documents based on existing documents (templates), inserting images, charts, and table elements into Excel documents, and sometimes doing all of this across platforms. Excelize can easily meet these needs.

Today's topic consists of two parts. The first part introduces the contents of the international spreadsheet document format standard, and the second part shares some experiences from the process of implementing the standard in the Go language.

#### International Office Document Format Standard

If you want to manipulate a spreadsheet document programmatically, you first need to understand its standards. ECMA-376, ISO/IEC 29500, the international document format standard, is a document format based on XML and ZIP technology, corresponding to the familiar Office document files. The name of this standard involves three international standardization organizations: ECMA (European Computer Manufacturers Association), ISO (International Organization for Standardization), and IEC (International Electrotechnical Commission). These international organizations have jointly developed international standards for document formats.

The standard's features are divided into 6 aspects. Some of the more representative features are:

- Interoperability: developers can write applications on multiple platforms that consume and produce Office Open XML.
- Internationalization: Office Open XML supports internationalization features required by such diverse languages as Arabic, Chinese (three variants), French, Hebrew, Hindi, Japanese, Korean, Russian, and Turkish. Office Open XML inherently supports Unicode because it is XML. In addition, Office Open XML has a rich set of internationalization features that have been refined over the course of many years. This list is representative.
- High fidelity migration: Office Open XML is designed to support all of the features in the Microsoft Office 97-2003 binary formats. It is difficult to overstate the difficulty of accomplishing this goal, and the consequent uniqueness of Office Open XML in doing so. Some formats, such as PDF, are designed to deliver a visual facsimile of a finished document to an end-user. In contrast, Office Open XML is intended to permit future editing or manipulation at the same level of abstraction available to the original creator; for example, reducing a vector graphic to a bitmap would fall short of this intent, as would collapsing a style hierarchy to independent styles. Further, a document can contain computational semantics that the original creator expects to preserve, such as formula logic that depends on intermediate calculation results, including error codes, or animation rules that produce dynamic behavior.
- Low barrier to developer adoption: a developer can begin to write simple Office Open XML conforming applications within a few hours of beginning to read the specification.
- Integration with business data, scalable: Office Open XML enables organizations to integrate productivity applications with information systems that manage business processes. It does so through the use of custom schemas within Office Open XML documents. The goal is to reuse and to automate the processing of business information that is otherwise buried opaquely inside documents, where business applications cannot read or write it.

The latest version of the standard is the 5th edition and consists of five parts. The first part of the 5th edition is a 5039-page document. Standard specification files can be downloaded from the ISO or ECMA website.
Foremost, the interoperability of Office Open XML has been accomplished through extensive contributions, modifications, and review of the specification by members of the ECMA TC45 committee, with diverse backgrounds and corporate interests. Representation included some well-known vendors, corporations, and organizations: Apple, Intel, Microsoft, BP, Essilor, The British Library, and the United States Library of Congress, etc. Some familiar applications, such as Microsoft Office Excel, Apple Numbers, LibreOffice, Google Docs, and Apache OpenOffice, are typical representatives that follow this standard.

#### Standard specification

A primary objective of this section is to enable the reader to follow the high-level structure of any Office Open XML file. To accomplish this, I'll provide a moderate level of detail regarding the Open Packaging Conventions (OPC), and less detail regarding the individual markup languages. The markup language corresponding to a Word document is called WordprocessingML, the spreadsheet's is SpreadsheetML, and a PowerPoint presentation corresponds to PresentationML. Excelize mainly implements SpreadsheetML. In addition, Office documents support nesting across applications: for example, Word can embed Excel, and Excel can embed Word. DrawingML is a cross-application markup language that covers visual charts, extensible markup, source data, and directory references. An Office Open XML file is conventionally stored as a ZIP archive for purposes of packaging and compression, following the recommended implementation of the Open Packaging Conventions.

#### XSD (XML Schema Definition)

To make the following content easier to understand, I'll introduce a technical term here: XSD. XSD (XML Schema Definition) is a World Wide Web Consortium (W3C) recommendation that specifies how to formally describe the elements in an Extensible Markup Language (XML) document.
Here is a sample with an employee element in the following structure:

```xml
<employee>
  <firstname>John</firstname>
  <lastname>Smith</lastname>
</employee>
```

The above structure is defined in W3C XML Schema like this:

```xml
<xs:element name="employee">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="firstname" type="xs:string"/>
      <xs:element name="lastname" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>
```

The type (xs:string) is prefixed with the namespace prefix associated with XML Schema, which indicates a predefined schema data type.

#### XML Schema Dependency

In the document format standard, the data structures are defined by XSD. With the data structure definitions understood, let's sort out the data structures involved in a spreadsheet document. The XML tags defined in the XSD files form a tree structure, and XSDs can reference each other. Analyzing the main XSD files involved in the spreadsheet document and drawing their dependency graph, the root file is sml.xsd: it is the entry point for the data structure definition of the spreadsheet document, and it references some other common markup language XSD files, such as the definitions for simple data types. The XSD files whose names start with dml- are DrawingML data structure definition files for cross-application visualization; pictures, charts, graphics, and SmartArt are defined in those files. There are 7 XSD files related to DrawingML, containing over 8,000 XML elements and attributes, and thousands more elements and attributes are defined in sml.xsd itself. I used the xgen tool to parse the schemas and generate Go data struct definition code.

#### Format Interpretation

Let's take a look at the XLSX file structure.
For example, unzipping a `Workbook1.xlsx` file gives a directory tree like this:

```text
Workbook1
├── [Content_Types].xml
├── _rels
├── docProps
│   ├── app.xml
│   └── core.xml
└── xl
    ├── _rels
    │   └── workbook.xml.rels
    ├── charts
    │   ├── _rels
    │   │   ├── chart1.xml.rels
    │   │   └── chart2.xml.rels
    │   ├── chart1.xml
    │   ├── chart2.xml
    │   ├── colors1.xml
    │   ├── colors2.xml
    │   ├── style1.xml
    │   └── style2.xml
    ├── drawings
    │   ├── _rels
    │   │   └── drawing1.xml.rels
    │   └── drawing1.xml
    ├── sharedStrings.xml
    ├── styles.xml
    ├── theme
    │   └── theme1.xml
    ├── workbook.xml
    └── worksheets
        ├── _rels
        │   └── sheet1.xml.rels
        ├── sheet1.xml
        └── sheet2.xml
```

Every package must have a `[Content_Types].xml`, found at the root of the package. This file contains a list of all of the content types of the parts in the package. Every part and its type must be listed in `[Content_Types].xml`.

Every package contains a relationships part that defines the relationships between the other parts and to resources outside of the package. This separates the relationships from content and makes it easy to change relationships without changing the sources that reference targets. For an OOXML package, there is always a relationship part `.rels` within the `_rels` folder that identifies the starting parts of the package, or the package relationships. The worksheets directory contains all the data, formulas, and characteristics of a given worksheet.

#### Typical Workbook Scenario

A typical workbook will not be a blank, minimum workbook. A typical workbook might contain numbers, text, charts, tables, and pivot tables. Each of these additional parts is contained within the ZIP package of the spreadsheet document.

#### XML Structure and Dependency Processing

Let's take a look at how cell D2 is formatted. Cell D2 contains the text "Q1" and is defined in the cell table of sheet1. On this cell, the attribute value s="7" indicates that the 7th (zero-based) `<xf>` definition in `<cellXfs>` holds the formatting information for the cell.
Looking at that 7th `<xf>` of `<cellXfs>`: the number formatting information cannot be found in a `<numFmt>` definition because it is a built-in format; instead, it is implicitly understood to be the 0th built-in number format. Remembering that the indexes into the other element collections are also zero-based, the font information can be found in the 4th `<font>` definition, the fill information in the 2nd `<fill>` definition, and the border information in the 2nd `<border>` definition. The cell uses a cell style which is defined in the 1st `<cellStyleXf>` definition and, finally, borders specified in this master formatting record should be applied. Remember that these collections are zero-based. Additionally, the `<fill>` definition for D2 references a themed color, whose index is the 4th in the `<clrScheme>` definition of the theme part. Graphically, the index references form a chain from the cell through each of these collections.

In OPC, relationships describe references from parts to other internal resources in the package or to external resources. They represent the type of connection between a source part and a target resource, and they make the connection directly discoverable without looking at the part contents, so they are quick to resolve. The same ZIP item can be the target of multiple relationships. Office Open XML imposes constraints on relationships, described in subsequent clauses of that part of ECMA-376. Relationships in Office Open XML are either explicit or implicit, and the source, relationship item, and target relate to each other accordingly for each kind. The target does not have to be a file, however.

#### The Go Language Implementation

Next, I'll share some experiences about the implementation of this standard with the Go language.
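As a warm-up for the implementation discussion, the zero-based style index chain described above can be sketched in Go. The types, field names, and values below are illustrative only, not Excelize's internal data structures:

```go
package main

import "fmt"

// xf sketches one <xf> record: a bundle of zero-based indexes into
// the other style collections. Illustrative only, not Excelize's API.
type xf struct {
	NumFmtID, FontID, FillID, BorderID, XfID int
}

type styleSheet struct {
	CellXfs []xf     // indexed by the cell's s attribute
	Fonts   []string // stand-ins for <font> definitions
	Fills   []string // stand-ins for <fill> definitions
	Borders []string // stand-ins for <border> definitions
}

// resolve follows the index chain for a given cell style index s.
func (ss styleSheet) resolve(s int) (font, fill, border string) {
	x := ss.CellXfs[s]
	return ss.Fonts[x.FontID], ss.Fills[x.FillID], ss.Borders[x.BorderID]
}

func main() {
	ss := styleSheet{
		CellXfs: make([]xf, 8),
		Fonts:   []string{"f0", "f1", "f2", "f3", "bold-blue"},
		Fills:   []string{"none", "gray", "themed"},
		Borders: []string{"none", "thin", "double"},
	}
	// Like cell D2 above: the 7th (zero-based) xf points at the 4th
	// font, the 2nd fill, and the 2nd border.
	ss.CellXfs[7] = xf{FontID: 4, FillID: 2, BorderID: 2}
	font, fill, border := ss.resolve(7)
	fmt.Println(font, fill, border) // bold-blue themed double
}
```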
The technical capabilities of Excelize can be divided into 7 major parts: basic capabilities, style processing, data processing, pictures/charts, workbooks/worksheets, cells, and model processing:

- File Process Units: File Format Processor, Meta Processor, Embedded Media, OPC Processor, Markup Language Parsing
- Style Process Units: Border Processor, Freeze Panes, Fonts, Height and Width, Number Format, Color System
- Runtime Model: Model Components, Validator, Calculation Engine, Upgrade Capabilities, Code Generator
- Chart and Picture Process Units: 2D/3D Processing, Cluster/Stack/Area, Bar/Cone/Pie, Bubble/Scatter/Line, Combo Chart and Property Settings
- Workbook and Worksheet Process Units: Visibility, Row/Column Handler, Workbook and Worksheet Property Settings, Header/Footer, View Properties, Search Engine, Data Protection, Page Layout
- Cell Process Units: Data Types, Map, Merge Range, Rich Text, Hyperlink, Comments, Formula, Style Index, Cell Style
- Data Process Units: Data Validation, Time Processor, Crypto Class, Unit Converter, Table/Filter, Pivot Table, Conditional Format, VBA Script

Some third-party open-source spreadsheet libraries fail when manipulating slightly more complicated documents; in some cases, we got a corrupted file after saving. To resolve this, Excelize performs a lot of compatibility checks.

Excelize supports generating over 52 categories of charts for data visualization, including 2D and 3D charts such as histograms, area charts, and stacked charts. Some kinds of charts belong to the same category, so the design uses a skeleton that creates a basic structure and adjusts specific parameters to change the chart type. This method has been applied to creating clustered and stacked charts, among others.
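The skeleton idea just described can be sketched roughly as follows; the chart type and fields here are hypothetical simplifications of my own, not Excelize's actual chart API:

```go
package main

import "fmt"

// chart is a hypothetical, heavily simplified chart model: one base
// structure whose parameters distinguish the chart variants.
type chart struct {
	Type     string
	Grouping string // e.g. "clustered" or "stacked"
	Overlap  int
}

// newChart builds the shared skeleton, then lets a type-specific
// adjuster tweak only the parameters that differ between variants.
func newChart(typ string, adjust func(*chart)) chart {
	c := chart{Type: typ, Grouping: "clustered", Overlap: 0}
	if adjust != nil {
		adjust(&c)
	}
	return c
}

func main() {
	clustered := newChart("bar", nil)
	stacked := newChart("bar", func(c *chart) {
		c.Grouping = "stacked"
		c.Overlap = 100 // stacked bars fully overlap
	})
	fmt.Println(clustered.Grouping, stacked.Grouping) // clustered stacked
}
```

One skeleton plus small adjusters keeps dozens of chart variants maintainable, since shared structure lives in a single place.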
#### Document Structure Definition

The `File` structure in Excelize is used to define the document model, and it includes both exportable and non-exportable fields:

```go
// File define a populated XLSX file struct.
type File struct {
	checked          map[string]bool
	sheetMap         map[string]string
	CalcChain        *xlsxCalcChain
	Comments         map[string]*xlsxComments
	ContentTypes     *xlsxTypes
	Drawings         map[string]*xlsxWsDr
	Path             string
	SharedStrings    *xlsxSST
	Sheet            map[string]*xlsxWorksheet
	SheetCount       int
	Styles           *xlsxStyleSheet
	Theme            *xlsxTheme
	DecodeVMLDrawing map[string]*decodeVmlDrawing
	VMLDrawing       map[string]*vmlDrawing
	WorkBook         *xlsxWorkbook
	Relationships    map[string]*xlsxRelationships
	XLSX             map[string][]byte
	CharsetReader    charsetTranscoderFn
}
```

Exportable fields include worksheets, styles, reference relationships between data structures, etc., and developers can access them externally. After a spreadsheet document is opened with Excelize, Excelize maintains dependencies between internal data structures through the non-exportable fields. During operations on the spreadsheet, Excelize performs dynamic memory adjustment and error correction at runtime. Following the design principle of on-demand processing, when an external API is called, Excelize can accurately identify which data structures to operate on, load the required document data structures into the corresponding models as needed, and then process them through the internal processors. This design improves file-processing performance while ensuring compatibility.

#### Working with Excelize

Create an XLSX file:

```go
package main

import (
	"fmt"

	"github.com/xuri/excelize/v2"
)

func main() {
	f := excelize.NewFile()
	// Create a new sheet.
	index := f.NewSheet("Sheet2")
	// Set value of a cell.
	f.SetCellValue("Sheet2", "A2", "Hello world.")
	f.SetCellValue("Sheet1", "B2", 100)
	// Set active sheet of the workbook.
	f.SetActiveSheet(index)
	// Save xlsx file by the given path.
	if err := f.SaveAs("Book1.xlsx"); err != nil {
		fmt.Println(err)
	}
}
```

When I designed this library, I considered how developers coming from other programming languages would use it, hoping to reduce their learning cost and enable a quick start through simple, easy-to-use functions. The above is a minimal example that creates an XLSX file. Use the function NewFile to create a new Excel workbook; the newly created workbook contains a default worksheet named Sheet1. To generate spreadsheets that contain a relatively large amount of data, Excelize also provides coordinate conversion functions, which support converting indices to the corresponding cell coordinates during an iteration loop.

Add a picture to an XLSX file:

```go
package main

import (
	"fmt"
	_ "image/png"

	"github.com/xuri/excelize/v2"
)

func main() {
	f, err := excelize.OpenFile("Book1.xlsx")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Insert a picture.
	if err := f.AddPicture("Sheet1", "A2", "image.png", ""); err != nil {
		fmt.Println(err)
	}
	// Save the xlsx file with the origin path.
	if err = f.Save(); err != nil {
		fmt.Println(err)
	}
}
```

We need to process more data structures and ensure compatibility when adding a picture or chart to the spreadsheet document, which makes this more complex to implement. AddPicture provides a method to add a picture to a sheet by a given picture format set (such as offset, scale, aspect ratio, and print settings) and file path. In addition, GetPicture provides a function to get a picture's base name and the raw content embedded in the XLSX by a given worksheet and cell name. This function returns the file name in the XLSX and the file contents as `[]byte`.

#### Pivot Table

A pivot table is a table of statistics that summarizes the data of a more extensive table (such as from a database, spreadsheet, or business intelligence program). This summary might include sums, averages, or other statistics, which the pivot table groups together in a meaningful way.
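The coordinate conversion mentioned above boils down, for columns, to base-26 letter arithmetic (1 → "A", 27 → "AA"). Here is a standalone sketch of that arithmetic; `columnName` is my own illustrative helper, not Excelize's actual converter, which also handles rows and validation:

```go
package main

import "fmt"

// columnName converts a one-based column number to its spreadsheet
// letter name: 1 -> "A", 26 -> "Z", 27 -> "AA". Illustrative only.
func columnName(col int) string {
	name := ""
	for col > 0 {
		col-- // shift to zero-based for this digit
		name = string(rune('A'+col%26)) + name
		col /= 26
	}
	return name
}

func main() {
	for _, c := range []int{1, 26, 27, 702, 703} {
		fmt.Printf("%d -> %s\n", c, columnName(c))
	}
}
```

The `col--` inside the loop is the subtle part: spreadsheet columns are a bijective base-26 numbering with no zero digit, so each digit is computed from a zero-shifted value.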
The example above includes 5 columns of data: Year, Month, Product Category, Product Sales, and Product Sales Area. If we need a summary of monthly sales by region, a single pass of traditional conditional filtering is not enough, but we can use a pivot table to solve this problem:

```go
err := f.AddPivotTable(&excelize.PivotTableOption{
	DataRange:       "Sheet1!$A$1:$E$31",
	PivotTableRange: "Sheet1!$G$2:$M$34",
	Rows:            []excelize.PivotTableField{{Data: "Month"}, {Data: "Year"}},
	Columns:         []excelize.PivotTableField{{Data: "Type"}},
	Data:            []excelize.PivotTableField{{Data: "Sales", Name: "Summarize", Subtotal: "Sum"}},
})
```

AddPivotTable provides the method to add a pivot table by the given pivot table options; the `Subtotal` field specifies the aggregation function that applies to the data field.

#### Runtime Model

On runtime model adjustment, let's imagine a scenario: a user opens an Excel document with 10,000 rows of data, then deletes the 10th row. In this case, many range-related data objects will be affected, such as all cell data below the 10th row, pictures with hyperlinks, pivot tables, filters, and so on. Excelize will recalculate and adjust all rows below the 10th row, doing real-time adjustment at runtime. Excelize has a method named adjustHelper that takes four parameters: sheet, adjustDirection, num, and offset. A series of functions hang off adjustHelper, and each sub-function dynamically adjusts one kind of spreadsheet component.

#### Performance

The following graph shows a performance comparison of generating a 12800*50 plain-text matrix with the major open-source Excel libraries, run on a personal computer (2.6 GHz 6-Core Intel Core i7, 16 GB 2667 MHz DDR4, 500 GB SSD, macOS Big Sur 11.6), covering Go, Python, Java, PHP, and NodeJS.
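The row-adjustment behavior described in the runtime model section can be sketched as a toy function; `adjustRow` below is my own illustration of the idea, not Excelize's adjustHelper, which coordinates many such adjusters (cells, hyperlinks, merged ranges, tables, and so on):

```go
package main

import "fmt"

// adjustRow applies a row offset to a set of row references after a
// row is deleted: the deleted row's references disappear, and every
// reference below it shifts by the offset (negative for a delete).
// Toy sketch only, not Excelize's API.
func adjustRow(rows []int, deleted, offset int) []int {
	out := make([]int, 0, len(rows))
	for _, r := range rows {
		if r == deleted {
			continue // references to the deleted row disappear
		}
		if r > deleted {
			r += offset // rows below the deletion point shift
		}
		out = append(out, r)
	}
	return out
}

func main() {
	// Delete row 10: rows 11 and 12 move up by one, row 5 is untouched.
	fmt.Println(adjustRow([]int{5, 10, 11, 12}, 10, -1)) // [5 10 11]
}
```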
#### Conclusion

Summarizing today's topic: I introduced the international spreadsheet document format standard first, and then shared the practice of implementing the standard using the Go language, which involved the implementation of a series of core functions. Excelize is open source in the hope of making the Go language ecosystem better and helping more friends in need. The Excelize roadmap includes the formula calculation engine, VBA script support, more complex chart operation support, and continuous performance optimization. Every developer is welcome to participate in the open-source ecosystem by submitting issues, opening PRs, donating, and in other forms.

#### Q&A

Q: How about post-performance optimization and concurrency?

A: In large-scale data search scenarios, you can use the advantages of the Go language to search worksheet content through multiple goroutines to improve search speed. In addition, Excelize also supports streaming reading and writing of large spreadsheets, which can save time and memory.

#### References

- Chinese version news on GoCN
- Excelize - Go Library for Reading and Writing XLSX Files
- Go 语言国际电子表格文档格式标准实践 (International Spreadsheet Document Format Standard Practice in the Go Language) from Ri Xu

### EBook: Open Source in the Enterprise

Free and open source software is everywhere, frequently taking over entire fields of computing. Consider how Linux, Hadoop, Docker, Kubernetes, and TensorFlow have changed the way companies produce software. Or why corporate giants such as Amazon, Apple, Facebook, Google, IBM, Intel, and Microsoft continue to be major contributors, even instigators, to many open source projects. With this practical book, you'll explore not only ways to use open source software in the enterprise, but also how to contribute code and expertise to the communities that cultivate and maintain it. Authors Andy Oram and Zaheda Bhorat then guide you through the process of launching an open source software project of your own.
You'll discover why organizations of every size and in every field need to include free and open source software in their strategies:

- Understand why many companies are turning to open source
- Learn how open source thrives on the power of community and shared practices
- Unmask several myths, including "open source software is low quality or less secure"
- Examine the processes you need to adopt to successfully use open source code
- Participate in an open source project's community
- Learn several ways to contribute to existing open source projects through code, funding, governance, and community-building
- Explore the steps you can take to launch your own open source project
- Connect to a comprehensive set of resources developed by members of the open source community to further shape your open source journey

Original book: EBook: Open Source in the Enterprise

I translated this book into a simplified Chinese version: 《面向企业的开源软件实践》

### Setup Static IP for a Virtual Machine CentOS

#### Environment

- VMware Fusion Professional Version 11.0.2 (10952296)
- CentOS Linux Release 7.6.1810 64-bit

#### Virtual machine NAT network configuration

Validate the config file `/Library/Preferences/VMware Fusion/networking`:

```text
...
answer VNET_2_DISPLAY_NAME vmnet
answer VNET_2_HOSTONLY_NETMASK 255.255.255.0
answer VNET_2_HOSTONLY_SUBNET 192.168.155.0
answer VNET_2_NAT yes
answer VNET_2_NAT_PARAM_UDP_TIMEOUT 30
answer VNET_2_VIRTUAL_ADAPTER yes
...
```

Validate the config file `/Library/Preferences/VMware Fusion/vmnet2/nat.conf`:

```text
...
[host]
# NAT gateway address
ip = 192.168.155.2
netmask = 255.255.255.0

# VMnet device if not specified on command line
device = vmnet2
...
```

Then set up the CentOS static IP from the command line with `nmtui`.

### Deploy the Kubernetes Web UI Dashboard

This post guides you through deploying the Kubernetes Dashboard after Deploy Kubernetes Cluster.
Install Dashboard Download and install Docker images docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 docker image rmi mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 Download dashboard config file template wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml Edit dashboard config file add type: NodePort in Dashboard Service section spec: type: NodePort ports: - port: 443 targetPort: 8443 Install dashboard kubectl create -f kubernetes-dashboard.yaml Check dashboard running status kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-86c58d9df4-fvzh6 1/1 Running 0 33m kube-system coredns-86c58d9df4-jcjgn 1/1 Running 0 33m kube-system etcd-k8s-master 1/1 Running 0 32m kube-system kube-apiserver-k8s-master 1/1 Running 0 32m kube-system kube-controller-manager-k8s-master 1/1 Running 0 32m kube-system kube-flannel-ds-amd64-fl8k6 1/1 Running 0 21m kube-system kube-flannel-ds-amd64-jjqbx 1/1 Running 0 25m kube-system kube-flannel-ds-amd64-sh79b 1/1 Running 0 21m kube-system kube-proxy-mlnkt 1/1 Running 0 33m kube-system kube-proxy-pf7fp 1/1 Running 0 21m kube-system kube-proxy-t54fr 1/1 Running 0 21m kube-system kube-scheduler-k8s-master 1/1 Running 0 32m kube-system kubernetes-dashboard-57df4db6b-kmxw9 1/1 Running 0 22s Creating sample user, reference guide on GitHub Create a config file dashboard-adminuser.yaml for Service Account and ClusterRoleBinding --- apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-admin namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: kubernetes-dashboard-admin labels: k8s-app: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: 
cluster-admin subjects: - kind: ServiceAccount name: kubernetes-dashboard-admin namespace: kube-system Create the user and role binding kubectl apply -f dashboard-adminuser.yaml Bearer Token kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin-token | awk '{print $1}') Name: kubernetes-dashboard-admin-token-ngv5v Namespace: kube-system Labels: <none> Annotations: kubernetes.io/service-account.name: kubernetes-dashboard-admin kubernetes.io/service-account.uid: 0aec2ffb-1d77-11e9-a792-000c29e92d29 Type: kubernetes.io/service-account-token Data ==== token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1uZ3Y1diIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBhZWMyZmZiLTFkNzctMTFlOS1hNzkyLTAwMGMyOWU5MmQyOSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.AIk_OeA1zRQJpY2qFgFoXWhM8ketEFnzMI82Jj8AVama8ycIHnV8m7nssGg7hegjhSUhPzdVun95VNjiA0dcmCWwvH0IkAqYFOh3hahFIHCO2HjM8F7wNotRS1LVuf221EJOV5f5VXyLEFp78sm69dS607-1X695X9BxH4iOIPz0R2GANHggJwC57oflUR_r0sfomYPfw49JKVqhGVsLl37RTcU-lTlvN5d-tT-GLlJVpowHZR7PUx27iODpViHn0iB0OhXPLR-Wdd3SlfcrrtIhupOL-PPI6fxVQwUii94nr0bmFwj6lh_aJFD5t3dERJxiEltICg3jkEsMyZEIPQ ca.crt: 1025 bytes namespace: 11 bytes Check the dashboard listening port kubectl get svc -n kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 39m kubernetes-dashboard NodePort 10.98.98.181 <none> 443:31896/TCP 6m51s Visit the dashboard via https://172.16.3.136:31896 Choose Token, paste the authentication token output from the previous command into the Token field, and choose SIGN IN.
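The grep/awk pipeline used above to locate the admin token secret can be exercised offline; here is a sketch against captured output (the secret listing below is fabricated for illustration — on a real cluster the same pipeline runs against the live kubectl command):

```shell
# Simulated `kubectl -n kube-system get secret` output; on a real cluster,
# pipe the live command through the same grep | awk stages instead.
cat > /tmp/secrets.txt <<'EOF'
NAME                                     TYPE                                  DATA   AGE
default-token-abcde                      kubernetes.io/service-account-token   3      30m
kubernetes-dashboard-admin-token-ngv5v   kubernetes.io/service-account-token   3      6m
EOF
# Extract the first column (the secret name) of the matching row.
grep kubernetes-dashboard-admin-token /tmp/secrets.txt | awk '{print $1}'
```

This prints the secret name, which is then fed to `kubectl describe secret` to read the bearer token.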
### Deploy Kubernetes Cluster on CentOS

Kubernetes is an open-source container-orchestration system for automating deployment, scaling, and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. In this post, I will show you how to set up a Kubernetes cluster with one master and two slave nodes on virtual machines. Environment CentOS Linux Release 7.6.1810 64bit Docker version 18.06.1-ce Git commit: e68fc7a Kubernetes version: v1.13.2 Note that each node needs a minimum of 2 available CPUs Hostname IP Role k8s-master 172.16.3.136 k8s master node k8s-node-1 172.16.3.140 k8s slave node k8s-node-2 172.16.3.139 k8s slave node Prerequisites Set up the network on CentOS 7 with nmtui Back up the source mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup Download CentOS-Base.repo into /etc/yum.repos.d/ CentOS 5 wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-5.repo CentOS 6 wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo CentOS 7 wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo Update the source yum clean all && yum makecache && yum update Close the firewall on all nodes systemctl stop firewalld systemctl disable firewalld Disable SELINUX on all nodes setenforce 0 vi /etc/selinux/config SELINUX=disabled Disable swap on all nodes swapoff -a Comment out the swap entry in the /etc/fstab file and verify with free -m.
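As a concrete sketch of that fstab edit (demonstrated here on a sample file so it can run anywhere; on a real node you would run the same sed against /etc/fstab itself and then swapoff -a):

```shell
# Create a sample fstab and comment out its swap entry so swap is not
# mounted on boot; on a real node, target /etc/fstab directly.
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Prefix any uncommented line whose mount point is swap with "# ".
sed -i -E 's|^([^#].*[[:space:]]swap[[:space:]].*)$|# \1|' /tmp/fstab.sample
grep swap /tmp/fstab.sample   # the swap line is now commented out
```

After rebooting, `free -m` should report 0 total swap.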
Setup hostname on each node hostnamectl --static set-hostname k8s-master hostnamectl --static set-hostname k8s-node-1 hostnamectl --static set-hostname k8s-node-2 Add the following entries to the /etc/hosts file on each node 172.16.3.136 k8s-master 172.16.3.140 k8s-node-1 172.16.3.139 k8s-node-2 Create the /etc/sysctl.d/k8s.conf file net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 net.ipv4.ip_forward = 1 vm.swappiness = 0 Apply the changes modprobe br_netfilter sysctl -p /etc/sysctl.d/k8s.conf Load the kernel IPVS modules for kube-proxy on the slave nodes cat > /etc/sysconfig/modules/ipvs.modules <<EOF #!/bin/bash modprobe -- ip_vs modprobe -- ip_vs_rr modprobe -- ip_vs_wrr modprobe -- ip_vs_sh modprobe -- nf_conntrack_ipv4 EOF chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4 Install the recommended IPVS management tools yum -y install ipset ipvsadm Setup Docker yum install -y yum-utils device-mapper-persistent-data lvm2 yum-config-manager \ --add-repo \ https://download.docker.com/linux/centos/docker-ce.repo Check the latest Docker version info yum list docker-ce.x86_64 --showduplicates |sort -r docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable docker-ce.x86_64
17.03.2.ce-1.el7.centos docker-ce-stable docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable Note that Kubernetes 1.12 requires Docker version 1.11.1 or later. Install and launch Docker on each node yum install -y --setopt=obsoletes=0 \ docker-ce-18.06.1.ce-3.el7 systemctl start docker systemctl enable docker Check that the FORWARD chain policy in the iptables filter table is ACCEPT iptables -nvL Chain INPUT (policy ACCEPT 19 packets, 1444 bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0 0 0 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED 0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0 Setup Kubernetes Install kubelet, kubectl, and kubeadm on each node cat <<EOF > /etc/yum.repos.d/kubernetes.repo [kubernetes] name=Kubernetes baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=1 repo_gpgcheck=1 gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg EOF Note: check network access via curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64; if you can't reach it, you may need a proxy or VPN. Installation yum makecache fast yum install -y kubelet kubeadm kubectl ...
Installed: kubeadm.x86_64 0:1.13.2-0 kubectl.x86_64 0:1.13.2-0 kubelet.x86_64 0:1.13.2-0 Dependency Installed: cri-tools.x86_64 0:1.12.0-0 kubernetes-cni.x86_64 0:0.6.0-0 socat.x86_64 0:1.7.3.2-2.el7 Download docker images docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1 docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1 docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1 docker pull mirrorgooglecontainers/kube-proxy:v1.13.1 docker pull mirrorgooglecontainers/pause:3.1 docker pull mirrorgooglecontainers/etcd:3.2.24 docker pull coredns/coredns:1.2.6 docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1 docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1 docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1 docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1 docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1 docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24 docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6 docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64 docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1 docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1 docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1 docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1 docker rmi mirrorgooglecontainers/pause:3.1 docker rmi mirrorgooglecontainers/etcd:3.2.24 docker rmi coredns/coredns:1.2.6 docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 docker images REPOSITORY TAG IMAGE ID CREATED SIZE k8s.gcr.io/kube-proxy v1.13.1 fdb321fd30a0 5 weeks ago 80.2MB k8s.gcr.io/kube-scheduler v1.13.1 ab81d7360408 5 weeks ago 79.6MB k8s.gcr.io/kube-apiserver v1.13.1 40a63db91ef8 5 weeks ago 
181MB k8s.gcr.io/kube-controller-manager v1.13.1 26e6f1db2a52 5 weeks ago 146MB k8s.gcr.io/coredns 1.2.6 f59dcacceff4 2 months ago 40MB k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 4 months ago 220MB quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 12 months ago 44.6MB k8s.gcr.io/pause 3.1 da86e6ba6ca1 13 months ago 742kB Initialize the Kubernetes cluster on the master node; also see the official kubeadm init documentation. # auto start on boot systemctl enable kubelet && systemctl start kubelet kubeadm init \ --kubernetes-version=v1.13.1 \ --pod-network-cidr=10.244.0.0/16 \ --apiserver-advertise-address=172.16.3.136 --kubernetes-version Choose a specific Kubernetes version for the control plane. --apiserver-advertise-address The IP address the API Server will advertise it's listening on. Specify '0.0.0.0' to use the address of the default network interface. --pod-network-cidr Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node. Detailed cluster initialization process [init] Using Kubernetes version: v1.13.1 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Activating the kubelet service [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.3.136] [certs] Generating
"apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.16.3.136 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.16.3.136 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s [apiclient] All control plane components are healthy after 22.507237 seconds [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation [mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] [bootstrap-token] Using token: p9e6q0.e55sga7ow6k05gpt [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. 
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join 172.16.3.136:6443 --token p9e6q0.e55sga7ow6k05gpt --discovery-token-ca-cert-hash sha256:6d63ecaf8af2f27179d10535e9547cb089b70d0de6a3c9dac59181f716d49b87 Configure kubectl on the master node echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile source /etc/profile echo $KUBECONFIG Install Pod networks Download the pod network config file kube-flannel.yml and create the pod network with the following commands (reference: the Flannel documentation) mkdir -p ~/k8s/ cd ~/k8s wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml kubectl apply -f kube-flannel.yml If it executes successfully, check the CoreDNS Pod running status kubectl get pods --all-namespaces -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kube-system coredns-86c58d9df4-fvzh6 1/1 Running 0 7m29s 10.244.0.2 k8s-master <none> <none> kube-system coredns-86c58d9df4-jcjgn 1/1 Running 0 7m29s 10.244.0.3 k8s-master <none> <none> kube-system etcd-k8s-master 1/1 Running 0 6m39s 172.16.3.136 k8s-master <none> <none> kube-system kube-apiserver-k8s-master 1/1 Running 0 6m43s 172.16.3.136 k8s-master <none> <none> kube-system kube-controller-manager-k8s-master 1/1 Running 0 6m29s 172.16.3.136 k8s-master <none> <none> kube-system kube-flannel-ds-amd64-jjqbx 1/1 Running 0 16s 172.16.3.136 k8s-master <none> <none> kube-system kube-proxy-mlnkt 1/1 Running 0 7m28s 172.16.3.136 k8s-master <none> <none> kube-system kube-scheduler-k8s-master 1/1 Running 0 6m30s 172.16.3.136 k8s-master <none> <none> If you get stuck with kube-flannel-ds at Init:0/1, run kubeadm upgrade apply v1.13.2 to solve it.
Check that the Kubernetes master is in Ready status kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-master Ready master 21m v1.13.2 Slave node configuration To add slave nodes, run the following commands on each slave node systemctl enable kubelet.service kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash> kubeadm join --token p9e6q0.e55sga7ow6k05gpt 172.16.3.136:6443 --discovery-token-ca-cert-hash sha256:6d63ecaf8af2f27179d10535e9547cb089b70d0de6a3c9dac59181f716d49b87 [preflight] Running pre-flight checks [discovery] Trying to connect to API Server "172.16.3.136:6443" [discovery] Created cluster-info discovery client, requesting info from "https://172.16.3.136:6443" [discovery] Requesting info from "https://172.16.3.136:6443" again to validate TLS against the pinned public key [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.16.3.136:6443" [discovery] Successfully established connection with API Server "172.16.3.136:6443" [join] Reading configuration from the cluster... [join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Activating the kubelet service [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap... [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node-2" as an annotation This node has joined the cluster: * Certificate signing request was sent to apiserver and a response was received. * The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster. If you forget the token, run the following command on the master node kubeadm token list Status validation Run the following commands on the master node kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-master Ready master 13m v1.13.2 k8s-node-1 Ready <none> 107s v1.13.2 k8s-node-2 Ready <none> 91s v1.13.2 Check Pod status kubectl get pods --all-namespaces -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kube-system coredns-86c58d9df4-fvzh6 1/1 Running 0 14m 10.244.0.2 k8s-master <none> <none> kube-system coredns-86c58d9df4-jcjgn 1/1 Running 0 14m 10.244.0.3 k8s-master <none> <none> kube-system etcd-k8s-master 1/1 Running 0 13m 172.16.3.136 k8s-master <none> <none> kube-system kube-apiserver-k8s-master 1/1 Running 0 13m 172.16.3.136 k8s-master <none> <none> kube-system kube-controller-manager-k8s-master 1/1 Running 0 13m 172.16.3.136 k8s-master <none> <none> kube-system kube-flannel-ds-amd64-fl8k6 1/1 Running 0 3m14s 172.16.3.139 k8s-node-2 <none> <none> kube-system kube-flannel-ds-amd64-jjqbx 1/1 Running 0 7m24s 172.16.3.136 k8s-master <none> <none> kube-system kube-flannel-ds-amd64-sh79b 1/1 Running 0 3m30s 172.16.3.140 k8s-node-1 <none> <none> kube-system kube-proxy-mlnkt 1/1 Running 0 14m 172.16.3.136 k8s-master <none> <none> kube-system kube-proxy-pf7fp 1/1 Running 0 3m14s 172.16.3.139 k8s-node-2 <none> <none> kube-system kube-proxy-t54fr 1/1 Running 0 3m30s 172.16.3.140 k8s-node-1 <none> <none> kube-system kube-scheduler-k8s-master 1/1 Running 0 13m 172.16.3.136 k8s-master <none> <none> Remove a node from the cluster kubectl drain <node name> --delete-local-data --force --ignore-daemonsets kubectl delete node <node name> After removing the node from the cluster, reset the cluster on the master with the following command kubeadm reset We have now installed a Kubernetes cluster with one master and two slave nodes on virtual machines. For more information about Kubernetes, please see the official documentation.
Upgrade Kubernetes kubeadm upgrade plan [preflight] Running pre-flight checks. [upgrade] Making sure the cluster is healthy: [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration from the cluster... [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [upgrade] Fetching available versions to upgrade to [upgrade/versions] Cluster version: v1.13.1 [upgrade/versions] kubeadm version: v1.13.2 [upgrade/versions] Latest stable version: v1.13.4 [upgrade/versions] Latest version in the v1.13 series: v1.13.4 Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': COMPONENT CURRENT AVAILABLE Kubelet 3 x v1.13.2 v1.13.4 Upgrade to the latest version in the v1.13 series: COMPONENT CURRENT AVAILABLE API Server v1.13.1 v1.13.4 Controller Manager v1.13.1 v1.13.4 Scheduler v1.13.1 v1.13.4 Kube Proxy v1.13.1 v1.13.4 CoreDNS 1.2.6 1.2.6 Etcd 3.2.24 3.2.24 You can now apply the upgrade by executing the following command: kubeadm upgrade apply v1.13.4 Note: Before you can perform this upgrade, you have to update kubeadm to v1.13.4. _____________________________________________________________________ Pull images on each node wget https://raw.githubusercontent.com/openthings/kubernetes-tools/master/kubeadm/2-images/kubernetes-pull-aliyun-1.13.4.sh sh kubernetes-pull-aliyun-1.13.4.sh kubeadm upgrade apply v1.13.4 --force [preflight] Running pre-flight checks. [upgrade] Making sure the cluster is healthy: [upgrade/config] Making sure the configuration is correct: [upgrade/config] Reading configuration from the cluster... [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' [upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.13.4" [upgrade/versions] Cluster version: v1.13.1 [upgrade/versions] kubeadm version: v1.13.2 [upgrade/version] Found 1 potential version compatibility errors but skipping since the --force flag is set: - Specified version to upgrade to "v1.13.4" is higher than the kubeadm version "v1.13.2". Upgrade kubeadm first using the tool you used to install kubeadm [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd] [upgrade/prepull] Prepulling image for component etcd. [upgrade/prepull] Prepulling image for component kube-apiserver. [upgrade/prepull] Prepulling image for component kube-controller-manager. [upgrade/prepull] Prepulling image for component kube-scheduler. [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd [upgrade/prepull] Prepulled image for component kube-scheduler. [upgrade/prepull] Prepulled image for component kube-apiserver. [upgrade/prepull] Prepulled image for component kube-controller-manager. [upgrade/prepull] Prepulled image for component etcd. [upgrade/prepull] Successfully prepulled the images for all the control plane components [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.13.4"... 
Static pod: kube-apiserver-k8s-master hash: 85f4ac2525b7cadd9404226e752c8d3b Static pod: kube-controller-manager-k8s-master hash: d4ff37ee76fe761a28f11175fd1c384e Static pod: kube-scheduler-k8s-master hash: 44b569a35761491825f4e7253fbf0543 [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests313173818" [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-19-22-42-59/kube-apiserver.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) Static pod: kube-apiserver-k8s-master hash: 85f4ac2525b7cadd9404226e752c8d3b Static pod: kube-apiserver-k8s-master hash: 85f4ac2525b7cadd9404226e752c8d3b Static pod: kube-apiserver-k8s-master hash: 85f4ac2525b7cadd9404226e752c8d3b Static pod: kube-apiserver-k8s-master hash: 85f4ac2525b7cadd9404226e752c8d3b Static pod: kube-apiserver-k8s-master hash: 85f4ac2525b7cadd9404226e752c8d3b Static pod: kube-apiserver-k8s-master hash: 430c0fb23b278ceaadfb9440eb5667d0 [apiclient] Found 1 Pods for label selector component=kube-apiserver [upgrade/staticpods] Component "kube-apiserver" upgraded successfully! 
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-19-22-42-59/kube-controller-manager.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) Static pod: kube-controller-manager-k8s-master hash: d4ff37ee76fe761a28f11175fd1c384e Static pod: kube-controller-manager-k8s-master hash: b6ca67226d47ac720e105375a9846904 [apiclient] Found 1 Pods for label selector component=kube-controller-manager [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully! [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-19-22-42-59/kube-scheduler.yaml" [upgrade/staticpods] Waiting for the kubelet to restart the component [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s) Static pod: kube-scheduler-k8s-master hash: 44b569a35761491825f4e7253fbf0543 Static pod: kube-scheduler-k8s-master hash: 4b52d75cab61380f07c0c5a69fb371d4 [apiclient] Found 1 Pods for label selector component=kube-scheduler [upgrade/staticpods] Component "kube-scheduler" upgraded successfully! 
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.13.4". Enjoy! [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so. Related Article Deploy the Kubernetes Web UI Dashboard

### Relentless Pursuit of Engineering Excellence

A company's research and development engineering capability directly affects its capacity for long-lasting innovation and its role in the market. Only by relentlessly pursuing engineering excellence can we build long-term core competencies and create value for every user, every business customer, and society as a whole. Excellent engineering practice comes from combining a company's engineering standards with its development toolchain, which can effectively improve software development efficiency and product quality.
Cloud-based project engineering practice mapping

### Monitoring with Prometheus and Grafana for Go Microservices

Prometheus Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Since its inception in 2012, many companies and organizations have adopted Prometheus, and the project has a very active developer and user community. It is now a standalone open source project and maintained independently of any company. To emphasize this, and to clarify the project's governance structure, Prometheus joined the Cloud Native Computing Foundation in 2016 as the second hosted project, after Kubernetes. Grafana Grafana is an open-source, general-purpose dashboard and graph composer, which runs as a web application. It supports Graphite, InfluxDB, or OpenTSDB as backends. Downloading and running Prometheus without Docker $ tar xvfz prometheus-*.tar.gz $ cd prometheus-* Configuring Prometheus to monitor itself prometheus.yml # my global config global: scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute. evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute. # scrape_timeout is set to the global default (10s). # Alertmanager configuration alerting: alertmanagers: - static_configs: - targets: # - alertmanager:9093 # Load rules once and periodically evaluate them according to the global 'evaluation_interval'. rule_files: # - "first_rules.yml" # - "second_rules.yml" # A scrape configuration containing exactly one endpoint to scrape: # Here it's Prometheus itself. scrape_configs: # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config. - job_name: 'prometheus' # metrics_path defaults to '/metrics' # scheme defaults to 'http'.
static_configs: - targets: ['localhost:9090'] Starting Prometheus $ ./prometheus --config.file=prometheus.yml --web.listen-address=:8000 Metrics endpoint: http://localhost:8000/metrics Prometheus console: http://localhost:8000/graph Job metrics: http://localhost:8000/api/v1/label/job/values Install Grafana Download and install Grafana. package main import ( "log" "net/http" "os" "time" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus/promhttp" "github.com/shirou/gopsutil/mem" ) func main() { // init log service logger := log.New(os.Stdout, "[Memory]", log.Lshortfile|log.Ldate|log.Ltime) // init http handler http.Handle("/metrics", promhttp.Handler()) // init container memoryPercent := prometheus.NewGaugeVec(prometheus.GaugeOpts{ Name: "memory_percent", Help: "memory use percent", }, []string{"percent"}, ) prometheus.MustRegister(memoryPercent) go func() { logger.Println("ListenAndServe at: localhost:8080") err := http.ListenAndServe("localhost:8080", nil) if err != nil { logger.Fatal("ListenAndServe: ", err) } }() // collect memory use percent for { logger.Println("start collect memory used percent!") v, err := mem.VirtualMemory() if err != nil { logger.Printf("get memory use percent error: %s", err) } usedPercent := v.UsedPercent logger.Println("get memory use percent:", usedPercent) memoryPercent.WithLabelValues("usedMemory").Set(usedPercent) time.Sleep(time.Second * 2) } } Prometheus client metrics endpoint: http://localhost:9090/metrics Check the instance already registered on Prometheus: $ curl http://localhost:8000/api/v1/targets { "status": "success", "data": { "activeTargets": [ { "discoveredLabels": { "__address__": "localhost:9090", "__metrics_path__": "/metrics", "__scheme__": "http", "job": "prometheus" }, "labels": { "instance": "localhost:9090", "job": "prometheus" }, "scrapeUrl": "http://localhost:9090/metrics", "lastError": "", "lastScrape": "2019-01-07T18:32:58.541151856+08:00", "health": "up" }],
"droppedTargets": [] } } Grafana config file located at /usr/share/grafana/conf/defaults.ini # The http port to use, default 3000 http_port = 8080 # The public facing domain name used to access grafana from a browser domain = 0.0.0.0 Start Grafana server $ sudo grafana-server --homepath "/usr/share/grafana" Using the expression browser http://localhost:8080, default user name and password is admin. Add data source and set "Data Sources" Add dashboard in Dashboards manage Add Graph dashboard Edit dashboard Add metrics in the dashboard ### Wiener Philharmoniker Vienna New Year's Concert 2019 The 2019 Vienna Philharmonic New Year's Concert takes place on January 1, 2019, under the baton of Christian Thielemann in the Musikverein in Vienna. Christian Thielemann, Principal Conductor of the Dresden Staatskapelle and Artistic Director of the Salzburg Easter Festival, has since 2000 been a musical partner of the Vienna Philharmonic. The New Year's Concert 2019 represents Thielemann's first time to conduct this prestigious international concert event. The 2019 New Year's Concert will be broadcast in over 90 countries and followed by as many as 40 million television viewers around the world. Christian Thielemann The 2018/2019 season is Christian Thielemann’s seventh as Principal Conductor of the Staatskapelle Dresden. Following engagements at the Deutsche Oper Berlin, in Gelsenkirchen, Karlsruhe, Hanover and Dusseldorf, in 1988 he moved to Nuremberg to occupy the post of Generalmusikdirektor. In 1997 he returned to his hometown of Berlin to direct the Deutsche Oper until 2004, when he became Music Director of the Munich Philharmonic, a post he occupied until 2011. In addition to his current position in Dresden, Thielemann has been Artistic Director of the Salzburg Easter Festival since 2013, where the Staatskapelle is resident orchestra. Program Carl Michael Ziehrer Schönfeld March, op. 422 Josef Strauss Transactionen (Transactions). Walzer, op.184 Josef Hellmesberger, Jr. 
Elfenreigen (Dance of the Elves), o. op. Johann Strauss Jr. Express. Polka schnell, op. 311 Nordseebilder (North Sea Pictures). Walzer, op. 390 Eduard Strauss Mit Extrapost (Special Delivery). Polka schnell, op. 259 -- Pause -- Johann Strauss Jr. Ouvertüre zur Operette "Der Zigeunerbaron (The Gypsy Baron)" Josef Strauss Die Tänzerin (The Dancer). Polka française, op. 227 Johann Strauss Jr. Künstlerleben (An Artist’s Life). Walzer, op. 316 Die Bajadere (The Bayadere). Polka schnell, op. 351 Eduard Strauss Opern-Soiree (Opera Soirée). Polka française, op. 162 Johann Strauss Jr. Eva-Walzer (Eva Waltz). Nach Motiven aus "Ritter Pásmán" Csárdás aus „Ritter Pásmán“, op. 441 Egyptischer Marsch (Egyptian March), op. 335 Josef Hellmesberger, Jr. Entr'acte Valse Johann Strauss Jr. Lob der Frauen (In Praise of Women). Polka mazur, op. 315 Josef Strauss Sphärenklänge (Music of the Spheres). Walzer, op. 235 magnet:?xt=urn:btih:ca5990ce1ba8ad56014fc3250bc9df3e57d17186&dn=ViennaPhilharmonicOrchestra-NewYearsConcert2019withChristianThielemann-MPEG4HD422DUALDolbyE-UPLINKSRSL ### Deep Copy Object with Reflecting or gob in Go In some cases, we need to deep copy the value of one object into another. I have tested both gob encoding and reflection for this. The gob package manages streams of gobs - binary values exchanged between an Encoder (transmitter) and a Decoder (receiver). 
Here is the benchmark file comparing reflection (github.com/mohae/deepcopy) with gob, deepcopy_test.go: package deepcopy import ( "bytes" "encoding/gob" "testing" "github.com/mohae/deepcopy" ) type Basics struct { String string Strings []string StringArr [4]string Bool bool Bools []bool Byte byte Bytes []byte Int int Ints []int Int8 int8 Int8s []int8 Int16 int16 Int16s []int16 Int32 int32 Int32s []int32 Int64 int64 Int64s []int64 Uint uint Uints []uint Uint8 uint8 Uint8s []uint8 Uint16 uint16 Uint16s []uint16 Uint32 uint32 Uint32s []uint32 Uint64 uint64 Uint64s []uint64 Float32 float32 Float32s []float32 Float64 float64 Float64s []float64 Complex64 complex64 Complex64s []complex64 Complex128 complex128 Complex128s []complex128 Interface interface{} Interfaces []interface{} } var src = Basics{ String: "kimchi", Strings: []string{"uni", "ika"}, StringArr: [4]string{"malort", "barenjager", "fernet", "salmiakki"}, Bool: true, Bools: []bool{true, false, true}, Byte: 'z', Bytes: []byte("abc"), Int: 42, Ints: []int{0, 1, 3, 4}, Int8: 8, Int8s: []int8{8, 9, 10}, Int16: 16, Int16s: []int16{16, 17, 18, 19}, Int32: 32, Int32s: []int32{32, 33}, Int64: 64, Int64s: []int64{64}, Uint: 420, Uints: []uint{11, 12, 13}, Uint8: 81, Uint8s: []uint8{81, 82}, Uint16: 160, Uint16s: []uint16{160, 161, 162, 163, 164}, Uint32: 320, Uint32s: []uint32{320, 321}, Uint64: 640, Uint64s: []uint64{6400, 6401, 6402, 6403}, Float32: 32.32, Float32s: []float32{32.32, 33}, Float64: 64.1, Float64s: []float64{64, 65, 66}, Complex64: complex64(-64 + 12i), Complex64s: []complex64{complex64(-65 + 11i), complex64(66 + 10i)}, Complex128: complex128(-128 + 12i), Complex128s: []complex128{complex128(-128 + 11i), complex128(129 + 10i)}, Interfaces: []interface{}{42, true, "pan-galactic"}, } func Benchmark_GOBDeepCopy(b *testing.B) { // use b.N for looping for i := 0; i < b.N; i++ { var dst Basics err := GOBDeepCopy(&dst, &src) if err != nil { b.Error(err) } } } func Benchmark_ReflectDeepCopy(b *testing.B) { // use b.N for 
looping for i := 0; i < b.N; i++ { dst := deepcopy.Copy(src).(Basics) if !dst.Bool { b.Error("reflect deep copy failed") } } } // GOBDeepCopy creates a deep copy of whatever is passed to // it and returns the copy in an interface. The returned value will need to be // asserted to the correct type. func GOBDeepCopy(dst, src interface{}) error { var buf bytes.Buffer if err := gob.NewEncoder(&buf).Encode(src); err != nil { return err } return gob.NewDecoder(bytes.NewBuffer(buf.Bytes())).Decode(dst) } Benchmark and profile CPU and memory usage: $ go test -v -run=^$ -bench=. -benchtime=10s -cpuprofile=prof.cpu -memprofile=prof.mem -memprofilerate=2 goos: darwin goarch: amd64 Benchmark_GOBDeepCopy-4 5000 2918910 ns/op Benchmark_ReflectDeepCopy-4 50000 289784 ns/op PASS ok _/Users/xuri/Desktop/deepcopy 32.421s CPU flame graph Memory flame graph Deep copy via reflection is about 10x faster than gob, and it also uses less memory. ### Wiener Philharmoniker Summer Night Concert Schönbrunn 2018 Date: Thursday, May 31, 2018 Time: 8:30 PM Place: Schönbrunn Palace, Palace Gardens 1130 Vienna Conductor: Valery Gergiev Soloist: Anna Netrebko Program GIOACCHINO ROSSINI Guillaume Tell: Overture: Finale GIUSEPPE VERDI La forza del destino: Overture FRANCESCO CILEA Adriana Lecouvreur: "Io son l'umile ancella" PIETRO MASCAGNI Cavalleria rusticana: Symphonic Interlude GIUSEPPE VERDI Aida Triumphal March Ballet Music PYOTR ILYICH TCHAIKOVSKY Swan Lake: Neapolitan Dance SERGEI PROKOFIEV Romeo and Juliet – Suite No. 2 op. 64b, No. 1. 
Montagues and Capulets GIACOMO PUCCINI Tosca: "Vissi d'arte, vissi d'amore" GIACOMO PUCCINI Manon Lescaut: Intermezzo RUGGERO LEONCAVALLO Pagliacci: "Stridono lassù" Magnet URI: magnet:?xt=urn:btih:8691845e7e58b6a5e207dc6f452a0597e3e5ab27&dn=Sommernachtskonzert in Schönbrunn 2018 General ID : 1 (0x1) Complete name : Sommernachtkonzert in Schönbrunn 2018/Wiener Philharminoker 31.05.2018.ts Format : MPEG-TS File size : 14.1 GiB Duration : 1h 16mn Overall bit rate mode : Variable Overall bit rate : 26.5 Mbps Maximum Overall bit rate : 35.5 Mbps Video ID : 4113 (0x1011) Menu ID : 1 (0x1) Format : AVC Format/Info : Advanced Video Codec Format profile : High 4:2:2@L4 Format settings : CABAC / 3 Ref Frames Format settings, CABAC : Yes Format settings, ReFrames : 3 frames Codec ID : 27 Duration : 1h 16mn Bit rate mode : Constant Bit rate : 24.8 Mbps Width : 1 920 pixels Height : 1 080 pixels Display aspect ratio : 16:9 Frame rate : 25.000 fps Color space : YUV Chroma subsampling : 4:2:2 Bit depth : 8 bits Scan type : MBAFF Scan type, store method : Separated fields Scan order : Top Field First Bits/(Pixel*Frame) : 0.478 Stream size : 12.8 GiB (91%) Color range : Limited Color primaries : BT.709 Transfer characteristics : BT.709 Matrix coefficients : BT.709 Audio #1 ID : 4352 (0x1100) Menu ID : 1 (0x1) Format : AC-3 Format/Info : Audio Coding 3 Codec ID : 129 Duration : 1h 16mn Bit rate mode : Constant Bit rate : 640 Kbps Channel(s) : 6 channels Channel positions : Front: L C R, Side: L R, LFE Sampling rate : 48.0 KHz Frame rate : 31.250 fps (1536 SPF) Bit depth : 16 bits Compression mode : Lossy Delay relative to video : 5ms Stream size : 348 MiB (2%) Language : German Service kind : Complete Main Audio #2 ID : 4353 (0x1101) Menu ID : 1 (0x1) Format : MPEG Audio Format version : Version 1 Format profile : Layer 2 Codec ID : 4 Duration : 1h 16mn Bit rate mode : Constant Bit rate : 384 Kbps Channel(s) : 2 channels Sampling rate : 48.0 KHz Frame rate : 41.667 fps (1152 SPF) 
Compression mode : Lossy Delay relative to video : 1ms Stream size : 209 MiB (1%) Language : English ### Alexander Suleiman Solo Recital in CCOM Alexander Suleiman studied with Eldar Issakadze, Daniil Shafran and Natalia Gutman. Suleiman is one of the most interesting and individual musicians of his generation. He has played with well-known orchestras throughout the world, where he has a reputation as an unusual and original interpreter. But in addition to these performances, he is also strongly motivated by cultural and political interests and is a committed ambassador of European musical culture to the Arab world, Jordan and Israel as well as Russia, the Caucasian countries, Asia, Africa and South America. His passion for contemporary music and his comprehensive mastery of the complete repertoire before 1900 have turned him into an artistic person who avoids well-trodden paths. He frequently performs cello concertos by leading composers of our time, such as Franz Hummel, Erkki-Sven Tüür, Vazha Azarashvili, and Pēteris Vasks, many of whom have dedicated works to him, and many of which have been released as world premiere recordings on CD. His brilliant improvisational skills have long been admired, especially by his fellow musicians. 19:30 on 19 May 2018 1001 Room in Teaching building of China Central Conservatory of Music Piano: Zoe Yucong Wang Program Johannes Brahms: Scherzo in E-flat minor, op. 4 (1851) Robert Schumann: Drei Fantasiestücke Op. 73 (1849) Rodion Shchedrin: In the Style of Albéniz Saint-Saëns: The Swan from Carnival of the Animals Pieces of Sergei Vasilievich Rachmaninoff Astor Piazzolla: Le Grand Tango ### Wiener Philharmoniker Vienna New Year's Concert 2018 The 2018 Vienna Philharmonic New Year's Concert takes place on January 1, 2018, under the baton of Riccardo Muti in the Musikverein in Vienna. 
This year's concert will mark the fifth time - after 1993, 1997, 2000 and 2004 - that Riccardo Muti, whose close ties with the Vienna Philharmonic extend over several decades, conducts this prestigious event. The 2018 New Year's Concert will be broadcast in over 90 countries and followed by as many as 50 million television viewers around the world. Riccardo Muti Over the course of his extraordinary career, Riccardo Muti has conducted the most important orchestras in the world: from the Berlin Philharmonic to the Bavarian Radio Symphony Orchestra, from the New York Philharmonic to the Orchestre National de France, as well as the Vienna Philharmonic, an orchestra to which he is linked by particularly close and important ties, and with which he has appeared in Vienna, Salzburg and around the world since 1971. In September 2010, Riccardo Muti became Music Director of the Chicago Symphony Orchestra and in the same year was named Musician of the Year by Musical America. Program Johann Strauss, Jr. Entrance March from the Operetta "The Gypsy Baron" Josef Strauss Wiener Fresken (Viennese Frescos), Waltz, op. 249 Johann Strauss, Jr. Brautschau (Bride Shopping), Polka, op. 417 Leichtes Blut (Light of Heart), Fast Polka, op. 319 Johann Strauss, sen. 
Marienwalzer (Maria Waltz), op. 212 William Tell Galop, op. 29b Franz von Suppé Overture to "Boccaccio" Johann Strauss, Jr. Myrthenblüten (Myrtle Blossoms), Waltz, op. 395 Alphons Czibulka Stephanie Gavotte, op. 312 Johann Strauss, Jr. Freikugeln (Magic Bullets), Fast Polka, op. 326 Tales from the Vienna Woods, Waltz, op. 325 Fest-Marsch (Festival March), op. 452 Stadt und Land (Town and Country), Polka Mazurka, op. 322 Un ballo in maschera (Masked Ball), Quadrille, op. 272 Rosen aus dem Süden (Roses from the South), Waltz, op. 388 Josef Strauss Eingesendet (Letters to the Editor), Fast Polka, op. 240 magnet:?xt=urn:btih:e9d36353375ad5c9003636fe4393c162c06575a2&dn=ViennaPhilharmonicOrchestra-NewYearsConcert2018withRiccardoMuti-MPEG4HD422DUALDolbyE-UPLINKSRSL ### Install Apache Kafka on Ubuntu Wiki Apache Kafka is a distributed streaming platform. A Kafka cluster is not only highly scalable and fault-tolerant, but it also has a much higher throughput compared to other message brokers such as ActiveMQ and RabbitMQ. Though it is generally used as a pub/sub messaging system, a lot of organizations also use it for log aggregation because it offers persistent storage for published messages. 
Environment Ubuntu Server 17.10 Artful Aardvark 4GB RAM Scala 2.11 Kafka 1.0.0 Install Java Before installing additional packages, update the list of available packages so you are installing the latest versions available in the repository: $ sudo apt-get update As Apache Kafka needs a Java runtime environment, use apt-get to install the default-jre package: $ sudo apt-get install default-jre $ java -version openjdk version "1.8.0_151" OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-0ubuntu0.17.10.2-b12) OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode) Create a user called kafka using the useradd command: $ sudo useradd kafka -m Set its password using passwd: $ sudo passwd kafka Add it to the sudo group so that it has the privileges required to install Kafka's dependencies. This can be done using the adduser command: $ sudo adduser kafka sudo The Kafka user is now ready. Log into it using su: $ su - kafka Install ZooKeeper Apache ZooKeeper is an open source service built to coordinate and synchronize configuration information of nodes that belong to a distributed system. A Kafka cluster depends on ZooKeeper to perform—among other things—operations such as detecting failed nodes and electing leaders. Since the ZooKeeper package is available in Ubuntu's default repositories, install it using apt-get. $ sudo apt-get install zookeeperd After the installation completes, ZooKeeper will be started as a daemon automatically. By default, it will listen on port 2181. To make sure that it is working, connect to it via Telnet: $ telnet localhost 2181 At the Telnet prompt, type in ruok and press ENTER. If everything's fine, ZooKeeper will say imok and end the Telnet session. $ echo ruok | nc localhost 2181 Download and Extract Kafka Binaries Now that Java and ZooKeeper are installed, it is time to download and extract Kafka. To start, create a directory called Downloads to store all your downloads. $ mkdir -p ~/Downloads Use wget to download the Kafka binaries. 
$ wget "http://mirror.bit.edu.cn/apache/kafka/1.0.0/kafka_2.11-1.0.0.tgz" -O ~/Downloads/kafka.tgz Create a directory called kafka and change to this directory. This will be the base directory of the Kafka installation. $ mkdir -p ~/kafka && cd ~/kafka Extract the archive you downloaded using the tar command. $ tar -xvzf ~/Downloads/kafka.tgz --strip 1 Configure the Kafka Server The next step is to configure the Kafka server; open server.properties in a text editor: $ vim ~/kafka/config/server.properties By default, Kafka doesn't allow you to delete topics. To be able to delete topics, add the following line at the end of the file: delete.topic.enable=true Save the file and exit the text editor. Start the Kafka Server Run the kafka-server-start.sh script using nohup to start the Kafka server (also called Kafka broker) as a background process that is independent of your shell session. $ ~/kafka/bin/kafka-server-start.sh ~/kafka/config/server.properties Run the service as a daemon: $ nohup ~/kafka/bin/kafka-server-start.sh ~/kafka/config/server.properties > ~/kafka/kafka.log 2>&1 & Wait for a few seconds for it to start. You can be sure that the server has started successfully when you see the following messages in ~/kafka/kafka.log: [2017-12-09 17:16:29,521] INFO [KafkaServer id=0] started (kafka.server.KafkaServer) You now have a Kafka server which is listening on port 9092. Test the Installation Let us now create a topic named test with a single partition and only one replica: $ ~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test We can now see that topic if we run the list topic command: $ ~/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181 Alternatively, instead of manually creating topics you can also configure your brokers to auto-create topics when a non-existent topic is published to. Send some messages Run the producer and then type a few messages into the console to send to the server. 
$ ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test >This is a message >This is another message Kafka also has a command line consumer that will dump out messages to standard output. $ ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning This is a message This is another message Restrict the Kafka User Now that all installations are done, you can remove the kafka user's admin privileges. Before you do so, log out and log back in as any other non-root sudo user. If you are still running the same shell session you started the installation with, simply type exit. To remove the Kafka user's admin privileges, remove it from the sudo group. $ sudo deluser kafka sudo To further improve your Kafka server's security, lock the kafka user's password using the passwd command. This makes sure that nobody can directly log into it. $ sudo passwd kafka -l At this point, only root or a sudo user can log in as kafka by typing in the following command: $ sudo su - kafka In the future, if you want to unlock it, use passwd with the -u option: $ sudo passwd kafka -u Conclusion You now have a secure Apache Kafka running on your Ubuntu server. You can easily make use of it in your projects by creating Kafka producers and consumers using Kafka clients which are available for most programming languages. To learn more about Kafka, do go through its documentation. ### Non-Deterministic Finite Automata in Regular Expression Engine Regular Expression Engine Regular expression engines can be divided into two categories: DFA (Deterministic finite automaton) and NFA (Non-deterministic finite automaton). The NFA can be divided into Traditional NFA and POSIX NFA. Backtracking is used in NFA engines, while a DFA never backtracks, so a DFA is generally faster than an NFA; on the other hand, a DFA doesn't support capture groups. Languages and tools that use the DFA: Awk, egrep and Lex. 
POSIX NFA POSIX or "Portable Operating System Interface for UNIX" is a collection of standards that define some of the functionality that a (UNIX) operating system should support. One of these standards defines two flavors of regular expressions. Commands involving regular expressions, such as grep and egrep, implement these flavors on POSIX-compliant UNIX systems. Several database systems also use POSIX regular expressions. A POSIX NFA must return the longest-leftmost match, just as a DFA does: since it's the longest from among all possible matches that start equally furthest to the left, it's the "longest-leftmost" match. Most languages and tools use the traditional NFA engine, which has some features not supported by DFA: Capture groups and backreferences Lookaround (zero-width assertions): (?<=...), (?<!...), (?=...), (?!...) Non-greedy (lazy) quantifiers: ??, *?, +?, {m,n}?, {m,}? The composition of the string The string abc includes three characters and four positions ([position 0]a[position 1]b[position 2]c[position 3]). Character possession and zero-width In the regular expression matching process, if a sub-expression matches characters (rather than a position) and the matched content is saved to the final match result, that sub-expression is character-possessing. If the sub-expression matches only a position, or the matched content isn't saved in the final match result, that sub-expression is zero-width. Character possession is exclusive; zero-width is non-exclusive. In other words, a character can be matched by only one character-possessing sub-expression at a time, while a position can be matched by multiple zero-width sub-expressions at the same time. Control and drive In the regular matching process, a sub-expression (which may be a normal character, a meta-character, or a meta-character sequence) gets control and tries to match from some position in the string. 
A sub-expression begins trying to match at the position where the previous sub-expression's successful match ended. Take the regular expression: (sub-expression 1)(sub-expression 2) If (sub-expression 1) is zero-width, it matches beginning and end at the same position, for example position 0, so (sub-expression 2) tries to match from position 0. If (sub-expression 1) is a character-possessing expression, it matches beginning and end at different positions; for example, if the successful match starts at position 0 and ends at position 2, then (sub-expression 2) starts matching from position 2. The entire expression usually tries to match from position 0. If the entire expression fails to match at some position, the regular engine drives forward: the entire expression retries matching from position 1, and so on, until a match succeeds or the match at the last position fails. Basic matching Source string: abc Regular expression: abc Matching process: The character a gets control and begins matching at position 0; a matches a successfully, then the character b gets control and matches b from position 1, and the character c gets control and matches c. The regular expression matching is complete and succeeds. The matching result is abc; the starting position is 0 and the ending position is 3. Match priority quantifiers Source string: abc Regular expression: ab?c The quantifier ? is a match-priority (greedy) quantifier: it first attempts to match, and only gives up the match if that attempt fails. The quantifier ? modifies the character b, so b? is treated as a unit. Matching process: The character a gets control and begins matching at position 0; a matches a successfully, then b? gets control, and since ? is a match-priority quantifier it tries to match; b? 
matches b successfully while recording an alternative state (matching nothing) at the same time; the character c then gets control, c matches c successfully, and the alternative state is discarded. The regular expression matching is complete and succeeds. The matching result is abc; the starting position is 0 and the ending position is 3. Source string: ac Regular expression: ab?c Matching process: The character a gets control and begins matching at position 0; a matches a successfully, then b? gets control and tries to match; b? fails to match c, so the engine backtracks to the recorded alternative state (b? matches nothing), b? finishes and gives up control, and the character c gets control and matches c. Zero-width matching Source string: a12 Regular expression: ^(?=[a-z])[a-z0-9]+$ The meta-characters ^ and $ match only positions. The lookahead (?=[a-z]) only checks; it doesn't consume characters and doesn't save the matched content to the result, so it's zero-width. Matching process: The meta-character ^ gets control and matches at position 0. The lookahead (?=[a-z]) gets control; it requires a letter immediately to the right of its location. Since zero-width sub-expressions are non-exclusive (the same position can be matched by multiple zero-width sub-expressions at the same time), it also starts matching from position 0. The character to the right of position 0 is a, so the match succeeds; the lookahead matches only the position and doesn't save the matched content to the result, so its match result is position 0. [a-z0-9]+ gets control and also begins matching from position 0: it matches a successfully, then matches 1 and 2, reaching position 3, where there is no character to the right. The meta-character $ gets control, begins matching at position 3, and succeeds. The regular expression matching is complete and succeeds. 
The matching result is a12; the starting position is 0 and the ending position is 3. ^ and (?=[a-z]) matched position 0, [a-z0-9]+ matched the string a12, and $ matched position 3. ### The 10th Annual International Chamber Music Festival of the China Central Conservatory of Music Waratah String Quartet 7:30 PM 5th Nov. 2017 Concert Hall of CCOM Program Violin I: Wang Yi Violin II: Tomomi Kondo Brennan Viola: Jason Bunn Cello: Melissa Chominsky String Quartet No. 66 in G Major, Op. 77, No. 1 Josef Haydn I. Allegro moderato II. Adagio III. Presto IV. Vivace String Quartet No. 8 in C Minor, Op. 110 Dmitri Dmitriyevich Shostakovich I. Largo II. Allegro molto III. Allegretto IV. Largo V. Largo -- Intermission -- String Quartet No. 12 in F Major, Op. 96 "American" Antonín Leopold Dvořák I. Allegro, ma non troppo II. Lento III. Molto Vivace IV. Vivace, ma non troppo Jasmine Flower ### Memcache Internals Memcached, a distributed memory caching system, is often used to increase the performance and availability of a hosted application by decreasing database load. It creates a common cache for all the application nodes and represents your application's short-term memory. Performance Most Memcache operations (add, get, set, flush, etc.) are \(O(1)\). This means they are constant-time functions: it does not matter how many items there are inside the cache, the functions take just as long as they would with just 1 item inside the cache. Memcache uses the LRU algorithm to evict data from a slab. Internally, all objects have a "counter" that holds a timestamp. Every time a new object is created, that counter is set to the current time. When an object gets fetched, its counter is reset to the current time as well. As soon as Memcache needs to "evict" an object to make room for newer objects, it will find the lowest counter. 
That is the object that isn't fetched, or was fetched longest ago (and probably isn't needed that much, otherwise its counter would be closer to the current timestamp). In effect, this creates a simple system that uses the cache very efficiently: if an item isn't used, it's kicked out of the system. Memory Allocation Memcached uses slab allocation instead of item-by-item memory allocation. As a result, it improves memory usage and protects against fragmentation when data expires from the cache. Each slab consists of several 1 MB pages, and each page, in its turn, consists of an equal number of blocks, or chunks. Whenever data is stored, Memcached determines the data size and looks for a suitable allocation in all slabs. If such an allocation exists, the data is written to it. If there is no suitable allocation, Memcached creates a new slab and divides it into blocks of the necessary size. If you update an already stored item and its new value exceeds the size of the block allocation it was stored in before, Memcached moves it to another, suitable slab. +---------------------------------------+ | Page | +-------+-------+-------+-------+-------+ < Slab Class #1 | Chunk | Chunk | Chunk | Chunk | Chunk | +-------+-------+-------+-------+-------+ +---------------------------------------+ | Page | +-------+-------+-------+-------+-------+ < Slab Class #1 | Chunk | Chunk | Chunk | Chunk | Chunk | +-------+-------+-------+-------+-------+ +---------------------------------------+ | Page | +-------+-------+-------+-------+-------+ < Slab Class #2 | Chunk | Chunk | Chunk | Chunk | Chunk | +-------+-------+-------+-------+-------+ +---------------------------------------+ | Page | +-------+-------+-------+-------+-------+ < Slab Class #1 | Chunk | Chunk | Chunk | Chunk | Chunk | +-------+-------+-------+-------+-------+ As a result, every instance has multiple pages distributed and allocated within the Memcached memory. 
This method of allocation prevents memory fragmentation. On the other hand, it can waste memory if you do not have enough items of an equal allocation size, i.e. there are only a few filled chunks on every page. Thus one more important point is the distribution of the stored items. Reference slabs.c typedef struct { unsigned int size; /* sizes of items */ unsigned int perslab; /* how many items per slab */ void *slots; /* list of item ptrs */ unsigned int sl_curr; /* total free items in list */ unsigned int slabs; /* how many slabs were allocated for this class */ void **slab_list; /* array of slab pointers */ unsigned int list_size; /* size of prev array */ size_t requested; /* The number of requested bytes */ } slabclass_t; Slab Allocation When Memcache starts, it partitions its allocated memory into smaller parts called pages. Each page is 1MB large (which is also the maximum size of an object you can store in Memcache). Each of those pages can be assigned to a slab-class or can be unassigned (being a free page). A slab-class decides how large the objects stored inside that particular page can be. Each page that is designated to a particular slab-class will be divided into smaller parts called chunks. The chunks in each slab have the same size, so there cannot be two different-sized chunks on the same page. For instance, there could be a page with 64-byte chunks (slab class 1), a page with 128-byte chunks (slab class 2) and so on, until we get the largest slab with only 1 chunk (the 1MB chunk). There can be multiple pages for each slab-class, but as soon as a page is assigned a slab-class (and thus, split up into chunks), it cannot be changed to another slab-class. The smallest chunk size starts at 96 bytes by default and increases by a factor of 1.25 (rounded up to the next 8-byte alignment boundary). So the second smallest chunk size is 120 bytes, etc. You can see the actual sizes by issuing the "-vv" flag when starting memcache. 
You can also set the growth factor (-f) and the minimum chunk size (-n), but unless you really know what you are doing, don't change the initial values. $ memcached -vv slab class 1: chunk size 96 perslab 10922 slab class 2: chunk size 120 perslab 8738 slab class 3: chunk size 152 perslab 6898 slab class 4: chunk size 192 perslab 5461 slab class 5: chunk size 240 perslab 4369 slab class 6: chunk size 304 perslab 3449 slab class 7: chunk size 384 perslab 2730 slab class 8: chunk size 480 perslab 2184 slab class 9: chunk size 600 perslab 1747 Reference slabs.c static void *do_slabs_alloc(const size_t size, unsigned int id, uint64_t *total_bytes, unsigned int flags) { slabclass_t *p; void *ret = NULL; item *it = NULL; if (id < POWER_SMALLEST || id > power_largest) { MEMCACHED_SLABS_ALLOCATE_FAILED(size, 0); return NULL; } p = &slabclass[id]; assert(p->sl_curr == 0 || ((item *)p->slots)->slabs_clsid == 0); if (total_bytes != NULL) { *total_bytes = p->requested; } assert(size <= p->size); /* fail unless we have space at the end of a recently allocated page, we have something on our freelist, or we could allocate a new page */ if (p->sl_curr == 0 && flags != SLABS_ALLOC_NO_NEWPAGE) { do_slabs_newslab(id); } if (p->sl_curr != 0) { /* return off our freelist */ it = (item *)p->slots; p->slots = it->next; if (it->next) it->next->prev = 0; /* Kill flag and initialize refcount here for lock safety in slab * mover's freeness detection. */ it->it_flags &= ~ITEM_SLABBED; it->refcount = 1; p->sl_curr--; ret = (void *)it; } else { ret = NULL; } if (ret) { p->requested += size; MEMCACHED_SLABS_ALLOCATE(size, id, p->size, ret); } else { MEMCACHED_SLABS_ALLOCATE_FAILED(size, id); } return ret; } Consistent Hashing Remainder Hash What memcache normally does is a simple yet very effective load-balancing trick: for each key that gets stored or fetched, it creates a hash (you might think of it as md5(key), but in fact it's a more specialized - quicker - hash method). 
Now, the hashes we create are pretty much evenly distributed, so we can use a modulus function to find out which server to store the object on:

node_id = hash_key % len(nodes)

The trouble with this system: as soon as the number of servers changes, almost 100% of all keys will change server as well. Maybe some keys will map to the same server id, but that would be a coincidence. In effect, when you change your memcache server count (either up or down, it doesn't matter), you get a big stampede on your backend systems, since all keys are invalidated at once.

Consistent Hashing

Consistent hashing arranges hash values on a circle that acts like a clock face. Hash values \([0, 2^{32}-1]\) are distributed over the circle, and each object is assigned to the cache server that is closest in the clockwise direction. All nodes get a relatively equal number of keys, and nodes can be added or removed such that only the fewest possible keys are moved around.

Memcached Statistics Commands

- accepting_conns (1): The Memcached server is currently accepting new connections.
- auth_cmds (0): Number of authentication commands processed by the server, if you use authentication within your installation. The default is IP (routing) level security, which speeds up actual Memcached usage by removing the authentication requirement.
- auth_errors (0): Number of failed authentication attempts by clients.
- bytes (6775829): Number of bytes currently used for caching items; this server currently uses ~6 MB of its maximum allowed (limit_maxbytes) 1 GB cache size.
- bytes_read (880545081): Total number of bytes received from the network by this server.
- bytes_written (3607442137): Total number of bytes sent to the network by this server.
- cas_badval (0): The cas command is Memcached's way of avoiding locking; cas calls with a bad identifier are counted in this stat.
- cas_hits (0): Number of successful cas commands.
- cas_misses (0): cas calls fail if the value has been changed since it was requested from the cache. We're currently not using cas at all, so all three cas values are zero.
- cmd_flush (0): The flush_all command clears the whole cache and shouldn't be used during normal operation.
- cmd_get (1626823): Number of get commands received since server startup, whether they were successful or not.
- cmd_set (2279784): Number of set commands serviced since startup.
- connection_structures (42): Number of internal connection handles currently held by the server. May be used as a rough "maximum parallel connection count", but the server may destroy connection structures, or prepare some without having actual connections for them. 42 maximum connections versus 34 current connections (curr_connections) sounds reasonable; the live servers also have about 10% more connection_structures than curr_connections.
- conn_yields (1): Memcached has a configurable maximum number of requests per event (-R command line argument); this counter shows the number of times any client hit this limit.
- curr_connections (34): Number of open connections to this Memcached server; should be roughly the same value on all servers during normal operation. This is something like the row count of MySQL's SHOW PROCESSLIST result.
- curr_items (30345): Number of items currently in this server's cache. The production system of this development environment holds more than 8 million items.
- decr_hits (0): The decr command decreases a stored (integer) value. A hit is a decr call to an existing key.
- decr_misses (0): decr command calls to undefined keys.
- delete_hits (138707): Stored keys may be deleted using the delete command. This system doesn't delete cached data itself, but it uses Memcached to avoid recaching races; the race keys are deleted once the race is over and fresh content has been cached.
- delete_misses (107095): Number of delete commands for keys not existing within the cache. These 107k failed deletes are deletions of non-existent race keys (see above).
- evictions (0): Number of objects removed from the cache to free up memory for new items, because Memcached reached its maximum memory setting (limit_maxbytes).
- get_hits (391283): Number of successful get commands (cache hits) since startup; divide by the cmd_get value to get the cache hit rate. This server was able to serve 24% of its get requests from the cache; the live servers of this installation usually have more than 98% hits.
- get_misses (1235540): Number of failed get requests, because nothing was cached for the key or the cached value was too old.
- incr_hits (0): Number of successful incr commands processed. incr increments a stored value, failing if no value is stored. This specific installation (currently) doesn't use incr/decr commands, so all their values are zero.
- incr_misses (0): Number of failed incr commands (see incr_hits).
- limit_maxbytes (1073741824): Maximum configured cache size (set on the command line while starting the memcached server); look at the bytes value for the actual usage.
- listen_disabled_num (0): Number of denied connection attempts because memcached reached its configured connection limit (-c command line argument).
- pid (24040): Current process ID of the Memcached task.
- pointer_size (64): Number of bits of the host system; may show 32 instead of 64 if the running Memcached binary was compiled for 32-bit environments and is running on a 64-bit system.
- reclaimed (14740): Number of times a write command to the cache reused memory from another, expired key. These are not storage operations deleting old items due to a full cache.
- rusage_system (310.030000): Number of system CPU seconds for this server process.
- rusage_user (103.230000): Number of user CPU seconds for this server process.
- threads (4): Number of threads used by the current Memcached server process.
- time (1323008181): Current unix timestamp of the Memcached server.
- total_connections (27384): Number of successful connect attempts to this server since it was started. Roughly $number_of_connections_per_task * $number_of_webserver_tasks * $number_of_webserver_restarts.
- total_items (323615): Number of items ever stored on this server. This is not a "maximum item count" value, but a counter increased by every new item stored in the cache.
- uptime (1145873): Number of seconds the Memcached server has been running since the last restart. 1145873 / (60 * 60 * 24) = ~13 days since this server was restarted.
- version (1.5.2): Version number of the server.

### Nicholas Daniel Solo Recital in China Central Conservatory of Music

Nicholas Daniel is a British oboist and conductor. In 2003 he was appointed Artistic Director of the Leicester International Music Festival. Nicholas Daniel is proud to play the world's first 'fair-trade' oboe, certified by the Forest Stewardship Council and made especially for him by Marigaux in Paris. At the concert, Nicholas Daniel presented the oboe Marigaux made specially for him, with its glass and plastic construction.

19:30 on 14 Sep, 2017, Recital Hall in Music Practice Building of China Central Conservatory of Music. Piano: Rosa Liu

Program (partial)

- Gilles Silvestrini: Six Oboe Etudes
- Robert Schumann: 3 Romanzen, Op. 94 (1849) - II Einfach, innig

### I/O Multiplexing in Linux

I will talk about synchronous and asynchronous I/O, and blocking and non-blocking network I/O, in this article. We need to understand a few concepts first:

User and Kernel Mode

In any modern operating system, the CPU actually spends time in two very distinct modes:

Kernel Mode

In kernel mode, the executing code has complete and unrestricted access to the underlying hardware. It can execute any CPU instruction and reference any memory address. Kernel mode is generally reserved for the lowest-level, most trusted functions of the operating system.
Crashes in kernel mode are catastrophic; they will halt the entire machine.

User Mode

In user mode, the executing code has no ability to directly access hardware or reference memory. Code running in user mode must delegate to system APIs to access hardware or memory. Due to the protection afforded by this sort of isolation, crashes in user mode are always recoverable. Most of the code running on your computer executes in user mode.

Process Switching

A process switch is an operating-system scheduler change from one running program to another. This requires saving all of the state of the currently executing program, including its register state, associated kernel state, and all of its virtual memory configuration. A process switch goes through the following steps:

- Save the processor context, including the program counter and other registers
- Update the process control block (PCB) information
- Move the PCB of the process into the appropriate queue, such as the ready queue or an event (blocked) queue
- Select another process to execute and update its PCB
- Update the memory-management data structures
- Restore the context of the newly selected process

Blocked Processes

A blocked process is usually waiting for an event, such as a semaphore being released or a message arriving in its message queue. In multitasking systems, such processes are expected to notify the scheduler with a system call that they need to wait, so that they can be removed from the active scheduling queue until the event occurs. A process that continues to run while waiting (i.e., continuously polling for the event in a tight loop) is said to be busy-waiting, which is undesirable because it wastes clock cycles that could be used by other processes. While a process is in the blocked state, it does not occupy CPU resources.

Buffered I/O

Buffered output streams accumulate write results into an intermediate buffer, sending it to the OS file system only when enough data has accumulated (or flush() is requested).
This reduces the number of file system calls. Since file system calls can be expensive on most platforms (compared to a short memcpy), buffered output is a net win when performing a large number of small writes. Unbuffered output is generally better when you already have large buffers to send: copying to an intermediate buffer will not reduce the number of OS calls further, and it introduces additional work. Data being transferred has to be copied between buffers, and these copy operations carry a considerable CPU and memory overhead.

File Descriptor (FD)

In Unix and related computer operating systems, a file descriptor (FD, less frequently fildes) is an abstract indicator (handle) used to access a file or other input/output resource, such as a pipe or network socket. File descriptors form part of the POSIX application programming interface. A file descriptor is a non-negative index value, and much low-level software is built on top of it.

When a read operation occurs, the data goes through two phases:

- Waiting for the data to be ready
- Copying the data from the kernel to the process

Because of these two phases, Linux provides the following five network I/O models:

Blocking I/O

The most prevalent model for I/O is the blocking I/O model (which we have used for all our examples in the previous sections). By default, all sockets are blocking.

Nonblocking I/O

When a socket is set to be nonblocking, we are telling the kernel: "when an I/O operation that I request cannot be completed without putting the process to sleep, do not put the process to sleep, but return an error instead".

I/O Multiplexing

With I/O multiplexing, we call select or poll and block in one of these two system calls, instead of blocking in the actual I/O system call.
Signal-Driven I/O (uncommonly used)

The kernel raises a signal (SIGIO) when something happens to a file descriptor: first establish a signal handler for SIGIO, then set the socket owner, and finally enable signal-driven I/O for the socket.

Asynchronous I/O

Asynchronous I/O is defined by the POSIX specification, and the various differences in the real-time functions that appeared in the various standards which came together to form the current POSIX specification have been reconciled.

select, poll and epoll

select, poll and epoll are I/O multiplexing interfaces.

select

```c
#include <sys/select.h>
#include <sys/time.h>

int select(int nfds, fd_set *readfds, fd_set *writefds,
           fd_set *exceptfds, struct timeval *timeout);
/* Returns: positive count of ready descriptors, 0 on timeout, -1 on error */
```

select() overwrites the fd_set variables whose pointers are passed in as the arguments readfds, writefds and exceptfds, telling it what to wait for. This forces a typical loop to either keep a backup copy of the variables, or even worse, to re-populate the bitmasks every time select() is called. And since select() uses fixed-size bitmasks (1024 bits) for file descriptor information, it is much less convenient.

poll

```c
#include <sys/poll.h>

int poll(struct pollfd *fdarray, unsigned long nfds, int timeout);
/* Returns: count of ready descriptors, 0 on timeout, -1 on error */
```

poll() provides functionality similar to select(), but uses an array of pollfd structures instead of select()'s three bitmaps:

```c
struct pollfd {
    int   fd;      /* file descriptor */
    short events;  /* requested events to watch */
    short revents; /* returned events witnessed */
};
```

poll() handles many file handles, more than the default limit of 1024, without any particular workarounds. And poll() doesn't destroy the input data, so the same input array can be used over and over.
epoll

```c
#include <sys/epoll.h>

int epoll_create(int size);
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);
```

epoll is the latest polling method in Linux (and Linux only). Well, it was actually added to the kernel in 2002, so it is not that new. It differs from both poll and select in that it keeps the information about the currently monitored descriptors and their associated events inside the kernel, and exports an API to add, remove and modify those. To use epoll, more preparation is needed. A developer needs to:

- Create the epoll descriptor by calling epoll_create.
- Initialize a struct epoll_event with the wanted events and a context data pointer. The context can be anything; epoll passes this value straight through to the returned events structure — for example, a pointer to a connection object.
- Call epoll_ctl(... EPOLL_CTL_ADD) to add the descriptor to the monitoring set.
- Call epoll_wait(), reserving storage space for, say, 20 events. Unlike the previous methods, this call receives an empty structure and fills it in only with the triggered events. For example, if there are 200 descriptors and 5 of them have events pending, epoll_wait will return 5, and only the first five members of the events array will be initialized. If 50 descriptors have events pending, the first 20 will be copied and the other 30 will stay queued in the kernel; they won't get lost.
- Iterate through the returned items. This will be a short iteration, since the only events returned are those which were triggered.

The epoll_event structure looks like this:

```c
struct epoll_event {
    __uint32_t   events;  /* Epoll events */
    epoll_data_t data;    /* User data variable */
};
```

epoll provides both edge-triggered and level-triggered modes.
In edge-triggered mode, a call to epoll_wait will return only when a new event is enqueued with the epoll object, while in level-triggered mode, epoll_wait will keep returning as long as the condition holds.

Polling with libevent

The libevent API provides a mechanism to execute a callback function when a specific event occurs on a file descriptor or after a timeout has been reached. Furthermore, libevent also supports callbacks triggered by signals or regular timeouts.

Reference: Linux Programmer's Manual

### Goroutine, Concurrency and Parallelism

Concurrency is not parallelism. Parallelism is when two or more threads are executing code simultaneously on different processors. If you configure the runtime to use more than one logical processor, the scheduler will distribute goroutines between these logical processors, which will result in goroutines running on different operating system threads. However, to have true parallelism you need to run your program on a machine with multiple physical processors. If not, then the goroutines will run concurrently on a single physical processor, even though the Go runtime is using multiple logical processors.

About the Goroutine Scheduler

There are three usual models for threading. One is N:1, where several userspace threads run on one OS thread. This has the advantage of very quick context switches, but cannot take advantage of multi-core systems. Another is 1:1, where one thread of execution matches one OS thread. It takes advantage of all of the cores on the machine, but context switching is slow because it has to trap through the OS. Go tries to get the best of both worlds by using an M:N scheduler. It schedules an arbitrary number of goroutines onto an arbitrary number of OS threads. You get quick context switches and you take advantage of all the cores in your system. The main disadvantage of this approach is the complexity it adds to the scheduler.
To accomplish the task of scheduling, the Go scheduler uses three main entities: M, P and G.

- The M represents an OS thread. It's the thread of execution managed by the OS and works pretty much like your standard POSIX thread.
- The G represents a goroutine. It includes the stack, the instruction pointer and other information important for scheduling goroutines, like any channel it might be blocked on.
- The P (processor) represents a context for scheduling. You can look at it as a localized version of the scheduler which runs Go code on a single thread. It's the important part that lets us go from an N:1 scheduler to an M:N scheduler.

Look at the following code:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	runtime.GOMAXPROCS(1)
	wg := sync.WaitGroup{}
	wg.Add(20)
	for i := 0; i < 10; i++ {
		go func() {
			fmt.Println("go routine 1 i: ", i)
			wg.Done()
		}()
	}
	for i := 0; i < 10; i++ {
		go func(i int) {
			fmt.Println("go routine 2 i: ", i)
			wg.Done()
		}(i)
	}
	wg.Wait()
}
```

The output of the above code looks like this:

```
go routine 2 i: 9
go routine 1 i: 10
go routine 1 i: 10
go routine 1 i: 10
go routine 1 i: 10
go routine 1 i: 10
go routine 1 i: 10
go routine 1 i: 10
go routine 1 i: 10
go routine 1 i: 10
go routine 1 i: 10
go routine 2 i: 0
go routine 2 i: 1
go routine 2 i: 2
go routine 2 i: 3
go routine 2 i: 4
go routine 2 i: 5
go routine 2 i: 6
go routine 2 i: 7
go routine 2 i: 8
```

Concurrency in Go means that some of the functions in the code can run at the same time logically, but will not necessarily run at the same time physically. The call runtime.GOMAXPROCS(1) sets the number of Ps to 1, which binds all goroutines to the same P. When a new G is created or an existing G becomes runnable, it is pushed onto the list of runnable goroutines of the current P. When a P finishes executing a G, it first tries to pop a G from its own list of runnable goroutines; if the list is empty, the P chooses a random victim (another P) and tries to steal half of its runnable goroutines.
This is why the last goroutine created (goroutine 2 with i = 9) is output first. When we use goroutines with loop iterator variables, the i variable in the first loop isn't passed as an argument to the anonymous function. Because the goroutines share that variable's memory, each of them reads i through the same address, and the loop has already finished by the time they run, so goroutine 1 always sees the final value, 10. In the second loop, by adding i as a parameter to the closure, i is evaluated at each iteration and placed on the stack for the goroutine, so each value is available to the goroutine when it is eventually executed, and the variable is not shared between iterations. (Since Go 1.22 the loop variable is scoped per iteration, so the first loop would also print 0 through 9.)

Reference

- Rob Pike: Go Concurrency Patterns
- Rob Pike: Concurrency is not Parallelism
- Dmitry Vyukov: Scalable Go Scheduler Design Doc
- Wiki: CommonMistakes
- Wiki: LearnConcurrency

### TCP Congestion Control Algorithm

Hybla

Hybla aims to eliminate the penalization of TCP connections that incorporate a high-latency terrestrial or satellite radio link, due to their longer round-trip times. It stems from an analytical evaluation of the congestion window dynamics, which suggests the necessary modifications to remove the performance dependence on RTT. Hybla scales the window increment rule to ensure fairness among flows with different RTTs. Hybla behaves like TCP NewReno when the RTT of a flow is less than a certain reference RTT (e.g. 25 ms). Otherwise, TCP Hybla increases the congestion window size more aggressively, to compensate for the throughput drop caused by the larger RTT.

Main algorithm: equalize performance for connections with different RTTs. The cwnd increase per ACK is:

\[cwnd = cwnd + 2^p - 1\ (slow\ start)\]

\[cwnd = cwnd + \frac{p^2}{cwnd}\ (congestion\ avoidance)\]

where \(p\) is a normalized \(RTT\):

\[p = \frac{RTT}{RTT_0}\]

\(RTT_0\) is set to 25 ms in Linux. Designed for connections with long RTTs.
CUBIC

The protocol modifies the linear window growth function of existing TCP standards to be a cubic function, in order to improve the scalability of TCP over fast, long-distance networks. It also achieves more equitable bandwidth allocation among flows with different RTTs by making the window growth independent of RTT, so those flows grow their congestion windows at the same rate. During steady state, CUBIC increases the window size aggressively when the window is far from the saturation point, and slowly when it is close to the saturation point. Because it stays longer near the previous saturation point than other variants, it can be sluggish in finding the new saturation point if that point has moved far beyond the last one. The key feature of CUBIC is that its window growth depends only on the real time between two consecutive congestion events. When RTTs are short, since the window growth rate is fixed, its growth rate could be slower than standard TCP; extensive testing has shown this trade-off to be acceptable given the increased stability of CUBIC. CUBIC replaced BIC-TCP as the default algorithm in the Linux kernel in 2006, after version 2.6.18.

NewReno

TCP New Reno, defined by RFC 6582 (which obsoletes the previous definitions in RFC 3782 and RFC 2582), improves retransmission during the fast-recovery phase of TCP Reno. During fast recovery, for every duplicate ACK that is returned to TCP New Reno, a new unsent packet from the end of the congestion window is sent, to keep the transmit window full. For every ACK that makes partial progress in the sequence space, the sender assumes that the ACK points to a new hole, and the next packet beyond the ACKed sequence number is sent. Because the timeout timer is reset whenever there is progress in the transmit buffer, this allows New Reno to fill large holes, or multiple holes, in the sequence space – much like TCP SACK.
Because New Reno can send new packets at the end of the congestion window during fast recovery, high throughput is maintained during the hole-filling process, even when there are multiple holes of multiple packets each. When TCP enters fast recovery, it records the highest outstanding unacknowledged packet sequence number. When this sequence number is acknowledged, TCP returns to the congestion-avoidance state. A problem occurs with New Reno when there are no packet losses, but packets are instead reordered by more than 3 packet sequence numbers. When this happens, New Reno mistakenly enters fast recovery, but when the reordered packet is delivered, ACK sequence-number progress occurs, and from there until the end of fast recovery every bit of sequence-number progress produces a duplicate and needless retransmission that is immediately ACKed. New Reno performs as well as SACK at low packet error rates, and substantially outperforms Reno at high error rates.

BBR

Google contributed BBR ("Bottleneck Bandwidth and RTT"), a new congestion control algorithm, to the Linux kernel TCP stack. The commit description in the Linux TCP BBR commit describes the background, motivation, design, and example performance results for BBR.

Enable Congestion Control Algorithm

Note that BBR requires Linux kernel version 4.9 or later. Check the kernel with the command uname -r, then modify the kernel parameters in /etc/sysctl.conf:

```
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr
```

Save and apply the changes with the command sudo sysctl -p, then check the configuration:

```
$ sysctl net.ipv4.tcp_available_congestion_control
$ sysctl net.ipv4.tcp_congestion_control
```

Configure kernel modules to load at boot by adding tcp_bbr to the file /etc/modules-load.d/modules.conf.
Optimization of algorithms between different regions:

| Bandwidth | Delay | Loss Tolerance | Recommended Algorithm | Example Network |
|---|---|---|---|---|
| High | Low | Low | Hybla | America - Asia |
| High | High | Low | BBR | America - Europe |
| High or Low | High or Low | High | CUBIC | Asia - Asia |

Reference

- Injong Rhee and Lisong Xu: CUBIC: A New TCP-Friendly High-Speed TCP Variant
- TCP Hybla: a TCP enhancement for heterogeneous networks
- BBR: Congestion-Based Congestion Control
- Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, Van Jacobson: BBR Congestion Control

### All Mozart Program Concert in FCCH

Forbidden City Concert Hall (FCCH), located inside the beautiful Zhongshan Park, is a small concert hall hosting mainly classical concerts and performances.

Time: 19:30, Thursday, June 8, 2017
Orchestra: Beijing Symphony Orchestra
Conductor: Yaron Gottfried
Clarinet: Ron Selka

Ron Selka

Ron Selka, Principal Clarinetist of the Israel Philharmonic Orchestra, has appeared in recitals and chamber music concerts in Europe, the USA and Israel. He has performed as a soloist with the Israel Philharmonic Orchestra under the baton of Maestro Zubin Mehta, as well as with the Jerusalem Symphony Orchestra, the Israel Symphony Orchestra, Rishon Lezion and the IDF Chamber Orchestra.

Dan Gottfried

Dan Gottfried studied piano at the Tel Aviv Academy of Music. In 1959 he won First Prize in the Mozart Piano Competition. In 1962 he earned an undergraduate degree in economics at the Hebrew University of Jerusalem and went on to study law. He began his career as a classical pianist in the early 1960s. In the 1970s he was one of the founders of the Israeli Jazz Workshop quartet, the group that recorded the first instrumental jazz album in Israel and fused jazz with Jewish and Arab motifs. In 1981 he founded the Jazz Faculty at the Jerusalem Academy of Music and Dance and served as head of the faculty for eight years. In 1985 he founded the Jazz Studies department of the Thelma Yellin Art High School.
In 1992 he was commissioned by the Tel Aviv Museum of Art to direct the Jazz Concert Series at the museum. In 1994 he started the Jazz Series Lectures at the Open University of Israel, a yearly series of ten lectures on the history of jazz, which is still an ongoing project. In 1998 he created the "Jazz at the Opera" concert series at the Tel Aviv Opera House, which he ran until 2004.

Beijing Symphony Orchestra

The Beijing Symphony Orchestra, founded in 1977, is a classical orchestra based in Beijing, China. One of its best-known performances was an unfolding of Chinese history and culture performed at Badaling on the Great Wall, which was transmitted internationally via satellite.

Program

- Wolfgang Amadeus Mozart - Overture to Die Entführung aus dem Serail, K.384
- Wolfgang Amadeus Mozart - Clarinet Concerto in A major, K.622
- Wolfgang Amadeus Mozart - Symphony No.40 in G minor, K.550

### Alexander Fiterstein Solo Recital in China Central Conservatory of Music

Clarinetist Alexander Fiterstein is recognized for playing that combines flawless technique and consummate musicianship with graceful phrasing and a warm, soulful tone. Considered one of today's most exceptional clarinet players, he has performed in recital and with prestigious orchestras and chamber music ensembles throughout the world. Winner of a 2009 Avery Fisher Career Grant Award, Mr. Fiterstein has been praised by The New York Times for possessing a "beautiful liquid clarity," and The Washington Post wrote, "Fiterstein treats his instrument as his own personal voice, dazzling in its spectrum of colors, agility and range. Every sound he makes is finely measured without inhibiting expressiveness."
19:00 on 28 May, 2017, Recital Hall in Music Practice Building of China Central Conservatory of Music. Piano: Rosa Liu

Program

- Carl Maria von Weber - 7 Variations on a Theme from Silvana for clarinet and piano, J.128, Op.33
- Miklos Rozsa - Sonatina for Solo Clarinet
- Ronn Yedidia - Firewall
- Leonard Bernstein - Sonata for Clarinet and Piano (1941-42)
- Alamiro Giampieri - Carnival of Venice

### Wiener Philharmoniker Summer Night Concert Schönbrunn 2017

Date: Thursday, May 25, 2017
Time: 8:30 PM
Place: Schönbrunn Palace, Palace Gardens (in front of the Neptune Fountain), 1130 Vienna
Conductor: Christoph Eschenbach
Soloist: Renée Fleming

Program

Antonín Dvořák (1841-1904)
- Carnival Concert Overture, op. 92
- "Za štíhlou gazelou" (As I merrily pursued a gazelle), Aria of Armida from "Armida", op. 115
- "Měsíčku na nebi hlubokém" (Song to the Moon) from "Rusalka", op. 114

Peter Ilyich Tchaikovsky (1840-1893)
- "The Sleeping Beauty", Ballet Suite, op. 66a
  - II Adagio. Pas d'action
  - V Valse

Sergei Rachmaninov (1873-1943)
- "Sumerki" (Twilight), op. 21, No. 3. Orchestration: Walter Mnatsakanov
- "Ne poy, krasavitsa, pri mne!" (Never Sing to Me Again, Beautiful Maiden), op. 4, No. 4. Orchestration: Leonidas Leonardi
- "Vesenniye vodi" (Spring Waters), op. 14, No. 11.
Orchestration: Walter Mnatsakanov

Engelbert Humperdinck (1854-1921)
- Prelude to "Hansel and Gretel"

John Williams (*1932)
- Hedwig's Theme from "Harry Potter"

Igor Strawinsky (1882-1971)
- "L'Oiseau de feu" (The Firebird), Ballet Suite (1919 Version)
  - V Danse infernale du Roi Kachtcheï (Infernal Dance of King Kashcheï)
  - VI Berceuse (Lullaby)
  - VII Finale

Magnet URI

magnet:?xt=urn:btih:E08910E43680E24B628A68F8ED4456F2B4A35C87&dn=Sommernachtskonzert2017ORF2

```
General
ID                          : 5 (0x5)
Complete name               : Sommernachtskonzert2017.ts
Format                      : MPEG-TS
File size                   : 5.49 GiB
Duration                    : 1h 36mn
Overall bit rate mode       : Variable
Overall bit rate            : 8 173 Kbps

Video
ID                          : 3401 (0xD49)
Menu ID                     : 5008 (0x1390)
Format                      : AVC
Format/Info                 : Advanced Video Codec
Format profile              : High@L4
Format settings, CABAC      : Yes
Format settings, ReFrames   : 4 frames
Format settings, GOP        : M=3, N=15
Codec ID                    : 27
Duration                    : 1h 36mn
Bit rate                    : 7 317 Kbps
Maximum bit rate            : 7 764 Kbps
Width                       : 1 280 pixels
Height                      : 720 pixels
Display aspect ratio        : 16:9
Frame rate                  : 50.000 fps
Standard                    : Component
Color space                 : YUV
Chroma subsampling          : 4:2:0
Bit depth                   : 8 bits
Scan type                   : Progressive
Bits/(Pixel*Frame)          : 0.159
Stream size                 : 4.92 GiB (90%)
Color range                 : Limited
Color primaries             : BT.709
Transfer characteristics    : BT.709
Matrix coefficients         : BT.709

Audio
ID                          : 3441 (0xD71)
Menu ID                     : 5008 (0x1390)
Format                      : AC-3
Format/Info                 : Audio Coding 3
Format settings, Endianness : Big
Codec ID                    : 6
Duration                    : 1h 36mn
Bit rate mode               : Constant
Bit rate                    : 448 Kbps
Channel(s)                  : 6 channels
Channel positions           : Front: L C R, Side: L R, LFE
Sampling rate               : 48.0 KHz
Frame rate                  : 31.250 fps (1536 spf)
Bit depth                   : 16 bits
Compression mode            : Lossy
Delay relative to video     : 104ms
Stream size                 : 308 MiB (5%)
Language                    : German
Language, more info         : Clean effects
Service kind                : Complete Main
```

### Machine Learning and TensorFlow Wiki

TensorFlow is an open source software library for machine learning across a range of tasks, and developed
by Google to meet their needs for systems capable of building and training neural networks to detect and decipher patterns and correlations, analogous to the learning and reasoning which humans use.

Environment Versions

- Operating System: Ubuntu 17.04 (Zesty Zapus) 64-bit Server Edition
- Python: 2.7.13

Install TensorFlow on Linux (CPU-only)

```
$ sudo apt-get install python-pip python-dev
$ pip install tensorflow
Installing collected packages: wheel, six, appdirs, pyparsing, packaging, setuptools, protobuf, funcsigs, pbr, mock, numpy, werkzeug, tensorflow
Successfully installed appdirs-1.4.3 funcsigs-1.0.2 mock-2.0.0 numpy-1.12.1 packaging-16.8 pbr-3.0.1 protobuf-3.3.0 pyparsing-2.2.0 setuptools-35.0.2 six-1.10.0 tensorflow-1.1.0 werkzeug-0.12.2 wheel-0.29.0
```

Fit-the-plane example (Python 2, TensorFlow 1.x):

```python
import tensorflow as tf
import numpy as np

# Make 100 phony data points in NumPy.
x_data = np.float32(np.random.rand(2, 100))  # Random input
y_data = np.dot([0.100, 0.200], x_data) + 0.300

# Construct a linear model.
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b

# Minimize the squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# For initializing the variables.
init = tf.initialize_all_variables()

# Launch the graph.
sess = tf.Session()
sess.run(init)

# Fit the plane.
for step in xrange(0, 201):
    sess.run(train)
    if step % 20 == 0:
        print step, sess.run(W), sess.run(b)

# Learns best fit is W: [[0.100 0.200]], b: [0.300]
```

This is a brief introduction to TensorFlow; I will keep updating this post.

### Content Security Policy Configuration Wiki

Content Security Policy (CSP) is a computer security standard introduced to prevent cross-site scripting (XSS), clickjacking and other code injection attacks resulting from the execution of malicious content in the trusted web page context.
Allow everything but only from the same origin:

default-src 'self';

Only allow scripts from the same origin:

script-src 'self';

Allow Google Analytics, Google AJAX CDN and the same origin:

script-src 'self' www.google-analytics.com ajax.googleapis.com;

Nginx Content Security Policy Configuration

Add the following config to the server section:

add_header Content-Security-Policy "default-src 'self'; script-src 'self' www.google-analytics.com ajax.googleapis.com;";

Starter Policy

This policy allows images, scripts, AJAX, and CSS from the same origin, and does not allow any other resources to load (e.g. object, frame, media, etc.). It is a good starting point for many sites.

default-src 'none'; script-src 'self'; connect-src 'self'; img-src 'self'; style-src 'self';

X-Frame-Options

The X-Frame-Options HTTP response header can be used to indicate whether or not a browser should be allowed to render a page in a <frame>, <iframe> or <object>. Sites can use this to avoid clickjacking attacks, by ensuring that their content is not embedded into other sites. The added security is only provided if the user accessing the document is using a browser supporting X-Frame-Options.

There are three possible directives for X-Frame-Options:

X-Frame-Options: DENY
X-Frame-Options: SAMEORIGIN
X-Frame-Options: ALLOW-FROM https://example.com/

To configure nginx to send the X-Frame-Options header, add this either to your http, server or location configuration:

add_header X-Frame-Options SAMEORIGIN;

If you get an error like this:

Multiple 'X-Frame-Options' headers with conflicting values ('SAMEORIGIN, DENY') encountered when loading 'https://example.com'. Falling back to 'DENY'.
Set Nginx to ignore the corresponding response header from the upstream:

fastcgi_hide_header X-Frame-Options; # FastCGI mode
proxy_hide_header X-Frame-Options; # Proxy mode

Reference: Content Security Policy (CSP) Quick Reference Guide

### Cincinnati Symphony Orchestra Principals Chamber Music Concert

The Cincinnati Symphony Orchestra (CSO) is an American orchestra based in Cincinnati, Ohio, founded in 1895. Its primary concert venue is Music Hall. In addition to its symphony concerts, the orchestra gives "pops" concerts as the Cincinnati Pops Orchestra.

19:30 on 20 March, 2017, Concert Hall of China Central Conservatory of Music

Program

Adrian Shaposhnikov - Sonata for Flute and Harp
Flute: Randolph Bowman
Harp: Gillian Benet Sella

Francis Poulenc - Sonata for Clarinet and Piano
Clarinet: Richie Hawley
Piano: Ding Yuan

Intermission

Johannes Brahms - Serenade No. 1 in D, op. 11
Violin I: Anna Reider
Violin II: Stefani Collins Matsuo
Viola: Christian Colberg
Double Bass: Owen Lee
Cello: Ilya Finkelshteyn
Flute: Randolph Bowman
Oboe: Dwight Parry
Clarinet: Benjamin Freimuth
Bassoon: Martin Garcia
Horn: Elizabeth Freimuth

### Code Review and Audit with Phabricator Wiki

Phabricator is a collection of web applications which help software companies build better software, including the Differential code review tool, the Diffusion repository browser, the Herald change monitoring tool, the Maniphest bug tracker and the Phriction wiki. Phabricator integrates with Git, Mercurial, and Subversion. It is available as free software under the Apache License, version 2. Phabricator was originally developed as an internal tool at Facebook. Phabricator's principal developer is Evan Priestley. Priestley left Facebook to continue Phabricator's development in a new company called Phacility.

Phabricator Homepage: secure.phabricator.com

This post contains basic install instructions to get Phabricator up and running. Phabricator is a LAMP (Linux, Apache, MySQL, PHP) application.
To install Phabricator, refer to the Phabricator User Documentation. There is an auto-install script on the Configuration Guide page, but I recommend manual installation, because the script uses older versions of some components, which makes them difficult to maintain. To set up a LAMP server, see Configure LAMP ( Apache, MySQL, PHP ) on RedHat Enterprise Linux 6 or Installation LAMP ( Apache+MySQL+PHP ) on Ubuntu.

Environment Versions
Operating System : Ubuntu 16.10 Yakkety Yak 64-bit Server Edition
PHP Version : 7.1.2
MySQL Version : 5.7.17 (Note that Phabricator will use about 58 databases)
Node Version : 7.7.2
Node Version Manager : 0.33.1
Virtual Host Name : codereview.xuri.me
Virtual LDAP Server : ldap.xuri.me
Virtual SMTP Server : smtp.xuri.me
Virtual Email : no-reply@xuri.me

Create the necessary users and add the phd user to sudoers:

$ sudo adduser phd --home /home/phd
$ sudo adduser phd sudo
$ sudo adduser git

And create the repo directory if Phabricator will be hosting repos:

$ sudo mkdir /var/repo && sudo chown -R phd:phd /var/repo

Install Phabricator

After the LAMP stack is installed, download Phabricator from git:

$ su phd
$ cd /home/phd
$ git clone https://github.com/phacility/phabricator.git

Installing APCu

APCu is APC stripped of opcode caching; it provides a compatibility mode, such that it can serve as a drop-in replacement for the applicable parts of APC.
$ sudo apt-get install php-apcu

Configure the MySQL settings with the following commands:

$ cd /home/phd/phabricator
$ ./bin/config set mysql.host codereview.xuri.me
$ ./bin/config set mysql.port 3306
$ ./bin/config set mysql.user username
$ ./bin/config set mysql.pass password

Configure the launch user for the Phabricator daemon processes:

$ ./bin/config set phd.user phd

Configure the Phabricator Base URI:

$ ./bin/config set phabricator.base-uri https://codereview.xuri.me/

SSHD setup:

$ ./bin/config set diffusion.ssh-port 2222
$ ./bin/config set diffusion.ssh-user git

For local-disk file storage only:

$ mkdir /home/phd/phabricator-files
$ chmod -R 755 /home/phd/phabricator-files
$ ./bin/config set storage.local-disk.path /home/phd/phabricator-files

Set to true if you want to allow public HTTP cloning:

$ ./bin/config set policy.allow-public true

Set to true if you want to allow HTTP pushes (disabled here):

$ ./bin/config set diffusion.allow-http-auth false

You most likely want prototype apps; they are very useful:

$ ./bin/config set phabricator.show-prototypes true

Recommended silliness-enabling settings:

$ ./bin/config set files.enable-imagemagick true
$ ./bin/config set remarkup.enable-embedded-youtube true
$ ./bin/config set audit.can-author-close-audit false

Configuring Outbound E-mail

Instructions for configuring Phabricator to send mail. Phabricator can send outbound email via several different providers, called "Adapters".
For example, to use sendmail:

$ sudo apt-get install sendmail imagemagick python-pygments

Configure the mail adapter:

$ ./bin/config set metamta.mail-adapter PhabricatorMailImplementationPHPMailerAdapter

PHP Mailer settings:

$ ./bin/config set phpmailer.mailer smtp
$ ./bin/config set phpmailer.smtp-host smtp.xuri.me
$ ./bin/config set phpmailer.smtp-port 465
$ ./bin/config set phpmailer.smtp-user no-reply@xuri.me
$ ./bin/config set phpmailer.smtp-password password

Set the custom e-mail subject prefix to [CodeReview] if you want:

$ ./bin/config set metamta.conpherence.subject-prefix [CodeReview]
$ ./bin/config set metamta.differential.subject-prefix [CodeReview]
$ ./bin/config set metamta.diffusion.subject-prefix [CodeReview]
$ ./bin/config set metamta.files.subject-prefix [CodeReview]
$ ./bin/config set metamta.maniphest.subject-prefix [CodeReview]
$ ./bin/config set metamta.macro.subject-prefix [CodeReview]
$ ./bin/config set metamta.paste.subject-prefix [CodeReview]
$ ./bin/config set metamta.pholio.subject-prefix [CodeReview]
$ ./bin/config set metamta.phriction.subject-prefix [CodeReview]

Note that changing settings in Phabricator requires a restart to take effect.

Configure MySQL and storage

Add these new lines to /etc/mysql/my.cnf under the [mysqld] heading:

port = 4443
max_allowed_packet = 32M
innodb_buffer_pool_size = 1600M
sql_mode = STRICT_ALL_TABLES

Restart MySQL and run the Phabricator storage upgrade:

$ sudo service mysql restart
$ ./bin/storage upgrade

Configure PHP

Adjust the following fields in /etc/php/7.1/cli/php.ini:

post_max_size = 8M
date.timezone = Etc/UTC
opcache.validate_timestamps = 0

Then restart Apache:

$ sudo service apache2 restart

Customize the homepage HTML title by editing the file src/view/page/AphrontPageView.php:

$title = $this->getTitle();
// Hacked here:
if ($title == 'Phabricator') {
    $title = 'Code Review';
}

Set the mail sender to a custom name by editing the file externals/phpmailer/class.phpmailer.php, changing $this->FromName = $name; to $this->FromName = 'CodeReview';.
Restart the phd daemons:

$ ./bin/phd restart

Set up Phabricator via the browser. Create the administrator's account and resolve setup issues.

Configure the Phabricator LDAP Login

First of all, you should test your LDAP service with the following command to see whether the service itself works:

$ ldapsearch -h ldap.xuri.me -p 389 -x -b "dc=domain,dc=com" -D "domain\yourLDAPusername" "(sAMAccountName=targetUser)" -W | less

This command searches for targetUser as yourLDAPusername with yourLDAPpassword (-W); if this works, it means you have almost all the information needed to configure the Phabricator LDAP login. Next, configure your LDAP login in Phabricator with the following data:

Hostname : ldap.xuri.me
Port : 389 (can be 3268)
Base Distinguished Name : dc=domain,dc=com
Search Attribute : (sAMAccountName=$(login))
Always Search : Yes
Anonymous Username : com\yourLDAPusername
Anonymous Password : yourLDAPpassword
Username Attributes : sAMAccountName
Real Name Attributes : displayName
LDAP Version : 3
Referrals : No
Use TLS : No
Active Directory Domain : domain

Now you can test this configuration on the command line:

$ ./bin/auth ldap

Import LDAP Users

Applications → People → USER ADMINISTRATION → Import from LDAP

For example, if you want to filter people in a group:

memberOf:: bWVtYmVyb2Y9Q049R3JvdXAsQ049VXNlcnMsREM9ZG9tYWluLERDPWNvbQ==

Use the LDAP query:

memberof=CN=Group,CN=Users,DC=domain,DC=com

Notifications Server Setup and Configuration

By default, Phabricator delivers information about events (like users creating tasks or commenting on code reviews) through email and in-application notifications.

Installing Node and modules. The notification server uses Node.js, so you'll need to install it first.
Install Node Version Manager (nvm) using cURL:

$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.1/install.sh | bash

To download, compile, and install the latest release of node:

$ nvm install node
$ nvm use 7.7.2

You will also need to install the ws module for Node. This needs to be installed into the notification server directory:

$ cd ~/phabricator/support/aphlict/server/
$ npm install ws

Once Node.js and the ws module are installed, you're ready to start the Aphlict server:

$ ./bin/aphlict start

Configuring Phabricator

After starting the server, configure Phabricator to connect to it by adjusting notification.servers. This configuration option should have a list of servers that Phabricator should interact with. Normally, you'll list one client server and one admin server, like this (/home/phd/phabricator/conf/aphlict/aphlict.default.json):

{
  "servers": [
    {
      "type": "client",
      "port": 22280,
      "listen": "0.0.0.0",
      "ssl.key": null,
      "ssl.cert": null,
      "ssl.chain": null
    },
    {
      "type": "admin",
      "port": 22281,
      "listen": "127.0.0.1",
      "ssl.key": null,
      "ssl.cert": null,
      "ssl.chain": null
    }
  ],
  "logs": [
    {
      "path": "/var/log/aphlict.log"
    }
  ],
  "pidfile": "/var/tmp/aphlict/pid/aphlict.pid"
}

Provide Chinese Locales for Phabricator

Drop the code phabricator-zh_CN into phabricator/src/extensions/, select Chinese (zh_CN) at Settings → Personal Account → Account Settings → Translation, and configure the global user default preference at Settings → All Settings → Global Default Settings as administrator.

Passphrase

Set credentials in Application → Passphrase → Create Credential.

Project

Organize users and objects with projects. Create a project first and activate it, then add the related users to this project.

Repository Hosting

Phabricator can host repositories and provide authenticated read and write access to them over HTTP and SSH. Create a repository in Diffusion.
Herald

Herald allows you to write rules which run automatically when objects (like tasks or commits) are created or updated. For instance, you might want to get notified every time someone sends out a revision that affects some file you're interested in, even if they didn't add you as a reviewer. Create a Herald rule in Application → Herald → Create Herald Rule.

Legalpad

Use Legalpad to track agreements and signatures on legal documents. Legalpad is a simple application for tracking signatures on legal agreements. You can add legal documents, users can sign them, and you can keep track of who has signed what. Create a document for signing in Application → Legalpad → Create → Document.

Phabricator supports two similar but separate code review workflows: "review" and "audit". Review occurs in Differential, before changes are published. Audit occurs in Diffusion, after changes are published. For the differences between the "review" and "audit" workflows, see User Guide: Review vs Audit.

Review

On the client, install Arcanist and libphutil. Arcanist is a command-line interface to Phabricator, and libphutil is a shared library Arcanist depends upon.

$ git clone https://github.com/phacility/libphutil.git
$ git clone https://github.com/phacility/arcanist.git

Add the arcanist command:

$ export PATH="$PATH:/home/phd/arcanist/bin/"

Install the Arcanist certificate for Phabricator authentication:

$ arc install-certificate

To configure a new project, edit the .arcconfig file under the project root folder:

{
  "project_id" : "name",
  "conduit_uri" : "http://codereview.xuri.me/"
}

Submit a review in your project:

$ git commit -a
$ arc diff

When your revision has been accepted, you can usually push it like this in Git (merges <branch> into master and pushes):

$ arc land <branch>

arc land makes some assumptions about your workflow which might not be true. Consult the documentation before you use it. You should also look at arc amend, which may fit your workflow better.
Audit

Create a Herald rule to trigger the audit.

Issues

If you get Call to undefined function utf8_decode() when creating a question in Ponder, replace strlen(utf8_decode($string)); with count(phutil_utf8v($string)); in the file ~/libphutil/src/utils/utf8.php.

Other Custom Settings

Enable show filetree

Automatically enable a sidebar showing affected files. Enable Show Filetree in Settings → Global Default Settings → Diff Preferences. When this option is enabled, press F to show or hide the sidebar. To auto-show the sidebar, hack the code in ~/phabricator/webroot/rsrc/js/core/behavior-phabricator-nav.js, replacing .setData({ key : 'nav-collapsed', value : (collapsed ? 1 : 0) }) with .setData({ key : 'nav-collapsed', value : (collapsed ? 1 : 1) }).

Custom logo and header color

Set custom values for ui.header-color, ui.logo and ui.footer-items in Application → Config → User Interface → User Interface Configuration.

Query a git repository for ref information at a specific commit

Sometimes we need to query the audit status by a given commit ID. Phabricator provides a developer API method in Application → Conduit → diffusion.refsquery. But I always got an ERR-CONDUIT-CORE error code and the following info:

[You Shall Not Pass: Restricted Repository] (Can View) You do not have permission to view this object. // Members of the project "Restricted Project" can take this action.

Since I did not find a better solution, I created a custom script to handle this:

<?php
/**
 * Licensed under The MIT License
 * Redistributions of files must retain the above copyright notice.
 *
 * @uses Phabricator https://www.phacility.com
 * @author Ri Xu https://xuri.me
 * @copyright Copyright (c) 2013 - 2017 Ri Xu Online All rights reserved.
 * @link https://xuri.me/2017/03/14/code-review-and-audit-with-phabricator.html
 * @since 14 Mar, 2017
 * @license Licensed under The MIT License http://www.opensource.org/licenses/mit-license.php
 * @version 0.1
 *
 * This script provides a method to query the audit status by a given commit ID.
 * The PDO extension is required. Place this file in the Phabricator
 * webroot/rsrc directory. Request a URL like this:
 * http://codereview.xuri.me/rsrc/audit.status.php?repo=name&commit=commitid&token=token
 */

// Get the browser request.
$repo = $_GET["repo"];
$cmit = $_GET["commit"];
$token = $_GET["token"];
$error = array(
    'result' => [],
    'error_code' => 500,
    'error_info' => null
);
// Token validation.
if ($token != "token") {
    $error['error_info'] = "Token is invalid";
    echo json_encode($error);
    return;
}
// Parameter validation.
if (empty($repo) || empty($cmit)) {
    $error['error_info'] = "repo and commit field can't be empty";
    echo json_encode($error);
    return;
}
// Database connection using PDO.
$username = 'username';
$passwd = 'password';
try {
    $dbh = new PDO('mysql:host=localhost;port=3306;dbname=phabricator_repository;', $username, $passwd);
} catch (PDOException $e) {
    $error['error_info'] = 'Database connection failed: ' . $e->getMessage();
    echo json_encode($error);
    return;
}
$stmt = $dbh->prepare("SELECT phid FROM repository_commit WHERE repositoryID = (SELECT id FROM repository WHERE name = :name) AND commitIdentifier = :cmit;");
$stmt->bindValue(':name', $repo, PDO::PARAM_STR);
$stmt->bindValue(':cmit', $cmit, PDO::PARAM_STR);
$stmt->execute();
$phid = $stmt->fetchAll();
if (empty($phid)) {
    $error['error_info'] = 'Failed to get audit status';
    echo json_encode($error);
    return;
}
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://codereview.xuri.me/api/audit.query");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, "api.token=api-token&commitPHIDs[0]=".$phid[0]["phid"]);
curl_setopt($ch, CURLOPT_POST, 1);
$headers = array();
$headers[] = "Content-Type: application/x-www-form-urlencoded";
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
$result = curl_exec($ch);
if (curl_errno($ch)) {
    curl_close($ch);
    $error['error_info'] = 'Error:' .
curl_error($ch);
    echo json_encode($error);
    return;
}
curl_close($ch);
echo $result;
return;

The API response looks like this:

{
  "result": [
    {
      "id": "200",
      "commitPHID": "PHID-CMIT-commit",
      "auditorPHID": "PHID-PROJ-project",
      "reasons": [
        "H3 Triggered Audit"
      ],
      "status": "accepted"
    }
  ],
  "error_code": null,
  "error_info": null
}

Import SVN repositories by sub-path

If you are using an SVN repository (for example svn.xuri.me) without root directory permissions granted, you can only import a subdirectory of the Subversion repository (https://svn.xuri.me/svn/subdir1/subdir2). Phabricator doesn't support this feature currently, but there are ways to circumvent this issue. Set https://svn.xuri.me/svn/subdir1/ as the repo's URI and set subdir2/ in the Subversion → Import Only config. Edit the function newFormattedCommand in the file src/applications/diffusion/protocol/DiffusionSubversionCommandEngine.php before the return statement:

foreach ($argv as $key => $value) {
    if (substr_count($value, '/subdir1/subdir2')) {
        $argv[$key] = str_replace('/subdir1/subdir2/subdir1/subdir2', '/subdir1/subdir2', $value);
    }
    if (substr_count($value, '/subdir1')) {
        $argv[$key] = str_replace('/subdir1/subdir1', '/subdir1', $value);
    }
}

Or use regular expression matching:

$repo = $this->getRepository();
$baseURI = $repo->getSubversionBaseURI();
$pathURI = $repo->getSubversionPathURI();
preg_match('/^.*svn/', $pathURI, $matches, PREG_OFFSET_CAPTURE);
$base = str_replace($matches[0][0], "", $pathURI);
$sub = str_replace("/@", "", str_replace($pathURI, "", $baseURI));
foreach ($argv as $key => $value) {
    if (substr_count($value, $base.$sub)) {
        $argv[$key] = str_replace($base.$sub.$base.$sub, $base.$sub, $value);
    }
    if (substr_count($value, $base)) {
        $argv[$key] = str_replace($base.$base, $base, $value);
    }
}

This hack only supports subdirectories two levels deep; to import deeper subdirectories, we need permission to access the root directory.
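For reference, the Conduit call that the PHP script performs with cURL can also be sketched in Python with only the standard library. The host, API token and commit PHID below are the same placeholder values as in the script, so this only illustrates the shape of an audit.query request rather than a query against a real install:

```python
# Sketch: building the audit.query Conduit request that the PHP script
# above issues with cURL. Host, token and PHID values are placeholders.
import urllib.parse
import urllib.request

def build_conduit_request(host, method, params):
    """Return a urllib Request for a Phabricator Conduit API call."""
    url = "http://%s/api/%s" % (host, method)
    # Conduit accepts ordinary form-encoded POST parameters.
    body = urllib.parse.urlencode(params).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

req = build_conduit_request(
    "codereview.xuri.me",
    "audit.query",
    {"api.token": "api-token", "commitPHIDs[0]": "PHID-CMIT-commit"},
)
print(req.full_url)
print(req.data)
```

Against a live install, urllib.request.urlopen(req) would send the request and should return a JSON body with the result/error_code/error_info structure shown above.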
Create SVN Repo

$ mkdir /var/repo && svnadmin create /var/repo/svn

Configure access permissions in the file /var/repo/svn/conf/svnserve.conf:

[general]
anon-access = none
auth-access = write
password-db = passwd
authz-db = authz

Configure users' passwords in the file /var/repo/svn/conf/passwd:

[users]
harry = harryssecret
sally = sallyssecret
lucy = lucyssecret
michael = michaelssecret

Configure authorization in the file /var/repo/svn/conf/authz:

[groups]
admin = harry,michael
dev = sally,lucy

[/repo1]
@admin = rw
@dev = r

[/path/to/repo2]
@admin = rw
@dev =

Start the SVN server:

$ svnserve -d -r /var/repo/svn --listen-host svn.xuri.me

Get SVN repo info:

$ svn info --username harry svn://svn.xuri.me/

Commit and push changes to the repo:

$ mkdir repo1
$ svn add repo1
$ svn commit -m "create repo1" repo1

Add encoding support for repositories

For example, to add GBK encoding support for a repository, choose "Basics" on the "Manage Repository" page and set the new text encoding by clicking "Edit Text Encoding".

Conclusion

Phabricator is not only a place for repository hosting or issue tracking. It represents a workflow that we agree with. Phabricator is suitable for a complete project because it contains many different units for project management. In the way of dealing with a project, Phabricator is different from GitHub Enterprise and the Atlassian series of tools. GitHub is built around the fork-and-merge pattern, where each project and developer has a high degree of freedom. Phabricator's rebase mode is more conducive to management and suitable for agile software development.

### Setup Cassandra Multi-Node Cluster Wiki

The Apache Cassandra database is the right choice when you need scalability and high availability without compromising performance.
Linear scalability and proven fault-tolerance on commodity hardware or cloud infrastructure make it the perfect platform for mission-critical data. Cassandra's support for replicating across multiple datacenters is best-in-class, providing lower latency for your users and the peace of mind of knowing that you can survive regional outages. DataStax OpsCenter is the web-based visual management and monitoring solution for DataStax Enterprise (DSE). In this post I will describe how to set up Cassandra and OpsCenter on Ubuntu server.

OpsCenter 5.2.x only supports up to Cassandra 2.1. OpsCenter 6.0 only supports DataStax Enterprise. So there is no way to use OpsCenter with Apache Cassandra 2.2+. More info in the DataStax product compatibility table.

Environment Versions
Ubuntu 16.10 Yakkety Yak 64-bit Server Edition
Node1 - 172.16.136.147 Cassandra 2.1.5
Node2 - 172.16.136.148 Cassandra 2.1.5
Node3 - 172.16.136.149 Cassandra 2.1.5
Node4 - 172.16.136.150 DataStax OpsCenter 5.2.5 (2 GB of RAM available to OpsCenter)

Install Java

Apache Cassandra requires Java to be installed, so let's begin by installing Java. Update the source list:

$ sudo apt-get update

The OpenJDK project is the default version of Java that is provided from a supported Ubuntu repository.

$ sudo apt-get install openjdk-8-jdk

These commands will update the package information on your server and then install Java. After executing these commands, execute the following command to verify that Java has been installed:

$ java -version

If Java has been installed, this should display the version details as illustrated in the following output:

openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-8u121-b13-0ubuntu1.16.10.2-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)

If you already have the Java JDK installed on your system, then you need not run the above command.

Configuring SSH

The Apache Cassandra control scripts rely on SSH to perform cluster-wide operations.
For example, there is a script for stopping and starting all the daemons in the cluster. To work seamlessly, SSH needs to be set up to allow password-less login for the Apache Cassandra user from machines in the cluster. The simplest way to achieve this is to generate a public/private key pair that will be shared across the cluster.

Apache Cassandra requires SSH access to manage its nodes, i.e. remote machines plus your local machine. For our multi-node setup of Apache Cassandra, we therefore need to configure SSH access to localhost for the current user. We have to generate an SSH key for the current user:

$ ssh-keygen -t rsa -P ''

-P '' here indicates an empty password.

You have to enable SSH access to your local machine with this newly created key, which is done by the following command:

$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

The final step is to test the SSH setup by connecting to the local machine with the current user. This step is also needed to save your local machine's host key fingerprint to the current user's known_hosts file.

$ ssh localhost

If the SSH connection fails, we can try the following (optional):

Enable debugging with ssh -vvv localhost and investigate the error in detail.
Check the SSH server configuration in /etc/ssh/sshd_config.
If you made any changes to the SSH server configuration file, you can force a configuration reload with sudo /etc/init.d/ssh reload.
Installation

Add the DataStax community repository (which carries the Cassandra packages) to /etc/apt/sources.list.d/datastax.community.list on every node:

$ echo "deb http://debian.datastax.com/community stable main" | sudo tee -a /etc/apt/sources.list.d/datastax.community.list

Add the repository keys on every node:

$ curl -L https://debian.datastax.com/debian/repo_key | sudo apt-key add -

Update the repositories and install python support:

$ sudo apt-get update && sudo apt install python-pip -y
$ wget http://launchpadlibrarian.net/109052632/python-support_1.0.15_all.deb
$ sudo dpkg -i python-support_1.0.15_all.deb

Install Cassandra:

$ sudo apt-get install dsc21=2.1.5-1 cassandra=2.1.5 cassandra-tools=2.1.5 -y

Configuring Cassandra for clustering

Stop Cassandra:

$ sudo service cassandra stop

Edit Cassandra's configuration file /etc/cassandra/cassandra.yaml on every node:

- seeds: "172.16.136.147,172.16.136.148,172.16.136.149"

and set listen_address and rpc_address. Edit /etc/cassandra/cassandra-env.sh on every node, uncomment the following line and replace the IP address with each node's public address:

JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=172.16.136.147"

Set the LOCAL_JMX parameter to no (reference: Nodetool or JMX connections failing on remote nodes):

LOCAL_JMX=no

Set the com.sun.management.jmxremote.authenticate parameter to false:

JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"

Start Cassandra:

$ sudo service cassandra start

Check the nodes:

$ sudo nodetool status

The output should be similar to the following:

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load        Tokens  Owns (effective)  Host ID                               Rack
UN  172.16.136.147  157.74 KiB  256     65.6%             bd21c728-0b6e-4574-ba29-7a712f6ca01d  rack1
UN  172.16.136.148  236.27 KiB  256     65.0%             e39141fe-41ba-4a34-a413-43cba395495f  rack1
UN  172.16.136.149  152.05 KiB  256     69.5%             22dde31d-22a5-47b3-af3d-9be6d86ead21  rack1

Testing the Cassandra Cluster

To test our
Cassandra cluster, we will create a keyspace, a table and a row on one of the nodes, and then we will expect it to appear on the other nodes. Enter Cassandra's command line client. Install the latest python driver:

$ sudo -H pip install cassandra-driver

Export a user variable to override the bundled driver:

$ export CQLSH_NO_BUNDLED=true

Create a keyspace, a table and a row on one of the nodes:

$ cqlsh
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.1.5 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh> CREATE KEYSPACE test WITH replication = {
   ...     'class': 'SimpleStrategy',
   ...     'replication_factor': '1'
   ... };
cqlsh> USE test;
cqlsh:test> CREATE TABLE users (
        ...     name text,
        ...     PRIMARY KEY (name)
        ... );
cqlsh:test> INSERT INTO users (name) VALUES ('xuri');
cqlsh:test> SELECT * FROM users;

 name
------
 xuri

(1 rows)

Check the other nodes:

$ cqlsh 172.16.136.147
Connected to Test Cluster at 172.16.136.147:9042.
[cqlsh 5.0.1 | Cassandra 2.1.5 | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cqlsh> USE test;
cqlsh:test> SELECT * FROM users;

 name
------
 xuri

(1 rows)

The user exists, so the cluster is working.

Install OpsCenter

Because installing OpsCenter via apt-get installs java-9-oracle automatically, which affects the operation of Cassandra, deploy OpsCenter to Node4 separately.

$ sudo apt-get install opscenter=5.2.5 -y

If you get the error keytool error: java.lang.Exception: Key pair not generated, alias <agent_key> already exists, run the following command to remove this key:

$ keytool -delete -alias agent_key -keystore /var/lib/opscenter/ssl/agentKeyStore

After installation, visit the dashboard at http://172.16.136.150:8888, select Manage Existing Cluster, then Add Cluster and Install Agents.

### Profiling PHP Applications with Xdebug

Introduction

Xdebug is a PHP extension for powerful debugging. It supports stack and function traces, profiling information and memory allocation and script execution analysis.
Install Xdebug on Windows

Download php_xdebug.dll from the official website. Edit php.ini like this:

[xdebug]
xdebug.profiler_enable = 1
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_name = cachegrind.out.%p
xdebug.profiler_output_dir = "..\ext\xdebug_profilers"
xdebug.trace_output_dir = "..\ext\xdebug_traces"
xdebug.auto_trace = 0
xdebug.collect_params = 4
xdebug.collect_return = 1
xdebug.show_mem_delta = 1
zend_extension = "..\ext\php_xdebug.dll"

We can get more configuration information from the official documentation. After editing the config file, restart the web server.

<?php
register_shutdown_function('xdebug_stop_trace');
xdebug_start_trace("..\ext\xdebug_traces");
// Code to trace ...
xdebug_stop_trace();

WinCacheGrind is a viewer for cachegrind.out files generated by Xdebug, and the QCacheGrind (KCacheGrind) Windows build is a GUI to parse and view Xdebug trace files. If you get an MSVCP100.dll Missing error, download and install the Microsoft Visual C++ 2010 Redistributable Package (x86, x64).

Related Articles

Debugging with Xdebug and PhpStorm on OS X
Profiling PHP Applications with XHProf

### Wiener Philharmoniker Vienna New Year's Concert 2017

The 2017 New Year's Concert takes place on January 1, 2017, under the baton of Gustavo Dudamel in the Vienna Musikverein. Gustavo Dudamel, who was born in Barquisimeto, Venezuela in 1981, becomes the youngest conductor in the 75-year history of the New Year's Concert. In 2007, Gustavo Dudamel conducted the Vienna Philharmonic for the first time at the Lucerne Festival and made his debut at the Vienna Philharmonic Subscription Concerts in 2011. Dudamel conducted the Summer Night Concert Schönbrunn in 2012 and led the orchestra on its traditional Vienna Philharmonic Week in Japan in 2014. The 2017 New Year's Concert will be broadcast in over 90 countries and followed by up to 50 million television viewers around the world.
Gustavo Dudamel

As an internationally renowned symphonic and operatic conductor, Gustavo Dudamel is motivated by a profound belief in music's power to unite and inspire. Currently serving as Music & Artistic Director of the Los Angeles Philharmonic and Music Director of the Simón Bolívar Symphony Orchestra of Venezuela, the impact of his leadership extends from the greatest concert stages to classrooms, cinemas and innovative digital platforms around the world. Dudamel also appears as guest conductor with some of the world's most famous musical institutions. In 2017, he becomes the youngest-ever conductor to lead the Vienna Philharmonic's famous New Year's Day Concert.

Program

Franz Lehár
Nechledil Marsch aus der Operette Wiener Frauen
Émile Waldteufel
Les Patineurs. Walzer, op. 183
Johann Strauss, Jr.
S' gibt nur a Kaiserstadt, s' gibt nur a Wien. Polka, op. 291
Josef Strauss
Winterlust. Polka (schnell), op. 121
Johann Strauss, Jr.
Mephistos Höllenrufe. Walzer, op. 101
So ängstlich sind wir nicht! Schnell-Polka, op. 413
-- Pause --
Franz von Suppé
Ouvertüre zu Pique Dame
Carl Michael Ziehrer
Hereinspaziert! Walzer aus der Operette „Der Schätzmeister“, op. 518
Otto Nicolai
Die lustigen Weiber von Windsor (The Merry Wives of Windsor), Moon Choir
Johann Strauss, Jr.
Pepita-Polka, op. 138
Rotunde-Quadrille, op. 360
Die Extravaganten. Walzer, op. 205
Johann Strauss, sen.
Indianer-Galopp, op. 111
Josef Strauss
Die Nasswalderin. Polka mazur, op. 267
Johann Strauss, Jr.
Auf zum Tanze! Polka schnell, op. 436
Tausend und eine Nacht. Walzer nach Motiven der Operette "Indigo"
Tik-Tak. Polka schnell, op. 365

Vienna Philharmonic Orchestra - New Year's Concert 2017 with Gustavo Dudamel - MPEG4HD 422+DUAL DolbyE - UPLINK [SRSL] (35.46GB)

*Spc Recording Studio LABORATORY*
*PRESENTS*
*******************************************************************************
Vienna Philharmonic Orchestra New Year's Concert 2017 with Gustavo Dudamel LIVE !
Satellite UPLINK FULL HD 1.1.2017 1080i 50 MPEG-4 HD 4:2:2 32,86Mbit/s ******************************************************************************* ------------------------------------------------------------------------------- General Information ------------------------------------------------------------------------------- Type.................: Live Event (Live Concert) ------------------------------------------------------------------------------- Release Notes ------------------------------------------------------------------------------- General Satellite Information: Channel ID : NYC 2 Fi01 WIEN Satellite Sign : Eutelsat 7B (E7B, 3D, Eutelsat 3D) Satellite Position : 7°E Satellite Position 1.1.2017 : 7.04°E Satellite Name : Eutelsat 7B Norad : 39163 Cospar number : 2012-022A Operator : Eutelsat Communications S.A. Launch date : 14-May-2013 Launch site : Baikonur Cosmodrome Launch vehicle : Proton M Launch mass (kg) : 5470 Dry mass (kg) : N/A Manufacturer : Thales Alenia Space Model (bus) : Spacebus-4000C3 Orbit : Geostationary orbit Expected lifetime : 15 Years International Uplink: Frequency : 10964.164 MHz SymbolRate : 14399 Polarity : Vertical Video PID : 512 (MPEG4 422) Audio 1 PID : 4112 (MPEG1L2) Audio 2 PID : 4128 (AES-3) DOLBY-E Audio 3 PID : 4144 (MPEG1L2) Audio 4 PID : 4160 (AES-3) DOLBY-E BISS Decryption Device : nVIDIA PASCAL GP104-200 GPU +BISS-E KEY : 0A485AACED2EAEC9 ORFAT UPLINK: Frequency : 10964.164 MHz SymbolRate : 14399 Polarity : Horizontal Video PID : 512 (MPEG4 422) Audio 1 PID : 4112 (MPEG1L2) Audio 2 PID : 4128 (AES-3) DOLBY-E Audio 3 PID : 4144 (MPEG1L2) Audio 4 PID : 4160 (AES-3) DOLBY-E BISS Decryption Device : nVIDIA PASCAL GP104-200 GPU +BISS-E KEY : 0A485AACED2EAEC9 DVB-S2 : YES FEC : 4/3 Modulation : AUTO System Hardware : Spc TriChea™ Modular System Dish : 2 meters Dish Calibration : Manual, Exact Low Noise Block : Inverto BLACK ULTRA Quad High Gain BISS Decryption Device : nVIDIA PASCAL GP104-200 GPU +CUDA Version : 6.1 
+Multi Processors : 15 +Cores Available : 1920 +Clock : 2078MHz +Grid Size : 2147483647 x 65535 x 65535 +Wrap Size : 32 +CoProcessor : Intel(R) Core(R)i7 CPU 3930K +Clock : 4625MHz +Units : 6 + 6HT URL : http://www.nvidia.com/object/tesla-supercomputing-solutions.html Capture Device : TeVii s472 DVB-S2 PCIe, x16 Capture OS : Windows Server 2016 Datacenter Capture Program : Spc TriChea™ Packet Capture 7.0 Format : MPEG-2 4:2:2 HD (1080i50) Format/Info : Moving Picture Experts Group Stream Type : Program Sector Size : 2048 Packets Per Block : 1 Video Buffer Size : 232 KB bit rate total : 32.86 Mbps Video Format : MPEG-4 4:2:2 Format profile : High Profile 422 Codec ID : MPEG-4 Codec ID/Hint : MPEG-4 Bit rate : 26,900 Mbps Width : 1920 pixels Height : 1080 pixels Display aspect ratio : 16/9 Mode : Interlaced Frame rate : 25 fps Resolution : 24 bits DC Coefficient : 8 bit I-Frames : 15 B-Frames : 2 Color Primaries : ITU-R Rec. 624-4 System B,G Transfer : ITU-R Rec. 624-4 System B,G Matrix Coefficients : ITU-R Rec. 624-4 System B,G Colorimetry : 4:2:2 Audio 1: Format : MPEG1 Layer2 Format version : MP2 Format profile : MPG Bit rate mode : Constant Bit rate : 384 Kbps Minimum bit rate : 384 Kbps bit rate per channel : 192 kbps Channel(s) : 2 (STEREO) Sampling rate : 48.0 KHz Resolution : 16 bits Audio 2: Format : Dolby E Format version : AES-3 Format profile : AES-3 PCM Audio 8.0 Bit rate mode : Constant Bit rate : 2448 Kbps Minimum bit rate : 2448 Kbps bit rate per channel (Uncompressed) : 1125 kbps Total bitrate (Uncompressed) : 9000 kbps Channel(s) : 8 Sampling rate : 48.0 KHz Resolution : 24 bits Note : Dolby-E and Linear Audio always use 48kHz Rate ! Note : Dolby-E must be synchronized to the video signal ! 
Mix Input : 8.0 DolbyE Audio 3: Format : MPEG1 Layer2 Format version : MP2 Format profile : MPG Bit rate mode : Constant Bit rate : 384 Kbps Minimum bit rate : 384 Kbps bit rate per channel : 192 kbps Channel(s) : 2 (STEREO) Sampling rate : 48.0 KHz Resolution : 16 bits Audio 4: Format : Dolby E Format version : AES-3 Format profile : AES-3 PCM Audio 8.0 Bit rate mode : Constant Bit rate : 2448 Kbps Minimum bit rate : 2448 Kbps bit rate per channel (Uncompressed) : 1125 kbps Total bitrate (Uncompressed) : 9000 kbps Channel(s) : 8 Sampling rate : 48.0 KHz Resolution : 24 bits Note : Dolby-E and Linear Audio always use 48kHz Rate ! Note : Dolby-E must be synchronized to the video signal ! Mix Input : 8.0 DolbyE ### Richard Clayderman Great Hall of the People Christmas Piano Recital Concert 2016 Richard Clayderman's 2016 Christmas Piano Recital Concert was held at the Great Hall of the People today. Richard Clayderman is a French pianist who has released numerous albums including the compositions of Paul de Senneville and Olivier Toussaint, instrumental renditions of popular music, rearrangements of movie soundtracks, ethnic music, and easy-listening arrangements of popular works of classical music. The concert started at 19:30. There were many classic tracks, such as A comme amour, Ballade pour Adeline, Mariage d'amour, Souvenirs d'Enfance, Love Story, Love Theme from "Romeo & Juliet", Give a Little Time to Your Love, Lyphard Melodie and Chariots of Fire; there was also the Chinese piece Butterfly Lovers. Movie soundtracks included Star Wars, My Heart Will Go On, and Shape of My Heart. Cong Zhao, the principal musician of the China National Traditional Orchestra, played House of Flying Daggers. ### Global Internet Architecture Conference of China 2016 Today and yesterday I attended the Global Internet Architecture Conference of China 2016 (GIAC) at Crowne Plaza Beijing Chaoyang U-Town. GIAC is organised by "ARCHNOTES", a community focused on high-availability architecture.
There were 40 presentations at the conference, and I was able to attend 16 of them across the two days. I'd like to talk about some of the most impressive ones. Xiaolai Li: Study and Growth - Who the Future Belongs to (at the main venue). Xiaolai Li is a VC, the person holding the most Bitcoin in China, and also one of the earliest researchers of blockchain technology in China. "Make Time with Friends" is his book. Yuming Yang: Big Data In Hulu (first day at the big data and algorithm sub-venue). Zhifeng Yang: The Evolution of OceanBase Architecture in Recent Years (first day afternoon at the architecture sub-venue). Zhifeng Yang works for Alibaba Group. OceanBase is a distributed, shared-nothing relational database created by Alibaba Group. Hao Chen: Programming Paradigms in Programming (second day at the programming language and architecture sub-venue). Hao Chen is a software engineer, formerly at Amazon, Alibaba Group, and Thomson Reuters Corporation. In his presentation, he covered topics ranging from C language generics to functional programming. Dongxu Huang: Real World Rust - Why and How We Use Rust in TiKV (second day at the programming language and architecture sub-venue). Dongxu Huang is the CTO of PingCAP Inc. He gave an introduction to the Rust language and how they use Rust in TiKV. TiKV is a distributed transactional key-value database powered by Rust and Raft. Jingyu Wang: Ali Distributed Database (second day afternoon at the database and extensible storage sub-venue). Jingyu Wang is currently at Alibaba Group. He talked about the DRDS architecture evolution process. I also listened to some other presentations, but I found them just so-so. Since this two-day conference featured only a few talks from overseas companies, I think the "Global" in its name may not be entirely appropriate. ### Testing and Benchmark with HPE LoadRunner Wiki HPE LoadRunner is a software testing tool from Hewlett Packard Enterprise.
It is used to test applications, measuring system behaviour and performance under load. HPE acquired LoadRunner as part of its acquisition of Mercury Interactive in November 2006. HPE LoadRunner can simulate thousands of users concurrently using application software, recording and later analysing the performance of key components of the application. This article documents the process of installing LoadRunner and creating a simple benchmark with it. Official Website www.hp.com/go/loadrunner Testing Environment Operating System : Windows Server 2016 Datacenter HPE LoadRunner Version : 12.53 Community Edition Installation LoadRunner requires the Microsoft Visual C++ Redistributable Package and .NET Framework v3.5 SP1. Let's start LoadRunner Full Setup. Accept the license agreement and start the installation. We can skip the HP Authentication Settings step. Testing and Benchmark Create a testing script with HP Virtual User Generator; for example, create an Action.c file via File → New script solution → Webservice - HTTP/HTML to benchmark an HTTP web server listening on 127.0.0.1:3000.

Action()
{
    web_url("WebTours",
        "URL=http://127.0.0.1:3000/public",
        "Resource=0",
        "RecContentType=text/html",
        "Referer=",
        "Snapshot=t1.inf",
        "Mode=HTML",
        LAST);
    lr_rendezvous("collection_point");
    return 0;
}

Save the testing project at C:\Users\Administrator\Documents\VuGen\Scripts\WebHttpHtml1. Use HP LoadRunner Controller to open the C:\Program Files (x86)\HP\LoadRunner\scenario\Scenario.lrs file, and in the Script Path column choose the WebHttpHtml1 project just created. Adjust parameters such as Initialize, Start Vusers, Duration, and Stop Vusers in the Scenario Schedule panel's Global Schedule section. After that, click Start Scenario to run the benchmark and testing. After the scenario finishes, click Analyze Results from the Results menu to open HP LoadRunner Analysis. We can get the analysis details there.
### Generators in PHP The PHP official documentation says: Generators provide an easy way to implement simple iterators without the overhead or complexity of implementing a class that implements the Iterator interface. Before understanding Generators, we should understand object iteration. We usually use foreach to traverse an array; to traverse an object, the object must implement the Iterator interface, which declares 5 methods:

Iterator extends Traversable {
    /* Methods */
    abstract public mixed current ( void )    // Returns the element at the current position
    abstract public scalar key ( void )       // Returns the key of the current element
    abstract public void next ( void )        // Moves forward to the next element
    abstract public void rewind ( void )      // Rewinds back to the first element
    abstract public boolean valid ( void )    // Checks whether the current position is valid
}

These 5 methods are simple and explicit. During a foreach traversal they are invoked implicitly: the next() method controls the movement between elements, and current() accesses the element at the current position. The Iterator interface extends the Traversable interface. Traversable is an empty interface that acts as a flag: any class implementing Iterator also implements Traversable, so we can usually use the following code to determine whether a variable can be traversed by foreach:

<?php
if (!is_array($items) && !$items instanceof Traversable) {
    // Throw exception here
}

Generator Object A Generator in PHP is produced by a function and can be traversed with foreach, so we can infer that Generator is a class that implements the Iterator interface.
Let's look at a classic example of a Generator:

function xrange($start, $end, $step = 1)
{
    for ($i = $start; $i <= $end; $i += $step) {
        yield $i;
    }
}

foreach (xrange(1, 1000000) as $num) {
    echo $num, "\n";
}

The xrange() function shown above provides the same functionality as the built-in range() function. The only difference is that range() will return an array with one million numbers in the above case, whereas xrange() returns an iterator that will emit these numbers but never actually computes an array with all of them, because xrange() returns a Generator object. Sticking to the above example, if you call xrange(1, 1000000), no code in the xrange() function is actually run. Instead, PHP just returns an instance of the Generator class, which implements the Iterator interface:

$range = xrange(1, 1000000);
var_dump($range); // object(Generator)#1
var_dump($range instanceof Iterator); // bool(true)

The second var_dump outputs object(Generator)#1, meaning that the variable $range is a Generator object returned by the function xrange. This object can be traversed by foreach, so it must implement the Iterator interface. Let's look at the class synopsis of Generator:

Generator implements Iterator {
    /* Methods */
    public mixed current ( void )
    public mixed getReturn ( void )
    public mixed key ( void )
    public void next ( void )
    public void rewind ( void )
    public mixed send ( mixed $value )
    public mixed throw ( Exception $exception )
    public bool valid ( void )
    public void __wakeup ( void )
}

It implements the 5 methods of Iterator and provides 4 new ones: __wakeup is a magic method for serialization, which Generator implements to prevent serialization; getReturn (PHP 7) fetches the value of a generator's return statement; the other two are throw and send. We'll talk about the send method later. Now we know a Generator object can be traversed because it implements the Iterator interface, but how is it created?
yield The yield keyword in PHP can only be used inside functions, and a function that uses the yield keyword returns a Generator object; we call such a function a generator function. Here is an example:

<?php
function gen()
{
    yield 1;
}

$g = gen();
echo $g->valid();   // 1
echo $g->current(); // 1
echo $g->next();
echo $g->valid();   //
echo $g->current(); //

Calling the gen function produces a Generator object assigned to the variable $g; because the Generator object implements the Iterator interface, you can directly use the Iterator interface methods. Calling the valid method returns 1, indicating that the object is currently in an iterable state. Calling the current method also outputs 1, which is the value returned by yield: the element at the current iteration position, the first element in this example. Then the next method is called, which advances the Generator object one iteration, moving the current position to the next element. Calling valid() again now outputs nothing: iteration of the Generator has terminated, and calling current() again returns null. In this example, gen is a generator function; because there is only one yield statement in gen, $g can be iterated only once.

<?php
function gen()
{
    yield 1;
    yield 2;
    yield 3;
}

$g = gen();
echo $g->valid(); echo $g->current(); echo "\n";
echo $g->next(); echo $g->valid(); echo $g->current(); echo "\n";
echo $g->next(); echo $g->valid(); echo $g->current(); echo "\n";
echo $g->next(); echo $g->valid(); echo $g->current();

The iterable elements of the Generator object are the set of values returned by all yield statements, in this case [1, 2, 3]. It looks like an array, but it is essentially different from one: each iteration of the generator object only executes the code between the previous yield statement and the next one.
Reaching a yield statement returns a value, which is equivalent to returning from the generator function. Of course, in practice we don't write code like the above; we use a loop in the generator function and traverse it with foreach, as in the xrange() function above. send yield can also be used in the context of an expression, for example, on the right of an assignment statement:

$data = (yield $value);

Here yield acts as an expression, used in conjunction with the send method of the Generator object. The send method receives a parameter and passes its value to the Generator object as the result of the current yield expression, and also resumes execution of the generator function.

<?php
function gen()
{
    $ret = (yield 'yield1');
    var_dump($ret);
    $ret = (yield 'yield2');
    var_dump($ret);
}

$g = gen();
var_dump($g->current());
var_dump($g->send('ret1'));
var_dump($g->send('ret2'));

The output of the code:

string(6) "yield1"
string(4) "ret1"
string(6) "yield2"
string(4) "ret2"
NULL

In the above code, we first call the function gen to produce a Generator object, then call the object's current method, which returns the first value: the return value of the first yield statement, 'yield1'. Execution of the gen function is suspended at this point. Then we execute the statement var_dump($g->send('ret1'));. Calling $g->send('ret1') passes the parameter 'ret1' to the first yield expression, the yield in (yield 'yield1') (note that this does not include 'yield1' itself), so its value becomes 'ret1', which is then assigned to $ret. Thus the second output line, 'ret1', comes from the first var_dump inside the gen function. At this point iteration of the Generator object resumes, effectively calling next. Execution proceeds to the next yield statement, yield 'yield2', which returns 'yield2'. That becomes the return value of $g->send('ret1'), so the second var_dump outside the function outputs 'yield2'.
Finally, we call the send function again with the parameter 'ret2'. The current position of the Generator object is at the second yield of the gen function, so 'ret2' is passed to the second yield expression, as the value of the yield in (yield 'yield2'), and assigned to $ret. The gen function then resumes execution and reaches its last var_dump. At this point traversal of the generator object $g is over, so the second send call returns NULL, which is what the last var_dump outside the function outputs. As a statement in a generator function, yield hands the value of the expression that follows it back to the caller; if there is no expression (variable or constant) after the yield, it returns NULL, consistent with the return statement. yield is also an expression whose value is the value passed in by the send method: if the Generator object's iteration has not ended, the yield at the current position receives the value from send. So we can think of yield as both a statement (returning a value from the generator function) and an expression (receiving a value from outside the Generator object). Even setting aside the use of Generators to implement coroutines, the space generators save when traversing large data collections is obvious. Let's write a simple benchmark to compare the time and space overhead of the range and xrange functions:

<?php
$n = 100000;

$startTime = microtime(true);
$startMemory = memory_get_usage();
$array = range(1, $n);
foreach ($array as $a) {
}
echo memory_get_usage() - $startMemory, " bytes\n";
echo microtime(true) - $startTime. " s\n";

function xrange($start, $end, $step = 1)
{
    for ($i = $start; $i <= $end; $i += $step) {
        yield $i;
    }
}

$startTime = microtime(true);
$startMemory = memory_get_usage();
$g = xrange(1, $n);
foreach ($g as $i) {
}
echo memory_get_usage() - $startMemory, " bytes\n";
echo microtime(true) - $startTime.
" ms\n"; The output of the code in (PHP 5.5.38): 14649144 bytes 0.015892028808594 ms 408 bytes 0.067026853561401 ms From this test range function to generate an array containing 100000 integers, and then traverse the data, it requires storage space for the 14649152 bytes, about 14 MB. The use of Generator doesn't need to generate an array of all the elements, so its space overhead is 408 bytes. Reference Cooperative multitasking using coroutines (in PHP!) ### Troubleshooting MySQL on Ubuntu Environment Operating System : Ubuntu Server 16.10 Yakkety Yak 64-bit Server Edition MySQL Server Version : Ver 14.14 Distrib 5.7.16, for Linux (x86_64) Slove MySQL Server Keeps Stopping Unexpectedly Today I upgrade my server to Ubuntu 16.10, but MySQL server keeps stopping unexpectedly. I sloved by add Swap on Ubuntu and decrease the innodb_buffer_pool_size for InnoDB stuff. The configuration file name and directory change in Ubuntu 16.10. Add or replace existing line to the /etc/mysql/mysql.conf.d/mysqld.cnf: innodb_buffer_pool_size = 20M Slove Error in Log: [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details). Add following line to [mysqld] section to the /etc/mysql/mysql.conf.d/mysqld.cnf: explicit_defaults_for_timestamp = 1 Slove Error in Log: [Warning] Changed limits: max_open_files: 1024 (requested 5000) The limits for the MySQL service are defined in the Systemd configuration file: /lib/systemd/system/mysql.service, add the following lines to the bottom of the file: LimitNOFILE=infinity LimitMEMLOCK=infinity You could also set a numeric limit, eg LimitNOFILE=4510. Now reload the Systemd configuration with: $ sudo systemctl daemon-reload Restart MySQL and it should now obey the max_open_files or table_open_cache directive. 
### Add Swap on Ubuntu Introduction One of the easiest ways of increasing the responsiveness of your server and guarding against out-of-memory errors in your applications is to add some swap space. Swap is an area on a hard drive that has been designated as a place where the operating system can temporarily store data that it can no longer hold in RAM. Basically, this gives you the ability to increase the amount of information that your server can keep in its working "memory", with some caveats. The space on the hard drive will be used mainly when space in RAM is no longer sufficient for data. The information written to disk will be slower to access than information kept in RAM, but the operating system will prefer to keep running application data in memory and use swap for the older data. Overall, having swap space as a fallback for when your system's RAM is depleted is a good safety net. In this guide, we'll cover how to create and enable a swap file on an Ubuntu 16.10 server. Although swap is generally recommended for systems utilizing traditional spinning hard drives, using swap with SSDs can cause issues with hardware degradation over time. Due to this consideration, I do not recommend enabling swap on SSD storage. Doing so can impact the reliability of the underlying hardware for you and your neighbors. Check the System for Swap Information Before we begin, we will take a look at our operating system to see if we already have some swap space available. We can have multiple swap files or swap partitions, but generally one should be enough. We can see if the system has any configured swap by typing: $ sudo swapon -s Filename Type Size Used Priority If you only get back the header of the table, as I've shown above, you do not currently have any swap space enabled. Another, more familiar way of checking for swap space is with the free utility, which shows us system memory usage.
We can see our current memory and swap usage in Megabytes by typing: $ free -m total used free shared buff/cache available Mem: 487 275 13 64 198 135 Swap: 0 0 0 Check Available Space on the Hard Drive Partition The typical way of allocating space for swap is to use a separate partition devoted to the task. However, altering the partitioning scheme is not always possible. We can just as easily create a swap file that resides on an existing partition. Before we do this, we should be aware of our current disk usage. We can get this information by typing: $ df -h Filesystem Size Used Avail Use% Mounted on udev 225M 0 225M 0% /dev tmpfs 49M 668K 49M 2% /run /dev/vda1 20G 7.6G 11.0G 62% / tmpfs 244M 0 244M 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 244M 0 244M 0% /sys/fs/cgroup tmpfs 49M 0 49M 0% /run/user/1000 Create a Swap File Now that we know our available hard drive space, we can go about creating a swap file within our filesystem. We will create a file called swapfile in our root (/) directory. The file must allocate the amount of space we want for our swap file by using the fallocate program. This command creates a file of a preallocated size instantly, without actually having to write dummy contents. We can create a 2 Gigabyte file by typing: $ sudo fallocate -l 2G /swapfile The prompt will be returned to you almost immediately. We can verify that the correct amount of space was reserved by typing: $ ls -lh /swapfile -rw-r--r-- 1 root root 2.0G Nov 25 08:23 /swapfile As you can see, our file is created with the correct amount of space set aside. Enabling the Swap File Right now, our file is created, but our system does not know that this is supposed to be used for swap. We need to tell our system to format this file as swap and then enable it. Before we do that though, we need to adjust the permissions on our file so that it isn't readable by anyone besides root. Allowing other users to read or write to this file would be a huge security risk. 
We can lock down the permissions by typing: $ sudo chmod 600 /swapfile Verify that the file has the correct permissions by typing: $ ls -lh /swapfile -rw------- 1 root root 2.0G Nov 25 08:23 /swapfile As you can see, only the columns for the root user have the read and write flags enabled. Now that our file is more secure, we can tell our system to set up the swap space by typing: $ sudo mkswap /swapfile Setting up swapspace version 1, size = 2097148 KiB no label, UUID=e2f1e9cf-c0a9-4ed4-b8ab-714b8a7d6944 Our file is now ready to be used as a swap space. We can enable this by typing: $ sudo swapon /swapfile We can verify that the procedure was successful by checking whether our system reports swap space now: $ sudo swapon -s Filename Type Size Used Priority /swapfile file 2097148 0 -1 We have a new swap file here. We can use the free utility again to corroborate our findings: $ free -m total used free shared buff/cache available Mem: 487 275 13 64 198 135 Swap: 2047 0 2047 Our swap has been set up successfully and our operating system will begin to use it as necessary. Make the Swap File Permanent We have our swap file enabled, but when we reboot, the server will not automatically enable the file. We can change that though by modifying the fstab file. Edit the file with root privileges in your text editor: $ sudo vim /etc/fstab At the bottom of the file, you need to add a line that will tell the operating system to automatically use the file you created: /swapfile none swap sw 0 0 Save and close the file when you are finished. Tweak Swap Settings There are a few options that you can configure that will have an impact on your system's performance when dealing with swap. The swappiness parameter configures how often your system swaps data out of RAM to the swap space. This is a value between 0 and 100 that represents a percentage. With values close to zero, the kernel will not swap data to the disk unless absolutely necessary. 
Remember, interactions with the swap file are "expensive" in that they take a lot longer than interactions with RAM and they can cause a significant reduction in performance. Telling the system not to rely on the swap much will generally make your system faster. Values that are closer to 100 will try to put more data into swap in an effort to keep more RAM space free. Depending on your applications' memory profile or what you are using your server for, this might be better in some cases. We can see the current swappiness value by typing: $ cat /proc/sys/vm/swappiness 60 For a Desktop, a swappiness setting of 60 is not a bad value. For a VPS system, we'd probably want to move it closer to 0. We can set the swappiness to a different value by using the sysctl command. For instance, to set the swappiness to 10, we could type: $ sudo sysctl vm.swappiness=10 vm.swappiness = 10 This setting will persist until the next reboot. We can set this value automatically at restart by adding the line to our /etc/sysctl.conf file: $ sudo vim /etc/sysctl.conf At the bottom, you can add: vm.swappiness=10 Save and close the file when you are finished. Another related value that you might want to modify is the vfs_cache_pressure. This setting configures how much the system will choose to cache inode and dentry information over other data. Basically, this is access data about the filesystem. This is generally very costly to look up and very frequently requested, so it's an excellent thing for your system to cache. You can see the current value by querying the proc filesystem again: $ cat /proc/sys/vm/vfs_cache_pressure 100 As it is currently configured, our system removes inode information from the cache too quickly. We can set this to a more conservative setting like 50 by typing: $ sudo sysctl vm.vfs_cache_pressure=50 vm.vfs_cache_pressure = 50 Again, this is only valid for our current session. 
We can change that by adding it to our configuration file like we did with our swappiness setting: $ sudo vim /etc/sysctl.conf At the bottom, add the line that specifies your new value: vm.vfs_cache_pressure = 50 Save and close the file when you are finished. Change Swap Size Turn off all swap: $ sudo swapoff -a Resize the swap file: $ sudo dd if=/dev/zero of=/swapfile bs=1M count=1024 Reinitialize the swap file: $ sudo mkswap /swapfile Turn swap back on: $ sudo swapon /swapfile Conclusion Following the steps in this guide will give you some breathing room in terms of your RAM usage. Swap space is incredibly useful in avoiding some common problems. If you are running into OOM (out of memory) errors, or if you find that your system is unable to use the applications you need, the best solution is to optimize your application configurations or upgrade your server. Configuring swap space, however, can give you more flexibility and can help buy you time on a less powerful server. Original Link How To Add Swap on Ubuntu 14.04 ### aurora - Cross-platform Beanstalkd Queue Server Admin Console aurora is a web-based Beanstalkd queue server admin console written in Go that works on macOS, Linux, and Windows machines. The main idea behind using Go for backend development is to utilize the ability of the compiler to produce zero-dependency binaries for multiple platforms. aurora was created as an attempt to build a very simple and portable application to work with a local or remote Beanstalk server.
Homepage xuri.me/aurora GitHub github.com/xuri/aurora

Features

- Cross-platform support: macOS/Linux/Windows 32/64-bit
- Simple installation (distributed as a single binary)
- Zero dependencies
- Common list of servers in the config for all users + optional Basic Auth
- The full list of available tubes
- Complete statistics about jobs in tubes
- Real-time auto-update with highlighting of changed values
- View jobs in ready/delayed/buried states in every tube
- Add/kick/delete jobs in every tube
- Select multiple tubes by regular expression and clear them
- Set the statistics overview graph for every tube
- Move jobs between tubes
- Ability to pause tubes
- Search job data fields
- Customizable UI (code highlighter, choose columns, edit auto-refresh seconds, pause tube seconds)

ScreenShots Read More Production-Ready Beanstalkd with Laravel Queues Laravel Beanstalkd Queuing Service Exception Handling ### Secure DNS with DNSSEC The Domain Name System Security Extensions (DNSSEC) is a suite of Internet Engineering Task Force (IETF) specifications for securing certain kinds of information provided by the Domain Name System (DNS) as used on Internet Protocol (IP) networks. It is a set of extensions to DNS which provide to DNS clients (resolvers) origin authentication of DNS data, authenticated denial of existence, and data integrity, but not availability or confidentiality. Today DNS has some weaknesses: DNS cache poisoning DNS spoofing (or DNS cache poisoning) is a computer hacking attack, whereby data is introduced into a Domain Name System (DNS) resolver's cache, causing the name server to return an incorrect IP address, diverting traffic to the attacker's computer (or any other computer).
Man-in-the-middle attack In cryptography and computer security, a man-in-the-middle attack (often abbreviated MitM, MiM attack, MitMA or the same using all capital letters) is an attack where the attacker secretly relays and possibly alters the communication between two parties who believe they are directly communicating with each other. Domain hijacking Domain hijacking is the act of changing the registration of a domain name without the permission of its original registrant. It can be done in several ways, generally by exploiting a vulnerability in the domain name registrar's system, through social engineering, or by gaining unauthorized access to the domain owner's email account that is associated with the domain name registration. DNSSEC adds four new record types:

- DNSKEY - holds the public key that is used in the DNSSEC authentication process.
- RRSIG - Resource Record Signature; the digital signatures for a record set are stored in RRSIG records.
- NSEC - designed to prove that no records exist between two different points in a zone.
- DS - Delegation Signer record, published in the parent zone to validate the child zone's DNSKEY.

DNSSEC protects against forged DNS answers. DNSSEC-protected zones are digitally signed to ensure the DNS records received are identical to the DNS records published by the zone owner.
DNSSEC Validation Process

  +------------+
  |   DNS RR   |--------------------------------+
  +------------+                                |
        |                                       v
        | Hash                            +------------+
        v                                 |   DNS RR   |
  +------------+                          +------------+
  |    Hash    |------------------------->|   RRSIG    |      Server
  +------------+                          +------------+
 - - - - - - - - - - - - - - - - - - - - - - - -|- - - - - - -
                                                v              Client
  +------------+           Hash           +------------+
  |   Hash A   |<-------------------------|  DNS Data  |
  +------------+                          +------------+
        |                                 |   RRSIG    |
        |                                 +------------+
        |                                       |
        v                                       v
  +------------+      +------------+      +------------+
  |  Validate  |<-----|   Hash B   |<-----|  DNS Key   |
  +------------+      +------------+      +------------+

I have configured DNSSEC with my domain registrar and CDN service provider, using the SHA-256 digest type and algorithm 13 (ECDSA Curve P-256 with SHA-256). After configuration, test DNSSEC with the dig command:

$ dig @8.8.8.8 +dnssec xuri.me

; <<>> DiG 9.8.3-P1 <<>> @8.8.8.8 +dnssec xuri.me
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16132
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 512
;; QUESTION SECTION:
;xuri.me.  IN  A

;; ANSWER SECTION:
xuri.me.  299  IN  A  104.28.14.58
xuri.me.  299  IN  A  104.28.15.58
xuri.me.  299  IN  RRSIG  A 13 2 300 20160912074210 20160910054210 35273 xuri.me. /FTGXpH//f9YFPvDmxc7hG790Gth9Lv6BByICHotCJFx744RwFs2DQ1J FlKuN1BTshNKqWdnTa1R9ClOD21d7Q==

;; Query time: 112 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Sat Sep 11 14:40:32 2016
;; MSG SIZE rcvd: 171

If the RRSIG records appear in the output, the DNSSEC configuration is successful. Reference DNSSEC – What Is It and Why Is It Important? DNSSEC Operational Practices, Version 2 Resource Records for the DNS Security Extensions DNS Security (DNSSEC) NextSECure (NSEC) RDATA Format ### Excelize - Go Language Library for Reading and Writing Spreadsheet Documents Last week I created a Go language spreadsheet document library named Excelize.
Sometimes we want to generate a spreadsheet document (XLSX / XLSM / XLTM) without the Excel application or .NET or Java applications. After assessing some popular open-source libraries, I couldn't find one that preserves charts when writing back an existing Excel document. These libraries often lose some colors or styles, so I decided to solve this problem and create a library that supports reading and writing spreadsheets with charts. Before doing spreadsheet document operations, we need to understand the document structure and the Open XML (OOXML, Office Open XML) standard. Let's take a look at the spreadsheet document structure. For example, unzipping a Workbook1.xlsx file yields a directory tree like this:

Workbook1
├── [Content_Types].xml
├── _rels
├── docProps
│   ├── app.xml
│   └── core.xml
└── xl
    ├── _rels
    │   └── workbook.xml.rels
    ├── charts
    │   ├── _rels
    │   │   ├── chart1.xml.rels
    │   │   └── chart2.xml.rels
    │   ├── chart1.xml
    │   ├── chart2.xml
    │   ├── colors1.xml
    │   ├── colors2.xml
    │   ├── style1.xml
    │   └── style2.xml
    ├── drawings
    │   ├── _rels
    │   │   └── drawing1.xml.rels
    │   └── drawing1.xml
    ├── sharedStrings.xml
    ├── styles.xml
    ├── theme
    │   └── theme1.xml
    ├── workbook.xml
    └── worksheets
        ├── _rels
        │   └── sheet1.xml.rels
        ├── sheet1.xml
        └── sheet2.xml

Content Types Every package must have a [Content_Types].xml, found at the root of the package. This file contains a list of all of the content types of the parts in the package. Every part and its type must be listed in [Content_Types].xml. The following is a content type for the main content part: <Override PartName="/xl/workbook.xml" ContentType="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet.main+xml"/> It's important to keep this in mind when adding new parts to the package. Relationships Every package contains a relationships part that defines the relationships between the other parts and to resources outside of the package.
This separates the relationships from content and makes it easy to change relationships without changing the sources that reference targets. For an OOXML package, there is always a relationships part .rels within the _rels folder that identifies the starting parts of the package, or the package relationships. For example, the following defines the identity of the start part for the content: <Relationship Id="rId1" Type="http://schemas.openxmlformats.org/officeDocument/2006/relationships/officeDocument" Target="xl/workbook.xml"/> Document Properties There are also typically relationships within .rels for app.xml and core.xml. In addition to the relationships part of the package, each part that is the source of one or more relationships will have its own relationships part. Each such relationships part is found within a _rels sub-folder of the part and is named by appending .rels to the name of the part. Typically, the main content part workbook.xml has its own relationships part workbook.xml.rels. It will contain relationships to the other parts of the content, such as sheet1.xml, sharedStrings.xml, styles.xml, theme1.xml, as well as the URIs for external links. Chartsheet Contains a chart that is stored in its own sheet. A package can contain multiple such parts, referenced from the workbook part. Drawings Contains the presentation and layout information for one or more drawing elements that are present in the worksheet. There should be a drawings part for each worksheet that has a drawing. Shared String Table Contains one occurrence of each unique string that occurs in any worksheet within the workbook. There is one such part per package. Styles Contains all the characteristics for all cells in the workbook, including numeric and text formatting, alignment, font, color, and border. A package contains no more than one such part. Workbook Contains data and references to all of the worksheets. There must be one and only one workbook part.
Worksheet Contains all the data, formulas, and characteristics of a given worksheet. There is one such part for each worksheet in the package. Open XML is an open ECMA-376 standard, also approved as the ISO/IEC 29500 standard, that defines a set of XML schemas for representing spreadsheets, charts, presentations, and word processing documents. Microsoft Office Word 2007, Excel 2007, PowerPoint 2007, and later versions all use Open XML as the default file format. The Open XML file formats are useful for developers because they use an open standard and are based on well-known technologies: ZIP and XML. We can create a library to read and write spreadsheet documents following the OOXML standard; Excelize is a Go implementation of that. Excelize can read and write files created by Office Excel 2007™ and later. Basic Usage Installation $ go get github.com/xuri/excelize/v2 Create spreadsheet Here is a minimal example that creates a spreadsheet file. package main import ( "fmt" "github.com/xuri/excelize/v2" ) func main() { f := excelize.NewFile() // Create a new sheet. index := f.NewSheet("Sheet2") // Set the value of a cell. f.SetCellValue("Sheet2", "A2", "Hello world.") f.SetCellValue("Sheet1", "B2", 100) // Set active sheet of the workbook. f.SetActiveSheet(index) // Save spreadsheet by the given path. if err := f.SaveAs("Book1.xlsx"); err != nil { fmt.Println(err) } } Reading spreadsheet The following constitutes the bare minimum needed to read a spreadsheet document. package main import ( "fmt" "github.com/xuri/excelize/v2" ) func main() { f, err := excelize.OpenFile("Book1.xlsx") if err != nil { fmt.Println(err) return } // Close the spreadsheet. defer func() { if err := f.Close(); err != nil { fmt.Println(err) } }() // Get value from cell by given worksheet name and axis. cell, err := f.GetCellValue("Sheet1", "B2") if err != nil { fmt.Println(err) return } fmt.Println(cell) // Get all the rows in the Sheet1.
rows, err := f.GetRows("Sheet1") if err != nil { fmt.Println(err) return } for _, row := range rows { for _, colCell := range row { fmt.Print(colCell, "\t") } fmt.Println() } } Add chart to spreadsheet file With Excelize chart generation and management is as easy as a few lines of code. You can build charts based on data in your worksheet or generate charts without any data in your worksheet at all. package main import ( "fmt" "github.com/xuri/excelize/v2" ) func main() { categories := map[string]string{ "A2": "Small", "A3": "Normal", "A4": "Large", "B1": "Apple", "C1": "Orange", "D1": "Pear"} values := map[string]int{ "B2": 2, "C2": 3, "D2": 3, "B3": 5, "C3": 2, "D3": 4, "B4": 6, "C4": 7, "D4": 8} f := excelize.NewFile() for k, v := range categories { f.SetCellValue("Sheet1", k, v) } for k, v := range values { f.SetCellValue("Sheet1", k, v) } if err := f.AddChart("Sheet1", "E1", `{ "type": "col3DClustered", "series": [ { "name": "Sheet1!$A$2", "categories": "Sheet1!$B$1:$D$1", "values": "Sheet1!$B$2:$D$2" }, { "name": "Sheet1!$A$3", "categories": "Sheet1!$B$1:$D$1", "values": "Sheet1!$B$3:$D$3" }, { "name": "Sheet1!$A$4", "categories": "Sheet1!$B$1:$D$1", "values": "Sheet1!$B$4:$D$4" }], "title": { "name": "Fruit 3D Clustered Column Chart" } }`); err != nil { fmt.Println(err) return } // Save spreadsheet by the given path. if err := f.SaveAs("Book1.xlsx"); err != nil { fmt.Println(err) } } Add picture to spreadsheet file package main import ( "fmt" _ "image/gif" _ "image/jpeg" _ "image/png" "github.com/xuri/excelize/v2" ) func main() { f, err := excelize.OpenFile("Book1.xlsx") if err != nil { fmt.Println(err) return } // Close the spreadsheet. defer func() { if err := f.Close(); err != nil { fmt.Println(err) } }() // Insert a picture. if err := f.AddPicture("Sheet1", "A2", "image.png", ""); err != nil { fmt.Println(err) } // Insert a picture to worksheet with scaling. 
if err := f.AddPicture("Sheet1", "D2", "image.jpg", `{"x_scale": 0.5, "y_scale": 0.5}`); err != nil { fmt.Println(err) } // Insert a picture offset in the cell with printing support. if err := f.AddPicture("Sheet1", "H2", "image.gif", `{ "x_offset": 15, "y_offset": 10, "print_obj": true, "lock_aspect_ratio": false, "locked": false }`); err != nil { fmt.Println(err) } // Save the spreadsheet with the origin path. if err = f.Save(); err != nil { fmt.Println(err) } } I will continue to update the program and improve the performance of this package. Contributions are welcome: open a pull request to fix a bug, or open an issue to discuss a new feature or change. Performance Figures Performance comparison of similar libs The following graph shows a performance comparison of generating a 102400*50 plain-text matrix with the major open-source Excel libraries on a personal computer (2.6 GHz 6-Core Intel Core i7, 16 GB 2667 MHz DDR4, 500GB SSD, macOS Monterey 12.3.1), covering Go, Python, Java, PHP and NodeJS. Reference Standard ECMA-376 Office Open XML File Formats Office developer documentation Structure of a SpreadsheetML document (Open XML SDK) Open XML SDK 2.5 for Microsoft Office (only works on Windows) Google Chrome Extensions: OOXML Tools ### WebSocket Server Go Implementation In 2001, the C10k problem was raised, and by now there are many ways to solve it. In this post, I will approach it by implementing a WebSocket server in Go. A WebSocket server is a TCP application listening on any port of a server that follows a specific protocol, simple as that. The task of creating a custom server tends to scare people; however, it can be easy to implement a simple WebSocket server on your platform of choice. You will need to already know how HTTP works and have medium programming experience. Depending on language support, knowledge of TCP sockets may be required. The scope of this guide is to present the minimum knowledge you need to write a WebSocket server.
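Before any frames are exchanged, the server must complete the HTTP upgrade handshake. One small, fully specified piece of it is deriving the Sec-WebSocket-Accept response header from the client's Sec-WebSocket-Key (RFC 6455, section 4.2.2). A minimal sketch in Go, using the sample key from the RFC:

```go
package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

// secWebSocketAccept derives the Sec-WebSocket-Accept header value:
// append the fixed GUID to the client's key, SHA-1 hash the result,
// and base64-encode the digest.
func secWebSocketAccept(key string) string {
	const guid = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"
	sum := sha1.Sum([]byte(key + guid))
	return base64.StdEncoding.EncodeToString(sum[:])
}

func main() {
	// Sample Sec-WebSocket-Key from RFC 6455.
	fmt.Println(secWebSocketAccept("dGhlIHNhbXBsZSBub25jZQ=="))
	// → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
}
```

Libraries such as gorilla/websocket (used later in this post) perform this handshake for you inside Upgrader.Upgrade; the sketch only shows what happens under the hood.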
Exchanging Data Frames Either the client or the server can choose to send a message at any time, and that's the magic of WebSockets. However, extracting information from these so-called "frames" of data is a not-so-magical experience. Although all frames follow the same specific format, data going from the client to the server is masked using XOR encryption (with a 32-bit key). Section 5 of the specification describes this in detail. Format Each data frame (from the client to the server or vice versa) follows this same format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7
+-+-+-+-+-------+-+-------------+-------------------------------+
|F|R|R|R| opcode|M| Payload len |    Extended payload length    |
|I|S|S|S|  (4)  |A|     (7)     |             (16/64)           |
|N|V|V|V|       |S|             |   (if payload len==126/127)   |
| |1|2|3|       |K|             |                               |
+-+-+-+-+-------+-+-------------+ - - - - - - - - - - - - - - - +
 4                   5                   6                   7
+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
|     Extended payload length continued, if payload len == 127  |
+ - - - - - - - - - - - - - - - +-------------------------------+
 8                   9                   10                  11
+ - - - - - - - - - - - - - - - +-------------------------------+
|                               |Masking-key, if MASK set to 1  |
+-------------------------------+-------------------------------+
 12                  13                  14                  15
+-------------------------------+-------------------------------+
| Masking-key (continued)       |          Payload Data         |
+-------------------------------- - - - - - - - - - - - - - - - +
:                     Payload Data continued ...                :
+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
|                     Payload Data continued ...                |
+---------------------------------------------------------------+

RSV1-3 can be ignored; they are for extensions. The MASK bit simply tells whether the message is encoded. Messages from the client must be masked, so your server should expect this to be 1. (In fact, section 5.1 of the spec says that your server must disconnect from a client if that client sends an unmasked message.)
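The layout above can be decoded in a few lines of Go. The `parseFrame` helper below is a hypothetical sketch for illustration only: it assumes the whole frame is already buffered, does minimal bounds checking, and skips FIN, opcode, and control-frame handling. The sample bytes are the masked "Hello" frame given in RFC 6455.

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// parseFrame decodes a single fully-buffered frame and returns the
// unmasked payload.
func parseFrame(buf []byte) ([]byte, error) {
	if len(buf) < 2 {
		return nil, errors.New("frame too short")
	}
	masked := buf[1]&0x80 != 0      // MASK bit
	length := uint64(buf[1] & 0x7f) // 7-bit payload length
	offset := 2
	switch length {
	case 126: // the next 16 bits hold the real length
		length = uint64(binary.BigEndian.Uint16(buf[2:4]))
		offset = 4
	case 127: // the next 64 bits hold the real length
		length = binary.BigEndian.Uint64(buf[2:10])
		offset = 10
	}
	payload := make([]byte, length)
	if masked {
		var key [4]byte
		copy(key[:], buf[offset:offset+4])
		offset += 4
		for i := uint64(0); i < length; i++ {
			payload[i] = buf[offset+int(i)] ^ key[i%4] // XOR unmasking
		}
	} else {
		copy(payload, buf[offset:])
	}
	return payload, nil
}

func main() {
	// A masked text frame carrying "Hello", as given in RFC 6455:
	// FIN=1, opcode=0x1, MASK=1, len=5, masking key 37 fa 21 3d.
	frame := []byte{0x81, 0x85, 0x37, 0xfa, 0x21, 0x3d, 0x7f, 0x9f, 0x4d, 0x51, 0x58}
	payload, err := parseFrame(frame)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(payload)) // → Hello
}
```

Note how the masking key repeats every 4 bytes over the payload, which is why the same function would also re-mask data: XOR with the key twice is the identity.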
When sending a frame back to the client, do not mask it and do not set the mask bit. We'll explain masking later. Note: You have to mask messages even when using a secure socket. The opcode field defines how to interpret the payload data: 0x0 for continuation, 0x1 for text (which is always encoded in UTF-8), 0x2 for binary, and other so-called "control codes" that will be discussed later. In this version of WebSockets, 0x3 to 0x7 and 0xB to 0xF have no meaning. The FIN bit tells whether this is the last message in a series. If it's 0, then the server will keep listening for more parts of the message; otherwise, the server should consider the message delivered. More on this later. Decoding Payload Length To read the payload data, you must know when to stop reading. That's why the payload length is important to know. Unfortunately, this is somewhat complicated. To read it, follow these steps: Read bits 9-15 (inclusive) and interpret that as an unsigned integer. If it's 125 or less, then that's the length; you're done. If it's 126, go to step 2. If it's 127, go to step 3. Read the next 16 bits and interpret those as an unsigned integer. You're done. Read the next 64 bits and interpret those as an unsigned integer (the most significant bit MUST be 0). You're done. The following packages provide WebSocket connection abstractions; you can use them to create a WebSocket server in Go. $ go get github.com/gorilla/websocket $ go get github.com/rgamba/evtwebsocket WebSocket Server hub.go package main // hub maintains the set of active clients and broadcasts messages to the clients. type hub struct { // Registered clients. connections map[*connection]bool // Inbound messages from the clients. broadcast chan []byte // Register requests from the clients. register chan *connection // Unregister requests from clients.
unregister chan *connection } var h = hub{ broadcast: make(chan []byte), register: make(chan *connection), unregister: make(chan *connection), connections: make(map[*connection]bool), } func (h *hub) run() { for { select { case c := <-h.register: h.connections[c] = true case c := <-h.unregister: if _, ok := h.connections[c]; ok { delete(h.connections, c) close(c.send) } case m := <-h.broadcast: for c := range h.connections { select { case c.send <- m: default: delete(h.connections, c) close(c.send) } } } } } connection.go package main import ( "fmt" "net/http" "github.com/gorilla/websocket" ) var online int // connection is a middleman between the websocket connection and the hub. type connection struct { // The web socket connection. ws *websocket.Conn // Buffered channel of outbound messages. send chan []byte } // reader pumps messages from the websocket connection to the hub. func (c *connection) reader() { for { _, message, err := c.ws.ReadMessage() if err != nil { fmt.Println(err) break } h.broadcast <- message } c.ws.Close() online -= 1 fmt.Println(online) } // writer writes a message with the given message type and payload. func (c *connection) writer() { for message := range c.send { err := c.ws.WriteMessage(websocket.TextMessage, message) if err != nil { fmt.Println(err) break } } c.ws.Close() } var upgrader = &websocket.Upgrader{ReadBufferSize: 1024, WriteBufferSize: 1024, CheckOrigin: func(r *http.Request) bool { return true }} // wsHandler handles websocket requests from the peer.
func wsHandler(w http.ResponseWriter, r *http.Request) { ws, err := upgrader.Upgrade(w, r, nil) if err != nil { fmt.Println(err) return } online += 1 fmt.Println(online) c := &connection{send: make(chan []byte, 256), ws: ws} h.register <- c defer func() { h.unregister <- c }() go c.writer() c.reader() } main.go package main import ( "flag" "go/build" "log" "net/http" "path/filepath" "text/template" ) var ( addr = flag.String("addr", ":8080", "http service address") assets = flag.String("assets", defaultAssetPath(), "path to assets") homeTempl *template.Template ) func defaultAssetPath() string { p, err := build.Default.Import("github.com/gorilla/websocket", "", build.FindOnly) if err != nil { return "." } return p.Dir } func homeHandler(c http.ResponseWriter, req *http.Request) { homeTempl.Execute(c, req.Host) } func benchmarkHandler(c http.ResponseWriter, req *http.Request) { h.broadcast <- []byte("test message") } func main() { flag.Parse() homeTempl = template.Must(template.ParseFiles(filepath.Join(*assets, "home.html"))) go h.run() http.HandleFunc("/", homeHandler) http.HandleFunc("/ws", wsHandler) http.HandleFunc("/benchmark", benchmarkHandler) if err := http.ListenAndServe(*addr, nil); err != nil { log.Fatal("ListenAndServe:", err) } } WebSocket Client home.html <html> <head> <title>Chat Example</title> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/3.1.0/jquery.min.js"></script> <script type="text/javascript"> $(function() { var conn; var msg = $("#msg"); var log = $("#log"); function appendLog(msg) { var d = log[0] var doScroll = d.scrollTop == d.scrollHeight - d.clientHeight; msg.appendTo(log) if (doScroll) { d.scrollTop = d.scrollHeight - d.clientHeight; } } $("#form").submit(function() { if (!conn) { return false; } if (!msg.val()) { return false; } conn.send(msg.val()); msg.val(""); return false }); if (window["WebSocket"]) { conn = new WebSocket("ws://<server_ip:server_port>/ws"); conn.onclose = function(evt) { 
appendLog($("<div><b>Connection closed.</b></div>")) } conn.onmessage = function(evt) { appendLog($("<div/>").text(evt.data)) } } else { appendLog($("<div><b>Your browser does not support WebSockets.</b></div>")) } }); </script> <style type="text/css"> html { overflow: hidden; } body { overflow: hidden; padding: 0; margin: 0; width: 100%; height: 100%; background: gray; } #log { background: white; margin: 0; padding: 0.5em 0.5em 0.5em 0.5em; position: absolute; top: 0.5em; left: 0.5em; right: 0.5em; bottom: 3em; overflow: auto; } #form { padding: 0 0.5em 0 0.5em; margin: 0; position: absolute; bottom: 1em; left: 0px; width: 100%; overflow: hidden; } </style> </head> <body> <div id="log"></div> <form id="form"> <input type="submit" value="Send" /> <input type="text" id="msg" size="64"/> </form> </body> </html> benchmark.go package main import ( "fmt" "log" "github.com/rgamba/evtwebsocket" "golang.org/x/net/websocket" ) func main() { for i := 0; i < 1000; i++ { go func() { c := evtwebsocket.Conn{ // When connection is established OnConnected: func(w *websocket.Conn) { fmt.Println("Connected") }, // When a message arrives OnMessage: func(msg []byte) { log.Printf("Received unmatched message: %s\n", msg) }, // When the client disconnects for any reason OnError: func(err error) { fmt.Printf("** ERROR **\n%s\n", err.Error()) }, // This is used to match the request and response messages MatchMsg: func(req, resp []byte) bool { return string(req) == string(resp) }, } // Connect if err := c.Dial("ws://<server_ip:server_port>/ws"); err != nil { log.Fatal(err) } // Create the message with a callback msg := evtwebsocket.Msg{ Body: nil, Callback: func(resp []byte) { fmt.Printf("Got back: %s\n", resp) }, } log.Printf("%s\n", msg.Body) }() } select {} } Server Parameters Optimization TCP/IP parameters Modify /etc/sysctl.conf and add the following lines: net.ipv4.tcp_wmem = 4096 87380 4161536 net.ipv4.tcp_rmem = 4096 87380 4161536 net.ipv4.tcp_mem = 786432 2097152 3145728 The file-max
parameter The file-max parameter sets the maximum number of file handles that the Linux kernel will allocate. Set it temporarily by running: $ echo 1000000 | sudo tee /proc/sys/fs/file-max For a persistent setting, edit the /etc/sysctl.conf file and add the following line: fs.file-max = 1000000 ulimit parameter optimization Linux itself has a max-processes-per-user limit. This feature allows us to control the number of processes an existing user on the server may be authorized to have. To improve performance, you can safely set the limit of processes for the super-user root to be unlimited. A hard limit can only be raised by root (any process can lower it), so it is useful for security: a non-root process cannot overstep a hard limit. But it's inconvenient in that a non-root process can't have a lower limit than its children. A soft limit can be changed by the process at any time, so it's convenient as long as processes cooperate, but no good for security. Check the current ulimit settings with: $ ulimit -a To increase the ulimit parameters, edit the /etc/security/limits.conf file and add the following lines: * hard nofile 1000000 * soft nofile 1000000 root hard nofile 1000000 root soft nofile 1000000 Note that the hard limit can't be greater than the /proc/sys/fs/nr_open value; NR_OPEN is the maximum number of files that can be opened by a process, so you may need to change the nr_open value: $ echo 2000000 | sudo tee /proc/sys/fs/nr_open Client Parameters Optimization By using the following virtual IP address configuration, you can create (65535-1024)*4=258044 connections: $ sudo ifconfig eth0:0 172.17.0.2 netmask 255.255.255.0 up $ sudo ifconfig eth0:1 172.17.0.3 netmask 255.255.255.0 up $ sudo ifconfig eth0:2 172.17.0.4 netmask 255.255.255.0 up $ sudo ifconfig eth0:3 172.17.0.5 netmask 255.255.255.0 up Open the /etc/sysctl.conf file and add the following line: net.ipv4.ip_local_port_range = 1024 65535 Make the configuration take effect: $ sudo /sbin/sysctl -p OutOfMemory Killer If the RAM of the server is not enough, in some cases processes might get "Killed". You can disable the OOM killer for a process to work around this by executing the following command, but the better way is to increase RAM. $ echo -17 | sudo tee /proc/$(pidof java)/oom_adj Websocket Server Environment CPU : Intel(R) Xeon(R) CPU E5-2620 @ 2.10GHz (6 Core, 12 Thread) RAM : 4GB DDR3 1333 MHz HDD : 64GB HDD OS : CentOS Linux release 7.2.1511 (Core) Kernel : Linux 3.10.0-327.22.2.el7.x86_64 Each benchmark program can initiate 5000 connections; deploying the benchmark program on two other servers simulates 10000 clients. Use nmon as a performance monitor and use Hey to send HTTP requests to the WebSocket server. Hey is a tiny program that sends some load to a web application. It's similar to Apache Bench (ab), but with better availability across different platforms and a less troubling installation experience. Once the server receives a request, it sends a broadcast to all clients. $ sudo nmon16e_x86_rhel65 -f nmon -s 1 -c 180 -t $ hey -n 1000 -c 10 http://<server_ip:server_port>/benchmark Summary: Total: 31.9855 secs Slowest: 5.6196 secs Fastest: 0.0007 secs Average: 0.3034 secs Requests/sec: 31.2642 Status code distribution: [200] 1000 responses Response time histogram: 0.001 [1] | 0.563 [921] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎ 1.124 [47] |∎∎ 1.686 [12] | 2.248 [4] | 2.810 [11] | 3.372 [1] | 3.934 [1] | 4.496 [1] | 5.058 [0] | 5.620 [1] | Latency distribution: 10% in 0.1193 secs 25% in 0.1606 secs 50% in 0.2246 secs 75% in 0.2715 secs 90% in 0.4822 secs 95% in 0.7415 secs 99% in 2.5077 secs You will get a .nmon file when the benchmark finishes. Open that file with NMONVisualizer, a Java GUI tool for analyzing nmon system files from both AIX and Linux. It also parses IOStat files, IBM verbose GC logs, Windows Perfmon & ESXTop CSV data and JSON data. From it you can get some benchmark charts.
Reference Mozilla Developer Network - Writing WebSocket servers ### LRU and LFU Cache Algorithms Least Recently Used (LRU) Discards the least recently used items first. This algorithm requires keeping track of what was used when, which is expensive if one wants to make sure the algorithm always discards the least recently used item. General implementations of this technique require keeping "age bits" for cache lines and tracking the least recently used cache line based on those age bits. In such an implementation, every time a cache line is used, the age of all other cache lines changes. LRU Cache Elimination Process Golang implementation: github.com/golang/groupcache/blob/master/lru/lru.go Least-Frequently Used (LFU) Least Frequently Used (LFU) is a type of cache algorithm used to manage memory within a computer. The standard characteristics of this method involve the system keeping track of the number of times a block is referenced in memory. When the cache is full and requires more room, the system will purge the item with the lowest reference frequency. The simplest method to employ an LFU algorithm is to assign a counter to every block that is loaded into the cache. Each time a reference is made to that block, the counter is increased by one. When the cache reaches capacity and has a new block waiting to be inserted, the system will search for the block with the lowest counter and remove it from the cache. LFU Cache Elimination Process Golang implementation: github.com/bluele/gcache/blob/master/lfu.go The difference between LRU and LFU For example, if the cache size is 3 and the data access sequence is set(2,2), set(1,1), get(2), get(1), get(2), set(3,3), set(4,4), then on set(4,4) the LFU algorithm will evict (3,3), while LRU will evict (1,1). ### Disjoint-set Forests Go Implementation Disjoint-set The disjoint-set data structure, also called a union–find data structure or merge–find set, maintains a collection \(S = \lbrace S_1, S_2,...,S_k\rbrace\) of disjoint dynamic sets.
We identify each set by a \(representative\), which is some member of the set. In some applications, it doesn't matter which member is used as the representative; we care only that if we ask for the representative of a dynamic set twice without modifying the set between the requests, we get the same answer both times. Other applications may require a prespecified rule for choosing the representative, such as choosing the smallest member in the set (assuming, of course, that the elements can be ordered). It supports two useful operations: Find: Determine which subset a particular element is in. Find typically returns an item from this set that serves as its \(representative\); by comparing the result of two Find operations, one can determine whether two elements are in the same subset. Union: Join two subsets into a single subset. Disjoint-set Forests Disjoint-set forests are data structures where each set is represented by a tree data structure, in which each node holds a reference to its parent node. They were first described by Bernard A. Galler and Michael J. Fischer in 1964, although their precise analysis took years. In a disjoint-set forest, the representative of each set is the root of that set's tree. Find follows parent nodes until it reaches the root. Union combines two trees into one by attaching the root of one to the root of the other. These operations are typically implemented as MakeSet, Find, and Union. Using the linked-list set representation and the simple implementation of Union, a sequence of \(2n-1\) operations on \(n\) objects takes \(\Theta(n^2)\) time, or \(\Theta(n)\) time per operation on average: the \(n\) MakeSet operations take \(\Theta(n)\) time in total, and the \(i\)-th Union updates \(i\) objects, so \[ n + \sum^{n-1}_{i=1}{i} = \Theta(n^2)\] The total number of operations is \(2n-1\), and so each operation on average requires \(\Theta(n)\) time. That is, the amortized time of an operation is \(\Theta(n)\).
Go Implementation disjoint.go: package disjoint type Element struct { Parent *Element Value interface{} } func Makeset() *Element { e := new(Element) e.Parent = e return e } func Find(e *Element) *Element { if e.Parent == e { return e } e.Parent = Find(e.Parent) return e.Parent } func Union(e1, e2 *Element) { root1 := Find(e1) root2 := Find(e2) root1.Parent = root2 } disjoint_test.go: package disjoint import ( "testing" ) func TestMakeSet(t *testing.T) { e1 := Makeset() if e1.Parent != e1 { t.Errorf("Incorrect parent in Makeset") } } func TestUnion(t *testing.T) { e1 := Makeset() e2 := Makeset() Union(e1, e2) if Find(e1) != e2 { t.Errorf("Incorrect parent after a union") } } func TestPathCompression(t *testing.T) { e1 := Makeset() e2 := Makeset() e3 := Makeset() Union(e2, e1) Union(e3, e2) if e3.Parent != e1 { t.Errorf("Path was incorrectly compressed after 2 unions") } } Reference Chapter 21, "Data Structures for Disjoint Sets", of Introduction to Algorithms, 3rd Edition, by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein ### Continuous Integration with Jenkins Cluster Introduction Jenkins is an open source continuous integration tool written in Java. The project was forked from Hudson after a dispute with Oracle. Jenkins provides continuous integration services for software development. It is a server-based system running in a servlet container such as Apache Tomcat. It supports SCM tools including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, Clearcase and RTC, and can execute Apache Ant and Apache Maven based projects as well as arbitrary shell scripts and Windows batch commands. The primary developer of Jenkins is Kohsuke Kawaguchi. Released under the MIT License, Jenkins is free software. Builds can be started by various means, including being triggered by a commit in a version control system, by scheduling via a cron-like mechanism, by building when other builds have completed, and by requesting a specific build URL.
Environment and Versions Master Ubuntu Server 16.04 (Xenial Xerus) 64-bit 172.16.136.142 Jenkins v2.7.1 Slave Ubuntu Server 16.04 (Xenial Xerus) 64-bit 172.16.136.138 Jenkins v2.7.1 Set the date and timezone: $ sudo apt-get install ntpdate $ sudo dpkg-reconfigure tzdata $ sudo ntpdate pool.ntp.org Install Jenkins on all nodes: $ wget -q -O - https://jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add - $ sudo sh -c 'echo deb http://pkg.jenkins-ci.org/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list' $ sudo apt-get update $ sudo apt-get install jenkins Set up the jenkins user password on the slave node: $ sudo passwd jenkins Configure Jenkins via the Web UI on all nodes: Unlock Jenkins Install suggested plugins Create first admin user Slave node configuration on the master node: Enter Jenkins → Credentials → System → Global credentials (unrestricted) → Add Credentials Add slave nodes: Jenkins → Manage Jenkins → New Node Slave node configuration: Launch slave agent Jenkins → Nodes → slave → Launch agent ### Debugging with Xdebug and PhpStorm on OS X Xdebug is a PHP extension which provides debugging and profiling capabilities. It uses the DBGp debugging protocol. PhpStorm is an intelligent PHP IDE with refactorings, code completion, on-the-fly code analysis and a coding-productivity orientation. PHP 5.5.37 PhpStorm 2016.1 Build #PS-145.258 Xdebug 2.4.0 Homebrew 0.9.9 Install Xdebug for PHP Homebrew makes installing Xdebug simple: $ brew install php55-xdebug We can get information about the installation with the following command: $ brew info php55-xdebug To check that Xdebug was installed, use the php-cli command or phpinfo(). Both should contain the Xdebug configuration. $ php -i | grep xdebug Configure Xdebug and PhpStorm If you are using php-fpm, change the default port of Xdebug because 9000 is taken by php-fpm on the machine.
Configure /usr/local/etc/php/5.5/conf.d/ext-xdebug.ini like this: [xdebug] zend_extension="/usr/local/opt/php55-xdebug/xdebug.so" xdebug.remote_enable=1 xdebug.remote_port=9001 xdebug.remote_handler="dbgp" xdebug.remote_host="localhost" xdebug.idekey=PHPSTORM Restart PHP $ brew services restart php55 And also change the port in PhpStorm: Preferences → Languages & Frameworks → PHP → Development environment: Set the PHP language level and interpreter Preferences → Languages & Frameworks → PHP → Debug: Set the Xdebug port to 9001 Preferences → Languages & Frameworks → PHP → Debug → DBGp Proxy: Set the IDE key to PHPSTORM, the Host to your host and the Port to 80 Preferences → Languages & Frameworks → PHP → Servers: Add a new server with your own Name (localhost), Host (localhost), Port (80) and Xdebug as the Debugger Click Edit Configurations... in the upper right corner of PhpStorm to open the Run/Debug Configurations dialog, and set the Server, Start URL and Browser Launch Debug Session On PhpStorm, from the menu: Run → Start Listening for PHP Debug Connections Chrome Extension Xdebug helper Firefox Add-ons The easiest Xdebug Related Articles Profiling PHP Applications with XHProf Profiling PHP Applications with Xdebug ### Implement Go RPC Service Based on Apache Thrift Remote Procedure Call (RPC) is a protocol that one program can use to request a service from a program located in another computer in a network without having to understand network details. (A procedure call is also sometimes known as a function call or a subroutine call.) RPC uses the client/server model. The Apache Thrift software framework, for scalable cross-language services development, combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml, Delphi and other languages. Thrift Network Stack Server - Single-threaded, event-driven etc. Processor - Compiler generated.
Protocol - JSON, compact etc. Transport - Raw TCP, HTTP etc. Protocol Layer The protocol layer provides serialization and deserialization. Thrift supports the following protocols: TBinaryProtocol - A straightforward binary format encoding numeric values as binary, rather than converting to text. TCompactProtocol - Very efficient, dense encoding of data (see details below). TDenseProtocol - Similar to TCompactProtocol but strips off the meta information from what is transmitted, and adds it back in at the receiver. TDenseProtocol is still experimental and not yet available in the Java implementation. TJSONProtocol - Uses JSON for encoding of data. TSimpleJSONProtocol - A write-only protocol using JSON. Suitable for parsing by scripting languages. TDebugProtocol - Uses a human-readable text format to aid in debugging. Transport Layer The transport layer is responsible for reading from and writing to the wire. Thrift supports the following: TSocket - Uses blocking socket I/O for transport. TFramedTransport - Sends data in frames, where each frame is preceded by a length. This transport is required when using a non-blocking server. TFileTransport - This transport writes to a file. While this transport is not included with the Java implementation, it should be simple enough to implement. TMemoryTransport - Uses memory for I/O. The Java implementation uses a simple ByteArrayOutputStream internally. TZlibTransport - Performs compression using zlib. Used in conjunction with another transport. Not available in the Java implementation. Install Thrift Install Apache Thrift on OS X via Homebrew and validate the Thrift version: $ brew install thrift && thrift -version Thrift IDL The Thrift interface definition language (IDL) allows for the definition of Thrift Types. A Thrift IDL file is processed by the Thrift code generator to produce code for the various target languages to support the defined structs and services in the IDL file. The first things to know about are types.
The available types in Thrift are: bool - Boolean, one byte i8 (byte) - Signed 8-bit integer i16 - Signed 16-bit integer i32 - Signed 32-bit integer i64 - Signed 64-bit integer double - 64-bit floating point value string - String binary - Blob (byte array) map<t1,t2> - Map from one type to another list<t1> - Ordered list of one type set<t1> - Set of unique elements of one type Create a mythrift.thrift file: /** * Thrift files can namespace, package, or prefix their output in various * target languages. */ namespace go mythrift.demo namespace php mythrift.demo /** * Structs are the basic complex data structures. They are comprised of fields * which each have an integer identifier, a type, a symbolic name, and an * optional default value. * * Fields can be declared "optional", which ensures they will not be included * in the serialized output if they aren't set. Note that this requires some * manual management in some languages. */ struct Article{ 1: i32 id, 2: string title, 3: string content, 4: string author, } const map<string,string> MAPCONSTANT = {'hello':'world', 'goodnight':'moon'} service myThrift { list<string> CallBack(1:i64 callTime, 2:string name, 3:map<string, string> paramMap), void put(1: Article newArticle), } Compile the IDL file: $ thrift -r --gen go mythrift.thrift $ thrift -r --gen php mythrift.thrift $ thrift -r --gen php:server mythrift.thrift Executing the above commands generates a gen-go folder; copy the gen-go/mythrift folder into $GOPATH.
Get the Go Thrift package: $ go get github.com/apache/thrift/lib/go/thrift Implement the Thrift Server in Go Create a main.go file in the thrift-server folder: package main import ( "fmt" "os" "time" "git.apache.org/thrift.git/lib/go/thrift" "mythrift/demo" // Import the interface package generated by Thrift ) const ( NetworkAddr = "127.0.0.1:9090" ) type mythriftThrift struct{} func (m *mythriftThrift) CallBack(callTime int64, name string, paramMap map[string]string) (r []string, err error) { fmt.Println("-->from client Call:", time.Unix(callTime, 0).Format("2006-01-02 15:04:05"), name, paramMap) r = append(r, "key:"+paramMap["a"]+" value:"+paramMap["b"]) return } func (m *mythriftThrift) Put(s *demo.Article) (err error) { fmt.Printf("Article--->id: %d\tTitle:%s\tContent:%s\tAuthor:%s\n", s.ID, s.Title, s.Content, s.Author) return nil } func main() { transportFactory := thrift.NewTFramedTransportFactory(thrift.NewTTransportFactory()) protocolFactory := thrift.NewTBinaryProtocolFactoryDefault() serverTransport, err := thrift.NewTServerSocket(NetworkAddr) if err != nil { fmt.Println("Error!", err) os.Exit(1) } handler := &mythriftThrift{} processor := demo.NewMyThriftProcessor(handler) server := thrift.NewTSimpleServer4(processor, serverTransport, transportFactory, protocolFactory) fmt.Println("thrift server in", NetworkAddr) server.Serve() } Implement the Thrift Client in Go Create a main.go file in the thrift-client folder: package main import ( "fmt" "net" "os" "strconv" "time" "git.apache.org/thrift.git/lib/go/thrift" "mythrift/demo" ) const ( HOST = "127.0.0.1" PORT = "9090" ) func main() { startTime := currentTimeMillis() transportFactory := thrift.NewTFramedTransportFactory(thrift.NewTTransportFactory()) protocolFactory := thrift.NewTBinaryProtocolFactoryDefault() transport, err := thrift.NewTSocket(net.JoinHostPort(HOST, PORT)) if err != nil { fmt.Fprintln(os.Stderr, "error resolving address:", err) os.Exit(1) } useTransport := transportFactory.GetTransport(transport) client :=
demo.NewMyThriftClientFactory(useTransport, protocolFactory) if err := transport.Open(); err != nil { fmt.Fprintln(os.Stderr, "Error opening socket to "+HOST+":"+PORT, " ", err) os.Exit(1) } defer transport.Close() for i := 0; i < 10; i++ { paramMap := make(map[string]string) paramMap["a"] = "mythrift.demo" paramMap["b"] = "test" + strconv.Itoa(i+1) r1, _ := client.CallBack(time.Now().Unix(), "Go client", paramMap) fmt.Println("Go client call->", r1) } model := demo.Article{1, "Send from Go Thrift Client", "Hello World!", "Go"} client.Put(&model) endTime := currentTimeMillis() fmt.Printf("The call took:%d-%d=%d Millis \n", endTime, startTime, (endTime - startTime)) } func currentTimeMillis() int64 { return time.Now().UnixNano() / 1000000 } Implement the Thrift Client in PHP Clone git@github.com:apache/thrift.git and import thrift/lib/php/lib in the PHP script; create a phpthrift-client folder and copy the gen-php folder into it. Create main.php in the phpthrift-client folder: <?php /** * Thrift RPC - PHP Client */ namespace mythrift\demo; error_reporting(E_ALL); $startTime = getMillisecond(); $ROOT_DIR = realpath(dirname(__FILE__).'/'); $GEN_DIR = realpath(dirname(__FILE__).'/') . '/gen-php'; require_once $ROOT_DIR . '/Thrift/ClassLoader/ThriftClassLoader.php'; use Thrift\ClassLoader\ThriftClassLoader; use Thrift\Protocol\TBinaryProtocol; use Thrift\Transport\TSocket; use Thrift\Transport\TSocketPool; use Thrift\Transport\TFramedTransport; use Thrift\Transport\TBufferedTransport; $loader = new ThriftClassLoader(); $loader->registerNamespace('Thrift',$ROOT_DIR); $loader->registerDefinition('mythrift\demo', $GEN_DIR); $loader->register(); $thriftHost = '127.0.0.1'; $thriftPort = 9090; $socket = new TSocket($thriftHost,$thriftPort); $socket->setSendTimeout(10000); // Sets the send timeout. $socket->setRecvTimeout(20000); // Sets the receive timeout.
$transport = new TFramedTransport($socket); $protocol = new TBinaryProtocol($transport); $client = new \mythrift\demo\myThriftClient($protocol); $transport->open(); $socket->setDebug(TRUE); for ($i=0; $i < 10; $i++) { $item = array(); $item['a'] = 'mythrift.demo'; $item['b'] = 'test' . ($i + 1); // string concatenation uses '.', not '+'; matches the Go client's test1..test10 $result = $client->CallBack(time(), 'PHP Client', $item); echo 'PHP Client Call->', implode('', $result), PHP_EOL; } $s = new \mythrift\demo\Article(); $s->id = 1; $s->title = 'Send from PHP Thrift Client'; $s->content = 'Hello World!'; $s->author = 'PHP'; $client->put($s); $endTime = getMillisecond(); echo 'The call took:', $endTime, '-', $startTime, '=', ($endTime - $startTime), ' Millis', PHP_EOL; function getMillisecond() { list($t1, $t2) = explode(' ', microtime()); return (float)sprintf('%.0f', (floatval($t1) + floatval($t2)) * 1000); } $transport->close(); Build and Testing Build and run thrift-server: $ go build thrift-server && ./thrift-server thrift server in 127.0.0.1:9090 Build and run thrift-client: $ go build thrift-client && ./thrift-client Go client call-> [key:mythrift.demo value:test1] Go client call-> [key:mythrift.demo value:test2] Go client call-> [key:mythrift.demo value:test3] Go client call-> [key:mythrift.demo value:test4] Go client call-> [key:mythrift.demo value:test5] Go client call-> [key:mythrift.demo value:test6] Go client call-> [key:mythrift.demo value:test7] Go client call-> [key:mythrift.demo value:test8] Go client call-> [key:mythrift.demo value:test9] Go client call-> [key:mythrift.demo value:test10] The call took:1467012021504-1467012021428=76 Millis Thrift server output: -->from client Call: 2016-06-27 15:20:21 Go client map[a:mythrift.demo b:test1] -->from client Call: 2016-06-27 15:20:21 Go client map[b:test2 a:mythrift.demo] -->from client Call: 2016-06-27 15:20:21 Go client map[b:test3 a:mythrift.demo] -->from client Call: 2016-06-27 15:20:21 Go client map[a:mythrift.demo b:test4] -->from client Call: 2016-06-27 15:20:21 Go client map[b:test5 a:mythrift.demo] -->from client Call: 2016-06-27 15:20:21 Go client map[a:mythrift.demo b:test6] -->from client Call: 2016-06-27 15:20:21 Go client map[b:test7 a:mythrift.demo] -->from client Call: 2016-06-27 15:20:21 Go client map[b:test8 a:mythrift.demo] -->from client Call: 2016-06-27 15:20:21 Go client map[a:mythrift.demo b:test9] -->from client Call: 2016-06-27 15:20:21 Go client map[a:mythrift.demo b:test10] Article--->id: 1 Title:Send from Go Thrift Client Content:Hello World! Author:Go Run phpthrift-client/main.php: $ php main.php PHP Client Call->key:mythrift.demo value:test1 PHP Client Call->key:mythrift.demo value:test2 PHP Client Call->key:mythrift.demo value:test3 PHP Client Call->key:mythrift.demo value:test4 PHP Client Call->key:mythrift.demo value:test5 PHP Client Call->key:mythrift.demo value:test6 PHP Client Call->key:mythrift.demo value:test7 PHP Client Call->key:mythrift.demo value:test8 PHP Client Call->key:mythrift.demo value:test9 PHP Client Call->key:mythrift.demo value:test10 The call took:1467018064708-1467018064646=62 Millis Thrift server output: -->from client Call: 2016-06-27 17:01:04 PHP Client map[a:mythrift.demo b:test1] -->from client Call: 2016-06-27 17:01:04 PHP Client map[a:mythrift.demo b:test2] -->from client Call: 2016-06-27 17:01:04 PHP Client map[a:mythrift.demo b:test3] -->from client Call: 2016-06-27 17:01:04 PHP Client map[a:mythrift.demo b:test4] -->from client Call: 2016-06-27 17:01:04 PHP Client map[a:mythrift.demo b:test5] -->from client Call: 2016-06-27 17:01:04 PHP Client map[a:mythrift.demo b:test6] -->from client Call: 2016-06-27 17:01:04 PHP Client map[a:mythrift.demo b:test7] -->from client Call: 2016-06-27 17:01:04 PHP Client map[a:mythrift.demo b:test8] -->from client Call: 2016-06-27 17:01:04 PHP Client map[a:mythrift.demo b:test9] -->from client Call: 2016-06-27 17:01:04 PHP Client map[a:mythrift.demo b:test10] Article--->id: 1 Title:Send from PHP Thrift Client Content:Hello World! Author:PHP ### Write on Graduation Season June 24, 2016, the end of my college life.
It also marks the end of 16 years of campus life. I will face challenges I never met in school. This is a new start. When I was a freshman, I joined the UCAIS team to learn web front-end technology with teacher Zhao and senior Yu Liu. As a sophomore I joined the TrustWorty team at the Heilongjiang Institute of Technology. In my junior year I worked as a co-founder at Harbin Wizard Technology Co., Ltd. with Mr. Luo. I started my entrepreneurial life at this time. In my last year of college I joined the Enterprise Security Group of Qihoo 360 Technology Co., Ltd. as a software engineer intern. Thanks very much to everyone for their support and help. In a word, my college life was memorable and exciting. ### Embedded Icon in Go Windows Application Install MinGW MinGW is a compiler system that uses GCC to produce Windows programs: Win32 ports of GCC, GDB and binutils to build native Win32 programs that rely on no 3rd-party DLLs. Install GCC Install GCC (the GNU C Compiler) from MinGW: select the mingw32-gcc package from All Packages → MinGW → MinGW Based System in the left sidebar, and click Apply Changes in the Installation menu. Environment Variable Setting Add a MinGW environment variable set to C:\MinGW, and append C:\MinGW\bin to the Path value. After that, check the gcc version info in a command prompt window with the gcc -v command. C:\Documents and Settings\Administrator\go_res>gcc -v Using built-in specs.
COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=c:/mingw/bin/../libexec/gcc/mingw32/4.9.3/lto-wrapper.exe Target: mingw32 Configured with: ../src/gcc-4.9.3/configure --build=x86_64-pc-linux-gnu --host=mingw32 --prefix=/mingw --disable-win32-registry --target=mingw32 --with-arch=i586 --enable-languages=c,c++,objc,obj-c++,fortran,ada --enable-static --enable-shared --enable-threads --with-dwarf2 --disable-sjlj-exceptions --enable-version-specific-runtime-libs --enable-libstdcxx-debug --with-tune=generic --enable-nls Thread model: win32 gcc version 4.9.3 (GCC) Create a main.exe.manifest file with the following content: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <assemblyIdentity version="1.0.0.0" processorArchitecture="x86" name="controls" type="win32" /> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="6.0.0.0" processorArchitecture="*" publicKeyToken="6595b64144ccf1df" language="*" /> </dependentAssembly> </dependency> </assembly> Create a main.rc file with the following content: IDI_ICON1 ICON "icon.ico" 1 24 "main.exe.manifest" Also add version info: 1 VERSIONINFO FILEVERSION 1,0,0,0 PRODUCTVERSION 1,0,0,0 FILEFLAGS 0x0L FILEFLAGSMASK 0x3fL FILESUBTYPE 0 BEGIN BLOCK "StringFileInfo" BEGIN BLOCK "040904B0" BEGIN VALUE "FileDescription", "File description" VALUE "FileVersion", "1.0" VALUE "LegalCopyright", "Copyright" VALUE "InternalName", "Internal name" VALUE "OriginalFilename", "File name" VALUE "ProductName", "Product name" VALUE "ProductVersion", "1.0" END END BLOCK "VarFileInfo" BEGIN VALUE "Translation", 0x0409, 0x04B0 END END 1 ICON "icon.ico" 1 RT_MANIFEST "main.exe.manifest" icon.ico is the name of the application icon; make sure the icon file is in the same directory as the main.exe.manifest, main.rc and main.go files.
Build and Embed the Icon Extract windres.exe from the MinGW installer, which we can download from SourceForge. Use windres.exe to generate a .syso file; the Go compiler will automatically find and link this file in the current compile directory. C:\Documents and Settings\Administrator\go_res>windres -o main-res.syso main.rc && go build -i C:\Documents and Settings\Administrator\go_res>dir Volume in drive C has no label. Volume Serial Number is E43D-2258 Directory of C:\Documents and Settings\Administrator\go_res 2016-06-15 15:27 <DIR> . 2016-06-15 15:27 <DIR> .. 2016-06-15 15:07 150,876 icon.ico 2016-06-15 15:22 152,624 main-res.syso 2016-06-15 15:22 2,144,256 go_res.exe 2016-06-15 15:10 603 main.exe.manifest 2016-06-15 15:00 77 main.go 2016-06-15 15:13 51 main.rc 2011-11-30 19:20 1,811,470 windres.exe Reference MSDN: Resource-Definition Statements - VERSIONINFO resource Update: if you get an exit status 0xc0000139 error on Windows, download the 64-bit toolchain GCC 13.2.0 (with POSIX threads); after that, execute the gcc -v command again and the version info should look like this: gcc -v Using built-in specs.
COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=C:/MinGW/bin/../libexec/gcc/x86_64-w64-mingw32/13.2.0/lto-wrapper.exe OFFLOAD_TARGET_NAMES=nvptx-none Target: x86_64-w64-mingw32 Configured with: ../configure --prefix=/R/winlibs64ucrt_stage/inst_gcc-13.2.0/share/gcc --build=x86_64-w64-mingw32 --host=x86_64-w64-mingw32 --enable-offload-targets=nvptx-none --with-pkgversion='MinGW-W64 x86_64-ucrt-posix-seh, built by Brecht Sanders, r8' --with-tune=generic --enable-checking=release --enable-threads=posix --disable-sjlj-exceptions --disable-libunwind-exceptions --disable-serial-configure --disable-bootstrap --enable-host-shared --enable-plugin --disable-default-ssp --disable-rpath --disable-libstdcxx-debug --disable-version-specific-runtime-libs --with-stabs --disable-symvers --enable-languages=c,c++,fortran,lto,objc,obj-c++ --disable-gold --disable-nls --disable-stage1-checking --disable-win32-registry --disable-multilib --enable-ld --enable-libquadmath --enable-libada --enable-libssp --enable-libstdcxx --enable-lto --enable-fully-dynamic-string --enable-libgomp --enable-graphite --enable-mingw-wildcard --enable-libstdcxx-time --enable-libstdcxx-pch --with-mpc=/d/Prog/winlibs64ucrt_stage/custombuilt --with-mpfr=/d/Prog/winlibs64ucrt_stage/custombuilt --with-gmp=/d/Prog/winlibs64ucrt_stage/custombuilt --with-isl=/d/Prog/winlibs64ucrt_stage/custombuilt --disable-libstdcxx-backtrace --enable-install-libiberty --enable-__cxa_atexit --without-included-gettext --with-diagnostics-color=auto --enable-clocale=generic --with-libiconv --with-system-zlib --with-build-sysroot=/R/winlibs64ucrt_stage/gcc-13.2.0/build_mingw/mingw-w64 CFLAGS='-I/d/Prog/winlibs64ucrt_stage/custombuilt/include/libdl-win32 -march=nocona -msahf -mtune=generic -O2' CXXFLAGS='-Wno-int-conversion -march=nocona -msahf -mtune=generic -O2' LDFLAGS='-pthread -Wl,--no-insert-timestamp -Wl,--dynamicbase -Wl,--high-entropy-va -Wl,--nxcompat -Wl,--tsaware' LD=/d/Prog/winlibs64ucrt_stage/custombuilt/share/binutils/bin/ld.exe Thread 
model: posix Supported LTO compression algorithms: zlib zstd gcc version 13.2.0 (MinGW-W64 x86_64-ucrt-posix-seh, built by Brecht Sanders, r8) ### Wiener Philharmoniker Summer Night Concert Schönbrunn 2016 Date: Thursday, May 26, 2016 Time: 8:30 PM Place: Schönbrunn Palace, Palace Gardens (in front of the Neptune Fountain) 1130 Vienna Conductor: Semyon Bychkov Piano: Katia and Marielle Labèque Music Program Georges Bizet - Farandole from L'Arlésienne, Suite No. 2 Hector Berlioz - Rákóczy March from "La Damnation de Faust", op. 24 Francis Poulenc - Concerto for Two Pianos, D minor, FP 61 Maurice Ravel - Daphnis et Chloé, Suite No. 2 Maurice Ravel - Boléro, Ballet for Orchestra Semyon Bychkov Semyon Bychkov's approach to music making combines innate musicality with the rigours of Russian pedagogy. With his time carefully balanced between the concert hall and the opera house, Bychkov conducts repertoire that spans four centuries. Born in Leningrad (St. Petersburg) in 1952, Semyon Bychkov was 20 when he won the Rachmaninov Conducting Competition. Two years later, having been denied his prize of conducting the Leningrad Philharmonic Orchestra, he left the former Soviet Union where, from the age of five, he had been singled out for an extraordinarily privileged education in music. Katia and Marielle Labèque Katia and Marielle Labèque are sibling pianists renowned for the synchronicity and energy of their ensemble playing. Daughters of Ada Cecchi (herself a pupil of Marguerite Long), Katia and Marielle had a childhood filled with music. Their musical ambitions started at an early age and they rose to international fame with their contemporary rendition of Gershwin's Rhapsody in Blue (one of the first gold records in classical music) and have since developed a stunning career with performances worldwide. They perform regularly with the most prestigious orchestras.
Download Wiener Philharmoniker Summer Night Concert Schönbrunn 2016.mp4[720P] 2.07GB Stream 0 Type : Video Codec : H264 - MPEG4 AVC (part 10) (avc1) Resolution : 1280x720 Frame rate : 25 Stream 1 Type : Audio Codec : MPEG AAC Audio (mp4a) Sample rate : 48000 Hz Bit per sample : 16 Bitrate : 1536 kb/s ### Xi'an Travels First Day Bell Tower of Xi'an The Bell Tower of Xi'an, built in 1384 during the early Ming Dynasty, is a symbol of the city of Xi'an and one of the grandest of its kind in China. The Bell Tower also contains several large bronze-cast bells from the Tang Dynasty. The Jingyun Bell, cast in 711 during the Tang Dynasty, is 247 cm high and weighs 6,500 kg. Shaanxi History Museum Shaanxi History Museum, which is located to the northwest of the Giant Wild Goose Pagoda in the ancient city of Xi'an, in the Shaanxi province of China, is one of the first large state museums with modern facilities in China and one of the largest. The museum houses over 370,000 items, including murals, paintings, pottery, coins, as well as bronze, gold, and silver objects. The modern museum was built between 1983 and 2001 and its appearance recalls the architectural style of the Tang Dynasty. Tri-colored Horse in Shaanxi History Museum Tang Dynasty (618-907) - Excavated near the Zhaoling mausoleum, Liquan County. Giant Wild Goose Pagoda Giant Wild Goose Pagoda, or Big Wild Goose Pagoda, is a Buddhist pagoda located in southern Xi'an, Shaanxi province, China. It was built in 652 during the Tang dynasty and originally had five stories. The structure was rebuilt in 704 during the reign of Empress Wu Zetian, and its exterior brick facade was renovated during the Ming dynasty. One of the pagoda's many functions was to hold sutras and figurines of the Buddha that were brought to China from India by the Buddhist translator and traveler Xuanzang.
Second Day Chongyang Palace Chongyang Palace is one of the three birthplaces of Quanzhen Taoism, on a par with the White Cloud Temple in Beijing and the Yongle Palace in Ruicheng, Shaanxi. It is located in Zuan town, Huxian county of Xi'an, Shaanxi Province. The palace is where Wang Chongyang, the founder of the Quanzhen school, lived, and it is also his burial site. The Louguantai Temple The Louguantai Temple in Zhouzhi county, Shaanxi province, about 70 km west of Xi'an, is the place where tradition says that Lao Tze composed the Tao Te Ching. Northwestern Polytechnical University Northwestern Polytechnical University (also known as NPU or NWPU) is a National Key University, directed by the Ministry of Industry and Information Technology of the People's Republic of China, located in Xi'an, Shaanxi, China. Third Day Zaoyuan Revolutionary Site The Zaoyuan Revolutionary Site is in Zaoyuan village, Baota county, Yan'an, Shaanxi. From 1944 to March 1947 the CPC Central Committee, having moved here from Yangjialing, led the large-scale military and civilian production movement, the rectification movement, and preparations in the liberated areas for the 7th National Congress of the Chinese Communist Party. Yan'an Revolutionary Memorial Hall Founded in July 1950 in Yan'an City of Shaanxi Province, the Yan'an Revolutionary Memorial is one of the earliest revolutionary memorials in China. It was previously called the Yan'an Revolutionary Museum, and opened to the public in February 1951. The memorial boasts a collection of 35,000 pieces of cultural relics, 5,500 historical photos and 12,000 volumes of books. Among all, more than 1,700 items belong to Class One and Class Two collections. The exhibits include genuine footballs, sewing machines, old uniforms, weaponry, historical literature, graphs, photos, Marxist books by Lenin, Stalin and Trotsky translated into Chinese and hundreds of communist weapons.
Especially noteworthy is a white horse (it has been made into a specimen) that is said to have carried Mao Zedong. Yan'an Pagoda Hill Yan'an is of great historical importance in relation to the Chinese revolutionary movement, and for that reason Yan'an Pagoda Hill is a must to explore if you want to really enjoy yourself in this remarkable city. It stands to the southeast of the city, and its name is derived from the nine-story pagoda on its summit that was erected during the Tang Dynasty (618-907). Qingliang Mountain of Yan'an Traditional Foods Hula soup (胡辣汤) Spicy big lump of meat (大肉辣子疙瘩) Willow Lane noodles in Chopping Board Street (案板街柳巷面) Water slurry noodles (浆水面) Egg and fermented glutinous rice soup (蛋花醪糟汤) Baitang noodles (摆汤面) Beef cook steamed (牛肉煮馍) Mirror cake (镜糕) Liangpi - Roll dough (擀面皮) Chinese burger (肉夹馍) Mustard pimple (芥菜疙瘩) Huxian soft noodles (户县软面) ### Profiling PHP Applications with XHProf Introduction XHProf is a function-level hierarchical profiler for PHP with a simple HTML-based navigational interface. The raw data collection component is implemented in C (as a PHP extension). Install XHProf on OS X $ brew install php56-xhprof For example, profiling fibonacci.php: <?php xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY); $XHPROF_ROOT = realpath(dirname(__FILE__) .'/..'); include_once $XHPROF_ROOT . "/xhprof-0.9.4/xhprof-0.9.4/xhprof_lib/utils/xhprof_lib.php"; include_once $XHPROF_ROOT . "/xhprof-0.9.4/xhprof-0.9.4/xhprof_lib/utils/xhprof_runs.php"; $xhprof_runs = new XHProfRuns_Default(); function printFibonacci($n) { $first = 0; $second = 1; echo "Fibonacci Series \n"; echo $first . ' ' . $second . ' '; for ($i = 2; $i < $n; $i++) { $third = $first + $second; echo $third . ' '; $first = $second; $second = $third; } } // Function call to print the Fibonacci series up to 30 numbers.
printFibonacci(30); $xhprof_data = xhprof_disable(); $run_id = $xhprof_runs->save_run($xhprof_data, "xhprof_foo"); echo '<a href="<xhprof-ui-address>/xhprof_html/index.php?run='.$run_id.'&source=xhprof_foo" target="_blank">count</a>'; Viewing Callgraphs Generating callgraphs requires the graphviz utility to be installed. Using XHProf and PHP on Windows Download old version releases of PHP from the archives and download XHProf for Windows from windows.php.net. Edit php.ini: [xhprof] extension = ./ext/php_xhprof.dll xhprof.output_dir = ./dev Related Articles Debugging with Xdebug and PhpStorm on OS X Profiling PHP Applications with Xdebug ### Staatskapelle Dresden with Rudolf Buchbinder, National Center for the Performing Arts Founded by Prince Elector Moritz von Sachsen in 1548, it is one of the oldest orchestras in the world and steeped in tradition. Over its long history many distinguished conductors and internationally celebrated instrumentalists have left their mark on this onetime court orchestra. Previous directors include Heinrich Schütz, Johann Adolf Hasse, Carl Maria von Weber and Richard Wagner, who called the ensemble his "miraculous harp". The list of prominent conductors of the last 100 years includes Ernst von Schuch, Fritz Reiner, Fritz Busch, Karl Böhm, Joseph Keilberth, Rudolf Kempe, Otmar Suitner, Kurt Sanderling, Herbert Blomstedt and Giuseppe Sinopoli. The orchestra was directed by Bernard Haitink from 2002 to 2004 and most recently by Fabio Luisi from 2007 to 2010. Principal Conductor since the 2012/2013 season has been Christian Thielemann. The first and only Conductor Laureate in the history of the Staatskapelle, from 1990 until his death in April 2013, was Sir Colin Davis. Myung-Whun Chung has been Principal Guest Conductor since the 2012/2013 season. Richard Strauss and the Staatskapelle were closely linked for more than sixty years.
Nine of the composer's operas were premiered in Dresden, including Salome, Elektra and Der Rosenkavalier, while Strauss's Alpine Symphony was dedicated to the orchestra. Countless other famous composers have written works either dedicated to the orchestra or first performed in Dresden. In 2007 the Staatskapelle reaffirmed this tradition by introducing the annual position of Capell-Compositeur. Following on from Hans Werner Henze, Wolfgang Rihm and Sofia Gubaidulina, György Kurtág will hold this title for the 2015/2016 season. As Capell-Virtuos the pianist Yefim Bronfman will take part in various concerts. The Staatskapelle's home is the Semperoper, where it performs around 260 operas and ballets each season. In addition the ensemble presents another 50 symphonic and chamber concerts in the opera house, as well as playing at various musical events in Dresden's Frauenkirche. As one of the world's most celebrated and popular symphony orchestras, the Staatskapelle regularly travels abroad to the world's leading classical venues. Since 2013 the Staatskapelle Dresden has been the resident orchestra of the Salzburg Easter Festival, whose Artistic Director is none other than Christian Thielemann. The Staatskapelle also does valuable work to support the local region: since October 2008 it has been the patron orchestra of Meetingpoint Music Messiaen (launched on January 15, 2015) in the double city of Görlitz-Zgorzelec. And in September 2010 the orchestra helped found the International Shostakovich Festival in Gohrisch (Saxon Switzerland), which is the only such annual event dedicated to the music and life of Dmitri Shostakovich. Pianist & Conductor: Rudolf Buchbinder Rudolf Buchbinder is not only an eminent pianist firmly established in today's classical music scene, but is also celebrated for his exceptional versatility and recognized as one of today's most insightful musical connoisseurs. Rudolf Buchbinder was born in 1946.
At age 5, he became the youngest student to attend the Vienna Academy of Music. He gave his first public concert at 9 and won a special prize at the Van Cliburn International Piano Competition at 20. Unlike other pianists who gained fame in their prime, Rudolf Buchbinder was not so keen on advancing his career as on devoting himself to perfecting his piano finesse, making him the poster child of those great piano artists who were only granted recognition later in their careers. Rudolf Buchbinder has never ceased to explore new vistas in the field of art. In addition to his distinguished track record as a pianist, he has also shown extraordinary talent in conducting, launching him into the league of Christoph Eschenbach and Daniel Barenboim, those rare masters who are soloist and conductor rolled into one. The epitome of him playing the dual roles is the Beethoven and Mozart piano concerto DVDs recorded with the Vienna Philharmonic, making him the first musician ever in the orchestra's history to record the complete Beethoven cycle in both capacities. Coming next, Rudolf Buchbinder will conduct and work with, among others, the Mariinsky Theatre Orchestra, Orchestra del Maggio Musicale Fiorentino, Barcelona Symphony Orchestra and Württemberg Chamber Orchestra. Following the Beethoven concertos with the Vienna Philharmonic in China which he conducted in 2013, he will bring to China the chamber orchestra consisting of the cream of the Staatskapelle Dresden around the summer of 2016 and once again charm the Chinese audience with his conducting virtuosity. Programme Piano Concerto No.27 in B flat Major, KV 595 - Wolfgang Amadeus Mozart Piano Concerto No.21 in C Major, KV 467 - Wolfgang Amadeus Mozart Intermission Piano Concerto No.
20 in D Minor, KV 466 - Wolfgang Amadeus Mozart Impromptu encore (piano duet): Overture to Die Fledermaus Op.56 - Johann Strauss II ### Go Programs Performance Measurement and Profiling Profiling Basics Before you profile, you must have a stable environment to get repeatable results. The machine must be idle - don't profile on shared hardware, and don't browse the web while waiting for a long benchmark to run. Watch out for power saving and thermal scaling. Avoid virtual machines and shared cloud hosting; they are too noisy for consistent measurements. There is a kernel bug in OS X versions before El Capitan; upgrade, or avoid profiling on OS X. CPU profiling CPU profiling is the most common type of profile. When CPU profiling is enabled, the runtime will interrupt itself every 10ms and record the stack trace of the currently running goroutines. Once the profile is saved to disk, we can analyze it to determine the hottest code paths. The more times a function appears in the profile, the more time that code path is taking as a percentage of the total runtime. Memory profiling Memory profiling records the stack trace at the point a heap allocation is made. Like CPU profiling, it is sample based: by default, memory profiling samples 1 in every 1000 allocations, and this rate can be changed. Stack allocations are assumed to be free and are not tracked in the memory profile. Because memory profiling is sample based, and because it tracks allocations rather than use, it is difficult to use memory profiling to determine your application's overall memory usage. Profiles are analysed with go tool pprof (Dave Cheney's profile library, github.com/pkg/profile, is a convenient way to generate them from code): $ go tool pprof /path/to/your/binary /path/to/your/profile The binary argument must be the binary that produced this profile. The profile argument must be the profile generated by this binary.
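Besides the go test flags used below, the profiler can also be driven directly from code via the standard runtime/pprof package. A minimal sketch of both ideas above (the busyWork workload, the prof.cpu file name, and the chosen MemProfileRate value are illustrative assumptions, not part of any particular project):

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
)

// busyWork burns CPU so the 10ms sampler has something to record.
func busyWork() int {
	n := 0
	for i := 0; i < 50000000; i++ {
		n += i % 7
	}
	return n
}

func main() {
	// Raise the memory profiling rate to sample every allocation instead
	// of the default sampling rate; set this before allocating.
	runtime.MemProfileRate = 1

	f, err := os.Create("prof.cpu")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Start the 10ms stack-trace sampling described above, run the
	// workload, then stop and flush the profile to prof.cpu.
	if err := pprof.StartCPUProfile(f); err != nil {
		panic(err)
	}
	busyWork()
	pprof.StopCPUProfile()

	fi, err := os.Stat("prof.cpu")
	if err != nil {
		panic(err)
	}
	fmt.Println("wrote prof.cpu,", fi.Size(), "bytes")
}
```

The resulting prof.cpu can then be fed to go tool pprof together with the compiled binary, exactly as with profiles produced by go test.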
For example, create a fibonacci folder and put the following files into it.

fibonacci.go:

```go
package fibonacci

// fibonacci returns a closure that yields successive Fibonacci numbers.
func fibonacci() func() int {
	a, b := 0, 1
	return func() int {
		a, b = b, a+b
		return a
	}
}

func loop(j int) {
	f := fibonacci()
	for i := 0; i < j; i++ {
		println(f())
	}
}
```

fibonacci_test.go:

```go
package fibonacci

import (
	"testing"
)

func Test_fibonacci(t *testing.T) {
	loop(10)
}

func Benchmark_fibonacci(b *testing.B) {
	// use b.N for looping
	for i := 0; i < b.N; i++ {
		loop(10)
	}
}
```

In Use

```
$ go test -v -run=^$ -bench=. -benchtime=3s -cpuprofile=prof.cpu -memprofile=prof.mem -memprofilerate=2
```

CPU Profiling

```
$ go tool pprof fibonacci.test prof.cpu
Entering interactive mode (type "help" for commands)
(pprof) top
2.18s of 2.22s total (98.20%)
Dropped 6 nodes (cum <= 0.02s)
      flat  flat%   sum%        cum   cum%
     1.93s 86.94% 86.94%      1.93s 86.94%  runtime.write
     0.05s  2.25% 89.19%         2s 90.09%  runtime.gwrite
     0.04s  1.80% 90.99%      2.21s 99.55%  _/home/xuri/Desktop/fibonacci.loop
     0.04s  1.80% 92.79%      0.07s  3.15%  runtime.printunlock
     0.04s  1.80% 94.59%      0.04s  1.80%  runtime/internal/atomic.Xchg
     0.02s   0.9% 95.50%      0.02s   0.9%  runtime.mallocgc
     0.02s   0.9% 96.40%      0.98s 44.14%  runtime.printuint
     0.02s   0.9% 97.30%      1.95s 87.84%  runtime.writeErr
     0.01s  0.45% 97.75%      0.99s 44.59%  runtime.printint
     0.01s  0.45% 98.20%      0.02s   0.9%  runtime.printlock
(pprof) top --cum
2.06s of 2.22s total (92.79%)
Dropped 6 nodes (cum <= 0.01s)
Showing top 10 nodes out of 19 (cum >= 1.05s)
      flat  flat%   sum%        cum   cum%
         0     0%     0%      2.21s 99.55%  _/home/xuri/Desktop/fibonacci.Benchmark_fibonacci
     0.04s  1.80%  1.80%      2.21s 99.55%  _/home/xuri/Desktop/fibonacci.loop
         0     0%  1.80%      2.21s 99.55%  runtime.goexit
         0     0%  1.80%      2.21s 99.55%  testing.(*B).launch
         0     0%  1.80%      2.21s 99.55%  testing.(*B).runN
     0.05s  2.25%  4.05%         2s 90.09%  runtime.gwrite
     0.02s   0.9%  4.95%      1.95s 87.84%  runtime.writeErr
     1.93s 86.94% 91.89%      1.93s 86.94%  runtime.write
     0.01s  0.45% 92.34%      1.09s 49.10%  runtime.printnl
     0.01s  0.45% 92.79%      1.05s 47.30%  runtime.printstring
(pprof) list loop
Total: 2.22s
ROUTINE ======================== _/home/xuri/Desktop/fibonacci.loop in /home/xuri/Desktop/fibonacci/fibonacci.go
      40ms      2.21s (flat, cum) 99.55% of Total
         .          .      9:        return a
         .          .     10:    }
         .          .     11:}
         .          .     12:
         .          .     13:func loop(j int) {
         .       20ms     14:    f := fibonacci()
         .          .     15:    for i := 0; i < j; i++ {
      40ms      2.19s     16:        println(f())
         .          .     17:    }
         .          .     18:}
(pprof) web
```

The web command writes a graph of the profile data in the SVG format and opens it in a web browser. (There is also a gv command that writes PostScript and opens it in Ghostview. For either command, you need graphviz installed.)

Save the callgraph as a PNG file:

```
$ go tool pprof -png fibonacci.test prof.cpu > prof.cpu.png
```

Getting a heap profile from a URL:

```
$ go tool pprof -png http://localhost:6060/debug/pprof/heap > out.png
```

Visualize a sampling `pb.gz` file:

```
$ go tool pprof -web /path/to/profile.pb.gz
```

Memory Profiling

-inuse_space    Display in-use memory size
-inuse_objects  Display in-use object counts
-alloc_space    Display allocated memory size
-alloc_objects  Display allocated object counts

```
$ go tool pprof -alloc_space fibonacci.test prof.mem
Entering interactive mode (type "help" for commands)
(pprof) top
5200.11kB of 5207.71kB total (99.85%)
Dropped 63 nodes (cum <= 26.04kB)
Showing top 10 nodes out of 15 (cum >= 39kB)
      flat    flat%   sum%        cum   cum%
 5161.56kB   99.11% 99.11%  5161.56kB 99.11%  _/home/xuri/Desktop/fibonacci.fibonacci
      38kB    0.73% 99.84%       38kB  0.73%  regexp.(*bitState).reset
    0.42kB  0.0081% 99.85%    40.21kB  0.77%  testing.RunBenchmarks
    0.11kB  0.0021% 99.85%       39kB  0.75%  regexp.(*Regexp).doExecute
    0.02kB  0.0003% 99.85%    43.18kB  0.83%  testing.(*M).Run
         0      0%  99.85%  5161.56kB 99.11%  _/home/xuri/Desktop/fibonacci.Benchmark_fibonacci
         0      0%  99.85%  5161.56kB 99.11%  _/home/xuri/Desktop/fibonacci.loops
         0      0%  99.85%    43.18kB  0.83%  main.main
         0      0%  99.85%    41.44kB   0.8%  main.matchString
         0      0%  99.85%       39kB  0.75%  regexp.(*Regexp).MatchString
(pprof) top --cum
5161.58kB of 5207.71kB total (99.11%)
Dropped 63 nodes (cum <= 26.04kB)
Showing top 10 nodes out of 15 (cum >= 41.44kB)
      flat    flat%   sum%        cum   cum%
         0      0%     0%   5204.77kB 99.94%  runtime.goexit
         0      0%     0%   5161.56kB 99.11%  _/home/xuri/Desktop/fibonacci.Benchmark_fibonacci
 5161.56kB   99.11% 99.11%  5161.56kB 99.11%  _/home/xuri/Desktop/fibonacci.fibonacci
         0      0%  99.11%  5161.56kB 99.11%  _/home/xuri/Desktop/fibonacci.loops
         0      0%  99.11%  5161.56kB 99.11%  testing.(*B).launch
         0      0%  99.11%  5161.56kB 99.11%  testing.(*B).runN
         0      0%  99.11%    43.18kB  0.83%  main.main
         0      0%  99.11%    43.18kB  0.83%  runtime.main
    0.02kB  0.0003% 99.11%    43.18kB  0.83%  testing.(*M).Run
         0      0%  99.11%    41.44kB   0.8%  main.matchString
(pprof) list loop
Total: 5.09MB
ROUTINE ======================== _/home/xuri/Desktop/fibonacci.loops in /home/xuri/Desktop/fibonacci/fibonacci.go
         0     5.04MB (flat, cum) 99.11% of Total
         .          .      9:        return a
         .          .     10:    }
         .          .     11:}
         .          .     12:
         .          .     13:func loops(j int) {
         .     5.04MB     14:    f := fibonacci()
         .          .     15:    for i := 0; i < j; i++ {
         .          .     16:        println(f())
         .          .     17:    }
         .          .     18:}
(pprof) disasm loop
Total: 5.09MB
ROUTINE ======================== _/home/xuri/Desktop/fibonacci.loops
         0     5.04MB (flat, cum) 99.11% of Total
         .          .     47dc20: FS MOVQ FS:0xfffffff8, CX
         .          .     47dc29: CMPQ 0x10(CX), SP
         .          .     47dc2d: JBE 0x47dc9e
         .          .     47dc2f: SUBQ $0x20, SP
         .     5.04MB     47dc33: CALL _/home/xuri/Desktop/fibonacci.fibonacci(SB)
         .          .     47dc38: MOVQ 0(SP), BX
         .          .     47dc3c: MOVQ BX, 0x18(SP)
         .          .     47dc41: XORL AX, AX
         .          .     47dc43: MOVQ AX, 0x8(SP)
         .          .     47dc48: MOVQ 0x28(SP), BP
         .          .     47dc4d: CMPQ BP, AX
         .          .     47dc50: JGE 0x47dc99
         .          .     47dc52: MOVQ 0x18(SP), DX
         .          .     47dc57: MOVQ 0(DX), BX
         .          .     47dc5a: CALL BX
         .          .     47dc5c: MOVQ 0(SP), BX
         .          .     47dc60: MOVQ BX, 0x10(SP)
         .          .     47dc65: CALL runtime.printlock(SB)
         .          .     47dc6a: MOVQ 0x10(SP), BX
         .          .     47dc6f: MOVQ BX, 0(SP)
         .          .     47dc73: CALL runtime.printint(SB)
         .          .     47dc78: CALL runtime.printnl(SB)
         .          .     47dc7d: CALL runtime.printunlock(SB)
         .          .     47dc82: MOVQ 0x8(SP), AX
         .          .     47dc87: INCQ AX
         .          .     47dc8a: MOVQ AX, 0x8(SP)
         .          .     47dc8f: MOVQ 0x28(SP), BP
         .          .     47dc94: CMPQ BP, AX
         .          .     47dc97: JL 0x47dc52
         .          .     47dc99: ADDQ $0x20, SP
         .          .     47dc9d: RET
         .          .     47dc9e: CALL runtime.morestack_noctxt(SB)
         .          .     47dca3: JMP _/home/xuri/Desktop/fibonacci.loops(SB)
         .          .     47dca8: INT $0x3
         .          .     47dca9: INT $0x3
         .          .     47dcaa: INT $0x3
         .          .     47dcab: INT $0x3
         .          .     47dcac: INT $0x3
         .          .     47dcad: INT $0x3
         .          .     47dcae: INT $0x3
(pprof) web
```

Save the callgraph as a PNG file:

```
$ go tool pprof -png fibonacci.test prof.mem > prof.mem.png
```

Stochastic flame graph profiler for Go programs

Install the stack trace visualizer FlameGraph:

$ git clone https://github.com/brendangregg/FlameGraph.git
$ cp FlameGraph/flamegraph.pl /usr/local/bin
$ flamegraph.pl -help
USAGE: /usr/local/bin/flamegraph.pl [options] infile > outfile.svg
--title       # change title text
--width       # width of image (default 1200)
--height      # height of each frame (default 16)
--minwidth    # omit smaller functions (default 0.1 pixels)
--fonttype    # font type (default "Verdana")
--fontsize    # font size (default 12)
--countname   # count type label (default "samples")
--nametype    # name type label (default "Function:")
--colors      # set color palette.
choices are: hot (default), mem, io,
# wakeup, chain, java, js, perl, red, green, blue, aqua,
# yellow, purple, orange
--hash        # colors are keyed by function name hash
--cp          # use consistent palette (palette.map)
--reverse     # generate stack-reversed flame graph
--inverted    # icicle graph
--negate      # switch differential hues (blue<->red)
--help        # this message
eg,
/usr/local/bin/flamegraph.pl --title="Flame Graph: malloc()" trace.txt > graph.svg
Install the stochastic profiling tool go-torch:
$ go get -v github.com/uber/go-torch
$ go-torch fibonacci.test prof.cpu
INFO[10:30:18] Run pprof command: go tool pprof -raw -seconds 30 fibonacci.test prof.cpu
INFO[10:30:19] Writing svg to torch.svg
### Create Presentation with Go Use the package golang.org/x/tools/present. Download and install the package and its dependencies: $ go get golang.org/x/tools/present Build the binary: $ go build golang.org/x/tools/present Write a .slide file in the format specified in the GoDoc: godoc.org/golang.org/x/tools/present. Run present to start the presentation, then open a web browser and visit http://127.0.0.1:3999. ### Gopher China 2016 Gopher China has been a major conference of the Go community in China since 2015. This second edition took place at the Beijing Asia Hotel on April 16th to 17th, and I joined the conference on the second day. There were seven speeches that day: Dave Cheney - How to Write high performance application Xiaowei Wu - Aliyun, Go for CDN Services Shiwei Xu - Qiniu, MicroService Management Hongliang Sun - DaoCloud, Distributed Docker System Bushuang Gao - Xiaomi, Xiaomi's Best Practice of Devops Platform in Go Break ZhaoChang - Grabtaxi Go testing CI and deploy Jianliang Su - Netease, Go for Netease's Upload Acceleration in WAN The slides for these speeches are available from github.com/gopherchina. As a Go novice, my impression is that Go builds in many powerful standard packages and makes it easy to import third-party packages from a package source.
It feels like a premium version of the C language, since Go was also designed around a simple philosophy. Go has community support from many developers and great corporations such as Google, so I think the use of Go will become more widespread. ### Running Apache Spark on YARN with Docker Introduction Apache Spark is a fast and general engine for large-scale data processing. Originally developed at the University of California, Berkeley's AMPLab, the Spark codebase was later donated to the Apache Software Foundation, which has maintained it since. Spark provides an interface for programming entire clusters with implicit data parallelism and fault-tolerance. Environment Versions Ubuntu Server 14.04.4 LTS (Trusty Tahr) 64-bit Docker 1.10.3 Hadoop 2.6.0 Spark 1.6.0 Install Docker Update package information, ensure that APT works with the https method, and that CA certificates are installed. $ apt-get update $ apt-get install apt-transport-https ca-certificates Add the new GPG key. $ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D Open the /etc/apt/sources.list.d/docker.list file in your favorite editor. If the file doesn't exist, create it. Remove any existing entries. Add an entry for your Ubuntu operating system. deb https://apt.dockerproject.org/repo ubuntu-trusty main Save and close the /etc/apt/sources.list.d/docker.list file. Update the APT package index and install Docker: $ sudo apt-get update && sudo apt-get install docker-engine Running docker without sudo $ sudo gpasswd -a ${USER} docker Restart the docker daemon. $ sudo service docker restart Verify docker is installed correctly. $ docker run hello-world Pull Spark Images Pull the Spark image from Docker Hub: $ docker pull sequenceiq/spark:1.6.0 Create Containers We will build a multi-node cluster in which one container is the master and the others are slave1 and slave2; create three containers in different terminals.
Create master: $ docker run --name master -it -p 8088:8088 -p 8042:8042 -p 8085:8080 -p 4040:4040 -p 7077:7077 -p 2022:22 -v /data:/data -h master sequenceiq/spark:1.6.0 bash Create slaves: $ docker run --name slave1 -it -h slave1 sequenceiq/spark:1.6.0 bash $ docker run --name slave2 -it -h slave2 sequenceiq/spark:1.6.0 bash Hadoop Configuration Stop the Hadoop multi-node cluster on all nodes: # $HADOOP_COMMON_HOME/sbin/stop-dfs.sh && $HADOOP_COMMON_HOME/sbin/stop-yarn.sh Update /etc/hosts on every node: # vi /etc/hosts Put the alias to the ip addresses of all the nodes: 172.17.0.2 master # IP address of the master node 172.17.0.3 slave1 # IP address of the slave1 node 172.17.0.4 slave2 # IP address of the slave2 node Configure the Hadoop slaves file on all nodes: # vi $HADOOP_CONF_DIR/slaves slave1 slave2 Create name, data and tmp folders for Hadoop on all nodes: # mkdir -pv $HADOOP_COMMON_HOME/dfs/name # mkdir -pv $HADOOP_COMMON_HOME/dfs/data # mkdir -pv $HADOOP_COMMON_HOME/tmp Configure the Hadoop core-site.xml file on all nodes: # vi $HADOOP_CONF_DIR/core-site.xml Change sandbox to master and add a hadoop.tmp.dir property: <configuration> <property> <name>fs.defaultFS</name> <value>hdfs://master:9000</value> </property> <property> <name>io.file.buffer.size</name> <value>131702</value> </property> <property> <name>hadoop.tmp.dir</name> <value>/usr/local/hadoop/tmp</value> </property> </configuration> Configure the Hadoop hdfs-site.xml file on all nodes: $ vi $HADOOP_CONF_DIR/hdfs-site.xml Set the dfs.replication value from 1 to 3: <configuration> <property> <name>dfs.replication</name> <value>3</value> </property> <property> <name>dfs.namenode.name.dir</name> <value>/usr/local/hadoop/dfs/name</value> </property> <property> <name>dfs.datanode.data.dir</name> <value>/usr/local/hadoop/dfs/data</value> </property> <property> <name>dfs.namenode.secondary.http-address</name> <value>master:9001</value> </property> <property> <name>dfs.webhdfs.enabled</name>
<value>true</value> </property> </configuration> Configuration Hadoop mapred-site.xml file on all nodes: $ vi $HADOOP_CONF_DIR/mapred-site.xml Change sandbox to master: <configuration> <property> <name>mapreduce.framework.name</name> <value>yarn</value> </property> <property> <name>mapreduce.jobhistory.address</name> <value>master:10020</value> </property> <property> <name>mapreduce.jobhistory.webapp.address</name> <value>master:19888</value> </property> </configuration> Configuration Hadoop yarn-site.xml file on all nodes: $ vi $HADOOP_CONF_DIR/yarn-site.xml Change sandbox to master: <configuration> <property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value> </property> <property> <name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name> <value>org.apache.hadoop.mapred.ShuffleHandler</value> </property> <property> <name>yarn.resourcemanager.address</name> <value>master:8032</value> </property> <property> <name>yarn.resourcemanager.scheduler.address</name> <value>master:8030</value> </property> <property> <name>yarn.resourcemanager.resource-tracker.address</name> <value>master:8031</value> </property> <property> <name>yarn.resourcemanager.admin.address</name> <value>master:8033</value> </property> <property> <name>yarn.resourcemanager.webapp.address</name> <value>master:8088</value> </property> <property> <name>yarn.nodemanager.resource.memory-mb</name> <value>2048</value> </property> <property> <name>yarn.nodemanager.resource.cpu-vcores</name> <value>1</value> </property> <property> <name>yarn.application.classpath</name> <value>/usr/local/hadoop/etc/hadoop, /usr/local/hadoop/share/hadoop/common/*, /usr/local/hadoop/share/hadoop/common/lib/*, /usr/local/hadoop/share/hadoop/hdfs/*, /usr/local/hadoop/share/hadoop/hdfs/lib/*, /usr/local/hadoop/share/hadoop/mapreduce/*, /usr/local/hadoop/share/hadoop/mapreduce/lib/*, /usr/local/hadoop/share/hadoop/yarn/*, /usr/local/hadoop/share/hadoop/yarn/lib/*</value> </property> <property> 
<description> Number of seconds after an application finishes before the nodemanager's DeletionService will delete the application's localized file directory and log directory. To diagnose Yarn application problems, set this property's value large enough (for example, to 600 = 10 minutes) to permit examination of these directories. After changing the property's value, you must restart the nodemanager in order for it to have an effect. The roots of Yarn applications' work directories is configurable with the yarn.nodemanager.local-dirs property (see below), and the roots of the Yarn applications' log directories is configurable with the yarn.nodemanager.log-dirs property (see also below). </description> <name>yarn.nodemanager.delete.debug-delay-sec</name> <value>600</value> </property> </configuration> Format NameNode on master node: # hdfs namenode -format Spark Configuration Configuration Spark slaves file on all nodes: # cp $SPARK_HOME/conf/slaves.template $SPARK_HOME/conf/slaves && vi $SPARK_HOME/conf/slaves Add following lines in file: slave1 slave2 Configuration Spark spark-env.sh file on all node: # cp $SPARK_HOME/conf/spark-env.sh.template $SPARK_HOME/conf/spark-env.sh && vi $SPARK_HOME/conf/spark-env.sh Add following lines in file: export JAVA_HOME=/usr/java/default export SPARK_MASTER_IP=master export SPARK_WORKER_CORES=1 export SPARK_WORKER_INSTANCES=1 export SPARK_MASTER_PORT=7077 export SPARK_WORKER_MEMORY=1g export MASTER=spark://${SPARK_MASTER_IP}:${SPARK_MASTER_PORT} Configuration Spark core-site.xml file on all nodes: # vi $SPARK_HOME/yarn-remote-client/core-site.xml Change sandbox to master: <?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <configuration> <property> <name>fs.default.name</name> <value>hdfs://master:9000</value> </property> <property> <name>dfs.client.use.legacy.blockreader</name> <value>true</value> </property> </configuration> Configuration Spark yarn-site.xml file on all nodes: # vi 
$SPARK_HOME/yarn-remote-client/yarn-site.xml Change sandbox to master: <configuration> <property> <name>yarn.resourcemanager.scheduler.address</name> <value>master:8030</value> </property> <property> <name>yarn.resourcemanager.address</name> <value>master:8032</value> </property> <property> <name>yarn.resourcemanager.webapp.address</name> <value>master:8088</value> </property> <property> <name>yarn.resourcemanager.resource-tracker.address</name> <value>master:8031</value> </property> <property> <name>yarn.resourcemanager.admin.address</name> <value>master:8033</value> </property> <property> <name>yarn.application.classpath</name> <value>/usr/local/hadoop/etc/hadoop, /usr/local/hadoop/share/hadoop/common/*, /usr/local/hadoop/share/hadoop/common/lib/*, /usr/local/hadoop/share/hadoop/hdfs/*, /usr/local/hadoop/share/hadoop/hdfs/lib/*, /usr/local/hadoop/share/hadoop/mapreduce/*, /usr/local/hadoop/share/hadoop/mapreduce/lib/*, /usr/local/hadoop/share/hadoop/yarn/*, /usr/local/hadoop/share/hadoop/yarn/lib/*, /usr/local/hadoop/share/spark/*</value> </property> </configuration> Start the Hadoop multi-node cluster on the master node: # $HADOOP_COMMON_HOME/sbin/start-dfs.sh && $HADOOP_COMMON_HOME/sbin/start-yarn.sh Track/Monitor/Verify the Hadoop cluster on the master node: # jps 1650 SecondaryNameNode 2043 Jps 1484 NameNode 1789 ResourceManager Track/Monitor/Verify the Hadoop cluster on the slave nodes: # jps 1343 DataNode 1434 NodeManager 1530 Jps Start Apache Spark on the master node: # $SPARK_HOME/sbin/start-all.sh Track/Monitor/Verify the Apache Spark cluster on the master node: # jps 2148 Jps 2084 Master 1650 SecondaryNameNode 1484 NameNode 1789 ResourceManager Track/Monitor/Verify the Apache Spark cluster on the slave nodes: # jps 1733 Jps 1343 DataNode 1434 NodeManager Monitor the Hadoop ResourceManager and Hadoop NameNode via the web UI. The ResourceManager is at http://master:8088; because I run Docker on an Ubuntu server whose IP address is 172.16.136.128, I can also visit http://172.16.136.128:8088: Apache
Spark Web UI http://master:8080, and I can also visit http://172.16.136.128:8085: Issues If you get the following error when starting Apache Spark on the master node with the $SPARK_HOME/sbin/start-all.sh command: slave2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave2.out slave1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave1.out slave2: failed to launch org.apache.spark.deploy.worker.Worker: slave2: at java.lang.ClassLoader.loadClass(libgcj.so.10) slave2: at gnu.java.lang.MainThread.run(libgcj.so.10) slave2: full log in /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave2.out slave1: failed to launch org.apache.spark.deploy.worker.Worker: slave1: at java.lang.ClassLoader.loadClass(libgcj.so.10) slave1: at gnu.java.lang.MainThread.run(libgcj.so.10) slave1: full log in /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave1.out Check the OS Java version and list the old Java packages: # echo $JAVA_HOME /usr/java/default # rpm -qa | grep java java_cup-0.10k-5.el6.x86_64 gcc-java-4.4.7-16.el6.x86_64 java-1.5.0-gcj-devel-1.5.0.0-29.1.el6.x86_64 R-java-devel-3.2.3-1.el6.x86_64 R-java-3.2.3-1.el6.x86_64 java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64 List the installed JDK: # rpm -qa | grep jdk jdk-1.7.0_51-fcs.x86_64 Remove the old Java versions: # rpm -e --nodeps java_cup-0.10k-5.el6.x86_64 # rpm -e --nodeps java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64 Related Articles Setup Hadoop on Ubuntu (Single-Node Cluster) Setup Hadoop on Ubuntu (Multi-Node Cluster) ### Run Docker from Behind Proxy Configuring the Proxy Settings OS X Follow the instructions in Setup Docker on OS X, past the docker run hello-world step. From the installation documentation we know that docker-machine actually runs in a VirtualBox VM within OS X, wrapping and hiding the older boot2docker code.
So no existing proxy configuration settings apply, and you have to duplicate them as described next. $ docker-machine ssh default Now the command prompt will say something like: docker@default:~$ We need root access: $ sudo -s Now the command prompt will say something like: root@default:/home/docker# Now configure the proxy: echo "export HTTP_PROXY=http://<proxy_address>:[port]" >> /var/lib/boot2docker/profile echo "export HTTPS_PROXY=http://<proxy_address>:[port]" >> /var/lib/boot2docker/profile For verification: # cat /var/lib/boot2docker/profile Exit out of the ssh session: # exit $ exit Restart docker-machine: $ docker-machine restart default Ubuntu Execute the following commands (note that a plain sudo echo ... >> file would fail, because the redirection is performed by your unprivileged shell; use sudo tee -a instead): $ echo "export HTTP_PROXY=http://<proxy_address>:[port]" | sudo tee -a /etc/default/docker $ echo "export HTTPS_PROXY=http://<proxy_address>:[port]" | sudo tee -a /etc/default/docker Restart Docker: $ sudo service docker restart CentOS 6 Execute the following commands: $ echo "export HTTP_PROXY=http://<proxy_address>:[port]" | sudo tee -a /etc/sysconfig/docker $ echo "export HTTPS_PROXY=http://<proxy_address>:[port]" | sudo tee -a /etc/sysconfig/docker CentOS 7 Execute the following commands to create the docker.service.d folder and edit the http-proxy.conf file: $ sudo mkdir /etc/systemd/system/docker.service.d $ sudo vim /etc/systemd/system/docker.service.d/http-proxy.conf Now configure the proxy: [Service] Environment="HTTP_PROXY=http://<proxy_address>:[port]" Environment="HTTPS_PROXY=http://<proxy_address>:[port]" Reload the daemon and restart Docker: $ sudo systemctl daemon-reload $ sudo systemctl restart docker For verification: $ sudo systemctl show docker --property Environment ### Setup Hadoop on Ubuntu (Multi-Node Cluster) Running Hadoop on Ubuntu Linux (Multi-Node Cluster) From single-node clusters to a multi-node cluster We will build a multi-node cluster by merging three or more single-node clusters into one multi-node cluster, in which one Ubuntu box becomes the designated master and the other boxes become slaves.
Environment Versions Ubuntu 16.04 LTS Xenial Xerus 64-bit Server Edition Hadoop 2.7.2 Prerequisites Configure single-node clusters first; here we have used three single-node clusters. Shutdown each single-node cluster with the following command: hduser1@master:~$ $HADOOP_HOME/sbin/stop-all.sh Networking The easiest way is to put all three machines in the same network with regard to hardware and software configuration. Update /etc/hosts on all machines. Put the alias to the ip addresses of all the machines. Here we are creating a cluster of 3 machines: one is master, one is slave1 and the other is slave2: hduser1@master:$ sudo vim /etc/hosts Add the following lines for the three-node cluster 172.16.136.140 master # IP address of the master node 172.16.136.141 slave1 # IP address of the slave1 node 172.16.136.142 slave2 # IP address of the slave2 node SSH access The hduser1 user on the master (ssh hduser1@master) must be able to connect: to its own user account on the master - i.e. ssh master in this context. to the hduser1 user account on the slaves (i.e. ssh hduser1@slave1) via a password-less SSH login. Set up password-less SSH login between the cluster nodes: hduser1@master:~$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser1@slave1 hduser1@master:~$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser1@slave2 Connect with user hduser1 from the master to the user account hduser1 on slave1 and slave2. From master to master hduser1@master:~$ ssh master From master to slave1 hduser1@master:~$ ssh slave1 From slave1 to slave2 hduser1@slave1:~$ ssh slave2 Hadoop Cluster Overview This will describe how to configure one Ubuntu box as a master node and the other Ubuntu boxes as slave nodes. Configuration $HADOOP_HOME/etc/hadoop/masters The machine on which sbin/start-dfs.sh is running will become the primary NameNode. This file should be updated on all the nodes.
Create the masters file in the $HADOOP_HOME/etc/hadoop/ directory: $ sudo vim $HADOOP_HOME/etc/hadoop/masters Add the following line master $HADOOP_HOME/etc/hadoop/slaves This file should be updated on all the nodes. Open the slaves file in the $HADOOP_HOME/etc/hadoop/ directory: $ sudo vim $HADOOP_HOME/etc/hadoop/slaves Add the following lines (remove localhost) slave1 slave2 $HADOOP_HOME/etc/hadoop/*-site.xml (All nodes.) Open this file in the $HADOOP_HOME/etc/hadoop/ directory: $ sudo vim $HADOOP_HOME/etc/hadoop/core-site.xml Change the fs.default.name parameter (in $HADOOP_HOME/etc/hadoop/core-site.xml), which specifies the NameNode (the HDFS master) host and port. $HADOOP_HOME/etc/hadoop/core-site.xml (All nodes.) <property> <name>hadoop.tmp.dir</name> <value>/app/hadoop/tmp</value> <description>A base for other temporary directories.</description> </property> <property> <name>fs.default.name</name> <value>hdfs://master:54310</value> <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.</description> </property> $HADOOP_HOME/etc/hadoop/mapred-site.xml (All nodes.) Open this file in the $HADOOP_HOME/etc/hadoop/ directory $ sudo vim $HADOOP_HOME/etc/hadoop/mapred-site.xml Change the mapred.job.tracker parameter (in $HADOOP_HOME/etc/hadoop/mapred-site.xml), which specifies the JobTracker (MapReduce master) host and port and add mapred.framework.name property. $HADOOP_HOME/etc/hadoop/mapred-site.xml (All nodes.) <property> <name>mapred.job.tracker</name> <value>master:54311</value> <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. 
</description> </property> <property> <name>mapred.framework.name</name> <value>yarn</value> </property> $HADOOP_HOME/etc/hadoop/hdfs-site.xml (All nodes.) Open this file in the $HADOOP_HOME/etc/hadoop/ directory $ sudo vim $HADOOP_HOME/etc/hadoop/hdfs-site.xml Change the dfs.replication parameter (in $HADOOP_HOME/etc/hadoop/hdfs-site.xml) which specifies the default block replication. We have two nodes as slave available, so we set dfs.replication to 2. Changes to be like this: <property> <name>dfs.replication</name> <value>2</value> <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time. </description> </property> Paste the following between <configuration></configuration> in file $HADOOP_HOME/etc/hadoop/yarn-site.xml: <property> <name>yarn.resourcemanager.resource-tracker.address</name> <value>master:8025</value> </property> <property> <name>yarn.resourcemanager.scheduler.address</name> <value>master:8035</value> </property> <property> <name>yarn.resourcemanager.address</name> <value>master:8050</value> </property> Applying Master node specific Hadoop configuration: (Only for master nodes) These are some configuration to be applied over Hadoop master nodes (Since we have only one master node it will be applied to only one master node.) Remove existing Hadoop data folder (which was created while single-cluster hadoop setup.) 
hduser1@master:~$ sudo rm -rf /app/hadoop/tmp Recreate the same (/app/hadoop/tmp) directory and create the NameNode (/app/hadoop/tmp/hdfs/namenode) directory inside it: hduser1@master:~$ sudo mkdir -pv /app/hadoop/tmp/hdfs/namenode Make hduser1 the owner of that directory: hduser1@master:~$ sudo chown hduser1:hadoop_group -R /app/hadoop/tmp/ Applying Slave node specific Hadoop configuration (Only for slave nodes) Since we have two slave nodes, we will be applying the following changes over the slave1 and slave2 nodes: Remove the existing Hadoop data folder (which was created during the single-node hadoop setup) $ sudo rm -rf /app/hadoop/tmp Create the same (/app/hadoop/tmp) folder, and inside this folder create the DataNode (/app/hadoop/tmp/hdfs/datanode) directory: $ sudo mkdir -pv /app/hadoop/tmp/hdfs/datanode Make hduser1 the owner of that directory $ sudo chown hduser1:hadoop_group -R /app/hadoop/tmp/ Formatting the HDFS filesystem via the NameNode (Only for master nodes) Format the cluster's HDFS file system $ hdfs namenode -format Starting the multi-node cluster (Only for master nodes) hduser1@master:~$ start-dfs.sh && start-yarn.sh By this command the NameNode daemon is started on the master, and DataNode daemons are started on all slaves (here: slave1 and slave2). Track/Monitor/Verify Hadoop cluster (Run on any Node) Verify the Hadoop daemons on the master by running the following commands hduser1@master:~$ jps 7104 Jps 6386 SecondaryNameNode 6555 ResourceManager 6158 NameNode Verify the Hadoop daemons on any slave (here: slave1 and slave2); DataNode and NodeManager should be running: $ jps 1344 DataNode 1653 Jps 1463 NodeManager Monitor the Hadoop ResourceManager and Hadoop NameNode via the web UI. ResourceManager: http://master:8088 Note: The Job Tracker and Task Tracker concepts are different in Hadoop YARN. In the new version of Hadoop we can monitor jobs being executed at the ResourceManager.
http://localhost:50070 - web UI of the NameNode daemon Datanode Information Create input directory on HDFS: hduser1@master:~$ hadoop fs -mkdir /input Execute example program: hduser1@master:~$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples*.jar wordcount /input /output Check output directory: hduser1@master:~$ hadoop fs -ls /output ### OCserv on Ubuntu for Cisco AnyConnect Client Introduction OCserv is the OpenConnect VPN server. Its purpose is to be a secure, small, fast and configurable VPN server. It implements the OpenConnect SSL VPN protocol, and has also (currently experimental) compatibility with clients using the AnyConnect SSL VPN protocol. The OpenConnect protocol provides a dual TCP/UDP VPN channel, and uses the standard IETF security protocols to secure it. The server is implemented primarily for the GNU/Linux platform but its code is designed to be portable to other UNIX variants as well. From Ubuntu 16.04 onward, OCserv is included in the standard Ubuntu repositories, so you do not need to compile it from source. In this tutorial the iOS 12.2 client, which could be an iPad or an iPhone, will connect to the VPN server using the Cisco AnyConnect VPN client. 
Install packages on server

Log on to your server and install the OCserv build dependencies:

$ sudo apt-get install build-essential autogen pkg-config
$ sudo apt-get install libtalloc-dev libreadline-dev libpam0g-dev libhttp-parser-dev libpcl1-dev
$ sudo apt-get install libgnutls28-dev libev-dev
$ sudo apt-get install libprotobuf-c-dev libhttp-parser-dev
$ sudo apt-get install libgeoip-dev # required on Debian 9
$ sudo apt-get install libseccomp-dev

We will also need the GnuTLS package, since we use the GnuTLS utilities to generate our public key infrastructure (keys and certificates):

$ sudo apt-get install gnutls-bin

Build and install

$ wget ftp://ftp.infradead.org/pub/ocserv/ocserv-0.12.3.tar.xz
$ tar Jxvf ocserv-0.12.3.tar.xz
$ cd ocserv-0.12.3
$ ./configure --prefix=/usr --sysconfdir=/etc
$ make && sudo make install

We can use self-signed certificates or a commercial certificate purchased from a CA provider, such as Comodo, StartSSL, or WoSign.

Make the CA certificate and server certificate

The GnuTLS certificate tool (certtool) allows you to specify the fields for your certificates in a configuration template file. Start by creating a configuration template file for your Certificate Authority (CA) certificate:

$ cd /etc/ocserv
$ sudo vim ca.tmpl

Press the I key on your keyboard to enter insert mode. Enter the following fields into the CA configuration file, customizing the values as you prefer:

cn = "My CA"
organization = "My Org"
serial = 1
expiration_days = 3650
ca
signing_key
cert_signing_key
crl_signing_key

When you have finished entering the above, escape from insert mode, write the file to disk, and quit the editor.
Now generate a key and certificate for your CA, using the CA configuration template file you just created:

$ sudo certtool --generate-privkey --outfile ca-key.pem
$ sudo certtool --generate-self-signed --load-privkey ca-key.pem \
  --template ca.tmpl --outfile ca-cert.pem

Now create a server certificate template file:

$ sudo vim server.tmpl

Press the I key on your keyboard to enter insert mode. Enter the following fields into the server configuration file. Note that in the common name (cn) field, you must specify your actual server IP address or hostname (shown as vpn.xuri.me in the example that follows):

cn = "vpn.xuri.me"
organization = "My Org"
expiration_days = 3650
signing_key
encryption_key
tls_www_server

When you have finished entering the above, escape from insert mode, write the file to disk, and quit the editor. Generate the server key and certificate, using the configuration template file:

$ sudo certtool --generate-privkey --outfile server-key.pem
$ sudo certtool --generate-certificate --load-privkey server-key.pem \
  --load-ca-certificate ca-cert.pem --load-ca-privkey ca-key.pem \
  --template server.tmpl --outfile server-cert.pem

Use a commercial certificate

For example, I use the WoSign free SSL certificate, which gives me two files: 1_vpn.xuri.me_bundle.crt and 2_vpn.xuri.me.key. Convert the .crt certificate to .pem format:

$ openssl x509 -in 1_vpn.xuri.me_bundle.crt -out server-cert.pem -outform PEM

Convert the .key file to .pem format:

$ cat 2_vpn.xuri.me.key > server-key.pem

Put server-cert.pem and server-key.pem on the path /etc/ocserv/, and set the file permissions to 600.
If you use a CA certificate issued by StartSSL, you get a certificate file cert.crt; in some cases you should build the certificate chain by merging the sub (intermediate) certificate and the root certificate, like this:

$ wget http://cert.startssl.com/certs/ca.pem
$ wget http://cert.startssl.com/certs/sub.class1.server.ca.pem
$ cat cert.crt sub.class1.server.ca.pem ca.pem > server-cert.pem

Generate certificates with Let's Encrypt

Confirm that the port in the file /lib/systemd/system/ocserv.socket is not used by another program, and generate certificates with certbot:

$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install certbot
$ sudo certbot certonly

Saving debug log to /var/log/letsencrypt/letsencrypt.log

How would you like to authenticate with the ACME CA?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: Spin up a temporary webserver (standalone)
2: Place files in webroot directory (webroot)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel):

Select 1 and input your domain name; the certificate files are located at /etc/letsencrypt/live/vpn.xuri.me/fullchain.pem and /etc/letsencrypt/live/vpn.xuri.me/privkey.pem.
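The cat command in the StartSSL example above builds the chain by plain concatenation in a fixed order: leaf certificate first, then the intermediate, then the root CA. As a hedged illustration (not part of the original post), the same step in Python, with placeholder PEM bodies standing in for the real files:

```python
# Sketch: build server-cert.pem by concatenating PEM files in chain order
# (leaf certificate first, then intermediate, then root CA), mirroring
#   cat cert.crt sub.class1.server.ca.pem ca.pem > server-cert.pem
import tempfile
from pathlib import Path

def build_chain(out_path, *pem_paths):
    """Concatenate PEM files into one chain file with 600 permissions."""
    chain = "".join(Path(p).read_text() for p in pem_paths)
    out = Path(out_path)
    out.write_text(chain)
    out.chmod(0o600)  # keep certificate/key material private, as the post recommends
    return chain

# Demo with placeholder PEM bodies (real files contain base64-encoded DER):
with tempfile.TemporaryDirectory() as d:
    parts = {"cert.crt": "leaf",
             "sub.class1.server.ca.pem": "intermediate",
             "ca.pem": "root"}
    for name, body in parts.items():
        Path(d, name).write_text(
            f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----\n")
    chain = build_chain(Path(d, "server-cert.pem"),
                        Path(d, "cert.crt"),
                        Path(d, "sub.class1.server.ca.pem"),
                        Path(d, "ca.pem"))
```

The order matters: TLS clients expect the server's own certificate first, followed by the certificates needed to reach a trusted root.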
Configure the OpenConnect VPN server

Edit the OCserv sample configuration file that is provided in /etc/ocserv:

$ sudo vim ocserv.conf

Use the editor to comment out (#) the default values and replace them with those shown in the example that follows:

#auth = "pam[gid-min=1000]"
auth = "plain[passwd=/etc/ocserv/ocpasswd]"
#server-cert = /etc/ssl/certs/ssl-cert-snakeoil.pem
#server-key = /etc/ssl/private/ssl-cert-snakeoil.key
server-cert = /etc/ocserv/server-cert.pem
server-key = /etc/ocserv/server-key.pem
#try-mtu-discovery = false
try-mtu-discovery = true
default-domain = vpn.xuri.me
#dns = 192.168.1.2
dns = 8.8.8.8
# Comment out all route fields
#route = 10.10.10.0/255.255.255.0
#route = 192.168.0.0/255.255.0.0
#route = fef4:db8:1000:1001::/64
#no-route = 192.168.5.0/255.255.255.0
cisco-client-compat = true
tcp-port = 443
udp-port = 443

When you have finished entering the above, escape from insert mode, write the file to disk, and quit the editor.

Create a user id and password

Generate a user id and password that you will use to authenticate from AnyConnect to OCserv. For example, if you want your user id to be xuri:

$ sudo ocpasswd -c /etc/ocserv/ocpasswd xuri

You will be prompted to enter a password twice.
The password will not be displayed on your terminal:

Enter password:
Re-enter password:

Enable packet forwarding

Allow forwarding in the Linux kernel by editing the system control configuration file:

$ sudo vim /etc/sysctl.conf

Delete the # sign at the start to uncomment the line:

net.ipv4.ip_forward=1

Write the file to disk, quit the editor, and make this change active now:

$ sudo sysctl -p

Open firewall

Open the server firewall for SSL:

$ sudo iptables -I FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
$ sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT
$ sudo iptables -A INPUT -p udp --dport 443 -j ACCEPT

Enable network address translation (NAT):

$ sudo iptables -t nat -A POSTROUTING -j MASQUERADE

Assuming you have already installed iptables-persistent, reconfigure it to make your changes persist across server reboots:

$ sudo dpkg-reconfigure iptables-persistent

Start the OpenConnect VPN server

Check that nothing is already listening on port 443:

$ sudo lsof -i

In my case, sudo lsof -i showed systemd listening on port 443 over IPv6 (I do not know why systemd was doing this). The command systemctl --all list-sockets showed the related unit as ocserv.socket, and the solution was to issue sudo systemctl stop ocserv.socket.

Start OCserv:

$ sudo ocserv -c /etc/ocserv/ocserv.conf

or

$ sudo ocserv

Check that it is now listening on port 443:

$ sudo netstat -tulpn | grep 443

To run it in the foreground with debugging output instead:

$ ocserv -f -d 1

Optimization

Add ocserv as a system service:

$ cd /etc/init.d
$ sudo ln -s /lib/init/upstart-job ocserv
$ cd /etc/init
$ vi ocserv.conf

Write the following script in the configuration file (note the binary path matches the --prefix=/usr install above):

#!upstart
description "OpenConnect Server"
start on runlevel [2345]
stop on runlevel [06]
respawn
respawn limit 20 5
script
exec start-stop-daemon --start --pidfile /var/run/ocserv.pid --exec /usr/sbin/ocserv -- -f >> /dev/null 2>&1
end script

Now we can use service ocserv start and service ocserv stop to control the service.
Smart shunt

Set up no-route entries in the configuration file according to your own rules.

Make the CA certificate available for download

Your client, such as a Mac, iPad or iPhone, needs to be able to validate the server certificate. To allow it to do this, you must install your CA certificate on the iPad or iPhone as a trusted root certificate. The first step is to make the CA certificate available for download from your server. Open the firewall so that you can reach the server from a browser:

$ sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT

Install Apache:

$ sudo apt-get install apache2

Copy the CA certificate into the web root folder:

$ sudo cp /etc/ocserv/ca-cert.pem /var/www/html

Download and install the CA certificate.

Connect to OCserv on Mac

Download and install the latest version of the Cisco AnyConnect Secure Mobility Client for OS X. Add your server address (e.g. vpn.xuri.me), enter your username and password, and connect to the VPN.

Connect to OCserv on a mobile client

Now go to your iOS device (iPad or iPhone). Open the Safari browser and browse to the location of the CA certificate at your server's address. For example, if your server is located at vpn.xuri.me, then in Safari you would browse to:

http://vpn.xuri.me/ca-cert.pem

Follow the prompts to install the CA certificate as a "Profile" on your iOS 12.2 device. Once the "Profile" (i.e., certificate) is installed, tap on Done.

Install AnyConnect on the iOS 12.2 client

On your iPad or iPhone, open the App Store, search for Cisco AnyConnect, and install the client.

Configure AnyConnect on the iOS 12.2 client

Open the AnyConnect app. Tap on Connections, then tap on Add VPN Connection.

Description is whatever you want
Server Address is your server IP address (e.g. vpn.xuri.me)

Tap Save.

Connect to VPN

Now connect from your iPad or iPhone to your VPN.
You will be prompted to enter your username (the one you set up with ocpasswd a few minutes ago, for example, xuri), and then your password (the one you set up for that username when you invoked ocpasswd). The AnyConnect VPN toggle goes green when you are connected. (Also, if you log on to your server and use a command such as sudo tail /var/log/syslog, you will see messages such as sec-mod: initiating session for user 'xuri'.)

Troubleshooting

The client gets the error: The secure gateway has rejected the connection attempt. A new connection attempt to the same or another secure gateway is needed, which requires re-authentication. Add the MTU setting mtu = 1480 in the configuration file and restart the service.

### Use SSL SNI in Production

With SNI technology you can now host multiple SSL certificates on a single IP address. SNI inserts the requested hostname (website address) within the TLS handshake (the browser sends it as part of the 'Client Hello'), enabling the server to determine the most appropriate SSL certificate to present, removing the need for your server to possess psychic powers and guess which SSL certificate it should present. Using SNI, the server can safely host multiple SSL certificates for multiple sites, all using a single IP address.

I enabled CloudFlare Universal SSL for my site recently, but I got "This site works only in browsers with SNI support" via Google Chrome on Microsoft Windows XP SP3, so I wanted to know the reason for this message. I found the following explanation on the CloudFlare official website. Universal SSL uses Server Name Indication (SNI) certificates using the Elliptic Curve Digital Signature Algorithm (ECDSA).
SNI and ECDSA certificates work with the following modern browsers:

Desktop browsers installed on Windows Vista or OS X 10.6 or later:

Internet Explorer 7
Firefox 2
Opera 8 (with TLS 1.1 enabled)
Google Chrome v5.0.342.0
Safari 2.1

Mobile browsers:

Mobile Safari for iOS 4.0
Android 3.0 (Honeycomb) and later
Windows Phone 7

### Enable Certificate Transparency for HTTPS

Introduction

Fraudulently issued certificates have been used to impersonate numerous sites in Iran, such as Gmail and Facebook, which enabled the operators of the fake sites to spy on unsuspecting site users. In another case, a Malaysian subordinate certificate authority (DigiCert Sdn. Bhd.) mistakenly issued 22 weak SSL certificates, which could be used to impersonate websites and sign malicious software. As a result, major browsers had to revoke their trust in all certificates issued by DigiCert Sdn. Bhd. (Note: DigiCert Sdn. Bhd. is not affiliated with the U.S.-based corporation DigiCert, Inc.) Get more information from the Certificate Transparency official website.

Enable Certificate Transparency via TLS extension for Nginx

TLS extension

Server operators can deliver SCTs by using a special TLS extension. In this case, the CA issues the certificate to the server operator, and the server operator submits the certificate to the log. The log sends the SCT to the server operator, and the server operator uses a TLS extension with type signed_certificate_timestamp to deliver the SCT to the client during the TLS handshake. This method does not change the way a CA issues SSL certificates. However, it does require a server change to accommodate the TLS extension. The signed_certificate_timestamp TLS extension can be observed in the Server Hello stage of the handshake with a network protocol analyzer such as Wireshark.

Get the SCT file

ct-submit is a program that submits X.509 certificate chains to Certificate Transparency log servers.
It returns the Signed Certificate Timestamp structure in a format suitable for use with Apache's mod_ssl_ct module and nginx-ct.

Install Go and ct-submit:

$ sudo apt-get install golang
$ wget -O ct-submit.zip -c https://github.com/grahamedgecombe/ct-submit/archive/master.zip
$ unzip ct-submit.zip
$ cd ct-submit-master
$ go build

Use the resulting executable to submit a crt or pem format certificate file and receive a .sct file back:

$ ./ct-submit-master ct.googleapis.com/aviator <~/www/ssl/chained.crt >~/www/scts/aviator.sct
$ ./ct-submit-master ct1.digicert-ct.com/log <~/www/ssl/chained.crt >~/www/scts/digicert.sct

I have submitted my certificate to the Google and DigiCert log servers in this example; more servers can be found in the Certificate Transparency server list.

Add the Certificate Transparency (CT) module for Nginx

Configure nginx-ct with OpenSSL 1.0.2+; I used the CloudFlare-patched OpenSSL:

$ git clone https://github.com/cloudflare/sslconfig
$ wget -O openssl.zip -c https://github.com/openssl/openssl/archive/master.zip
$ unzip openssl.zip
$ mv openssl-master/ openssl
$ cd openssl && patch -p1 < ../sslconfig/patches/openssl__chacha20_poly1305_cf.patch
$ cd ../
$ wget -c http://nginx.org/download/nginx-1.9.11.tar.gz
$ tar zxf nginx-1.9.11.tar.gz
$ wget -O nginx-ct.zip -c https://github.com/grahamedgecombe/nginx-ct/archive/master.zip
$ unzip nginx-ct.zip
$ cd nginx-1.9.11
$ ./configure \
  --http-client-body-temp-path=/var/run/nginx/body \
  --http-fastcgi-temp-path=/var/run/nginx/fastcgi \
  --http-proxy-temp-path=/var/run/nginx/proxy \
  --http-scgi-temp-path=/var/run/nginx/scgi \
  --http-uwsgi-temp-path=/var/run/nginx/uwsgi \
  --user=www-data \
  --group=www-data \
  --prefix=/etc/nginx \
  --sbin-path=/usr/sbin/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/run/nginx.pid \
  --lock-path=/var/run/nginx.lock \
  --with-pcre-jit \
  --with-ipv6 \
  --with-http_v2_module \
  --with-debug \
  --with-http_stub_status_module \
  --with-http_realip_module \
  --with-http_addition_module \
  --with-http_dav_module \
  --with-http_gzip_static_module \
  --with-http_sub_module \
  --with-http_xslt_module \
  --with-http_ssl_module \
  --with-ld-opt=-lrt \
  --add-module=../nginx-ct-master \
  --with-openssl=../openssl
$ make
$ sudo service nginx stop
$ make install

Add the following lines to the Nginx configuration file:

ssl_ct on;
ssl_ct_static_scts /your/path/to/scts;

Validate Certificate Transparency in Google Chrome

A certificate without SCT information will prompt: The server did not supply any Certificate Transparency information. A certificate with SCT information will prompt: The server supplied valid Certificate Transparency information.

### Media Center Sharing Setup for Sony Bravia X8000C Series

Install Twonky Server on your computer, and set it up via the browser at http://127.0.0.1:9000. Connect to the media center on the TV: Settings → Network & Accessories → Home network setup (Set up home network connections.) → Server diagnostics (Check for servers and view server status.) → Server results.

### Using jsonb on PostgreSQL

The jsonb datatype is a specialised representation of JSON data, allowing PostgreSQL to be competitive in managing the "lingua franca" of the moment for the exchange of data via web services. It is useful to perform a number of tests to verify its actual performance. In this post I have done some tests using the jsonb datatype in PostgreSQL. Hardware: iMac 14.2, 3.4 GHz Intel Core i5 CPU, 8 GB 1600 MHz DDR3 RAM, 7200 RPM HDD. I have created 3 tables: user_ini with base info data (2,000,000 rows), tbl_user_json with a json type column (2,000,000 rows), and tbl_user_jsonb with a jsonb type column (2,000,000 rows).

Connect to the database and create benchmark data:

$ psql -d postgres
psql (9.5.0)
Type "help" for help.
postgres=# create table user_ini(id int4, user_id int8, user_name character varying(64), create_time timestamp(6) with time zone default clock_timestamp());
CREATE TABLE
postgres=# insert into user_ini(id,user_id,user_name) select r, round(random()*2000000), r || '_test' from generate_series(1,2000000) as r;
INSERT 0 2000000

Generate json data:

postgres=# create table tbl_user_json(id serial, user_info json);
CREATE TABLE
postgres=# insert into tbl_user_json(user_info) select row_to_json(user_ini) from user_ini;
INSERT 0 2000000

Generate jsonb data:

postgres=# create table tbl_user_jsonb(id serial, user_info jsonb);
CREATE TABLE
postgres=# insert into tbl_user_jsonb(user_info) select row_to_json(user_ini)::jsonb from user_ini;
INSERT 0 2000000

Get table size:

postgres=# \dt+ tbl_user_json
                     List of relations
 Schema |     Name      | Type  | Owner |  Size  | Description
--------+---------------+-------+-------+--------+-------------
 public | tbl_user_json | table | xuri  | 274 MB |
(1 row)

postgres=# \dt+ tbl_user_jsonb
                     List of relations
 Schema |      Name      | Type  | Owner |  Size  | Description
--------+----------------+-------+-------+--------+-------------
 public | tbl_user_jsonb | table | xuri  | 332 MB |
(1 row)

Query rows:

postgres=# select * from tbl_user_jsonb limit 3;
 id |                                                user_info
----+---------------------------------------------------------------------------------------------------------
  1 | {"id": 1, "user_id": 1961705, "user_name": "1_test", "create_time": "2016-02-04T11:46:47.434434+08:00"}
  2 | {"id": 2, "user_id": 1977164, "user_name": "2_test", "create_time": "2016-02-04T11:46:47.434508+08:00"}
  3 | {"id": 3, "user_id": 1731350, "user_name": "3_test", "create_time": "2016-02-04T11:46:47.43451+08:00"}
(3 rows)

jsonb key and value query test

Query the user_info field from the tbl_user_jsonb table:

postgres=# select * from tbl_user_jsonb where user_info->>'user_name'= '1_test';
 id |                                                user_info
----+---------------------------------------------------------------------------------------------------------
  1 | {"id": 1, "user_id": 1961705, "user_name": "1_test", "create_time": "2016-02-04T11:46:47.434434+08:00"}
(1 row)

Query explain analyze without an index:

postgres=# explain analyze select * from tbl_user_jsonb where user_info->>'user_name'= '1_test';
                                                     QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
 Seq Scan on tbl_user_jsonb (cost=0.00..72545.09 rows=10000 width=141) (actual time=0.012..359.949 rows=1 loops=1)
   Filter: ((user_info ->> 'user_name'::text) = '1_test'::text)
   Rows Removed by Filter: 1999999
 Planning time: 0.033 ms
 Execution time: 359.966 ms
(5 rows)

Create a btree expression index on the user_name key of the user_info column of the tbl_user_jsonb table:

postgres=# create index idx_gin_user_infob_user_name on tbl_user_jsonb using btree ((user_info ->> 'user_name'));
CREATE INDEX

The same query now takes just 0.057 ms:

postgres=# explain analyze select * from tbl_user_jsonb where user_info->>'user_name'= '1_test';
                                                                 QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on tbl_user_jsonb (cost=233.93..23867.07 rows=10000 width=141) (actual time=0.037..0.037 rows=1 loops=1)
   Recheck Cond: ((user_info ->> 'user_name'::text) = '1_test'::text)
   Heap Blocks: exact=1
   -> Bitmap Index Scan on idx_gin_user_infob_user_name (cost=0.00..231.43 rows=10000 width=0) (actual time=0.034..0.034 rows=1 loops=1)
         Index Cond: ((user_info ->> 'user_name'::text) = '1_test'::text)
 Planning time: 0.127 ms
 Execution time: 0.057 ms
(7 rows)

Query by the user_id key without an index:

postgres=# explain analyze select * from tbl_user_jsonb where user_info->>'user_id'= '1';
                                                       QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Seq Scan on tbl_user_jsonb
 (cost=0.00..72545.00 rows=10000 width=141) (actual time=459.846..486.119 rows=1 loops=1)
   Filter: ((user_info ->> 'user_id'::text) = '1'::text)
   Rows Removed by Filter: 1999999
 Planning time: 0.037 ms
 Execution time: 486.131 ms
(5 rows)

GIN index

Before using a GIN index, drop the index idx_gin_user_infob_user_name:

postgres=# drop index idx_gin_user_infob_user_name;
DROP INDEX

Creating the GIN index took 202147.862 ms:

postgres=# create index idx_tbl_user_jsonb_user_Info on tbl_user_jsonb using gin (user_Info);
CREATE INDEX
postgres=# \di+ idx_tbl_user_jsonb_user_Info
                                    List of relations
 Schema |             Name             | Type  | Owner |     Table      |  Size  | Description
--------+------------------------------+-------+-------+----------------+--------+-------------
 public | idx_tbl_user_jsonb_user_info | index | xuri  | tbl_user_jsonb | 445 MB |
(1 row)

Query by key and value with the index, using the containment operator @>, which the GIN index supports:

postgres=# explain analyze select * from tbl_user_jsonb where user_info @> '{"user_id": 1017031}';
                                                                 QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on tbl_user_jsonb (cost=59.50..6647.26 rows=2000 width=141) (actual time=56.129..56.145 rows=3 loops=1)
   Recheck Cond: (user_info @> '{"user_id": 1017031}'::jsonb)
   Rows Removed by Index Recheck: 1
   Heap Blocks: exact=4
   -> Bitmap Index Scan on idx_tbl_user_jsonb_user_info (cost=0.00..59.00 rows=2000 width=0) (actual time=56.098..56.098 rows=4 loops=1)
         Index Cond: (user_info @> '{"user_id": 1017031}'::jsonb)
 Planning time: 28.148 ms
 Execution time: 56.183 ms
(8 rows)

Query with the ->> operator (the GIN index cannot be used for this operator, so the planner falls back to a sequential scan):

postgres=# explain analyze select * from tbl_user_jsonb where user_info->>'user_name' ='6_test';
                                                     QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
 Seq Scan on tbl_user_jsonb (cost=0.00..72545.00 rows=10000 width=141) (actual time=0.028..423.282 rows=1 loops=1)
   Filter: ((user_info ->>
 'user_name'::text) = '6_test'::text)
   Rows Removed by Filter: 1999999
 Planning time: 0.036 ms
 Execution time: 423.295 ms
(5 rows)

Drop the GIN index idx_tbl_user_jsonb_user_Info:

postgres=# drop index idx_tbl_user_jsonb_user_Info;
DROP INDEX

Create a new GIN index on just the user_name key and run explain analyze again; this time it is very fast:

postgres=# create index idx_gin_user_info_user_name on tbl_user_jsonb using gin((user_info -> 'user_name'));
CREATE INDEX
postgres=# explain analyze select * from tbl_user_jsonb where user_info->'user_name' ? '6_test';
                                                                QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on tbl_user_jsonb (cost=35.50..6628.26 rows=2000 width=141) (actual time=0.019..0.019 rows=1 loops=1)
   Recheck Cond: ((user_info -> 'user_name'::text) ? '6_test'::text)
   Heap Blocks: exact=1
   -> Bitmap Index Scan on idx_gin_user_info_user_name (cost=0.00..35.00 rows=2000 width=0) (actual time=0.009..0.009 rows=1 loops=1)
         Index Cond: ((user_info -> 'user_name'::text) ? '6_test'::text)
 Planning time: 0.121 ms
 Execution time: 0.035 ms
(7 rows)

json and jsonb query benchmark

Before comparing json and jsonb, drop the GIN index idx_gin_user_info_user_name:

postgres=# drop index idx_gin_user_info_user_name;
DROP INDEX

Create a functional index on the id key for both the tbl_user_json and tbl_user_jsonb tables:

postgres=# create index idx_gin_user_info_id on tbl_user_json using btree (((user_info ->> 'id')::integer));
CREATE INDEX
postgres=# create index idx_gin_user_infob_id on tbl_user_jsonb using btree (((user_info ->> 'id')::integer));
CREATE INDEX

Because ->> returns a value of type text, and this value will be compared with an integer, we need to create a functional index with a cast.
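The cast matters because ->> always yields text, and text compares lexicographically rather than numerically. As a rough illustration (not part of the original post), here is the difference between the two behaviours in Python:

```python
# Sketch: the jsonb ->> operator returns text, so without the ::integer cast
# PostgreSQL would compare strings lexicographically rather than numerically.
# A rough Python analogue of the two behaviours:
row = {"id": 20000, "user_id": 1961705, "user_name": "20000_test"}

def arrow_arrow_text(doc, key):
    """Rough analogue of jsonb ->> : extract a top-level field as text (or None)."""
    value = doc.get(key)
    return None if value is None else str(value)

# Lexicographic comparison, as would happen without the cast:
print(arrow_arrow_text(row, "id") < "9999")       # True  ('2' sorts before '9')
# Numeric comparison, as the (user_info->>'id')::int4 expression index does:
print(int(arrow_arrow_text(row, "id")) < 9999)    # False (20000 > 9999)
```

Indexing the cast expression lets the planner match the query's `(user_info->>'id')::int4` predicate exactly.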
json query:

postgres=# explain analyze select id,user_info->'id',user_info->'user_name' from tbl_user_json where (user_info->>'id')::int4 > '1' and (user_info->>'id')::int4 < '10000';
                                                                QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on tbl_user_json (cost=214.93..22443.74 rows=10000 width=111) (actual time=16.235..32.507 rows=9998 loops=1)
   Recheck Cond: ((((user_info ->> 'id'::text))::integer > 1) AND (((user_info ->> 'id'::text))::integer < 10000))
   Heap Blocks: exact=172
   -> Bitmap Index Scan on idx_gin_user_info_id (cost=0.00..212.43 rows=10000 width=0) (actual time=16.179..16.179 rows=9998 loops=1)
         Index Cond: ((((user_info ->> 'id'::text))::integer > 1) AND (((user_info ->> 'id'::text))::integer < 10000))
 Planning time: 6.588 ms
 Execution time: 32.929 ms
(7 rows)

jsonb query:

postgres=# explain analyze select id,user_info->'id',user_info->'user_name' from tbl_user_jsonb where (user_info->>'id')::int4 > '1' and (user_info->>'id')::int4 < '10000';
                                                                 QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on tbl_user_jsonb (cost=214.93..24048.07 rows=10000 width=141) (actual time=16.194..20.925 rows=9998 loops=1)
   Recheck Cond: ((((user_info ->> 'id'::text))::integer > 1) AND (((user_info ->> 'id'::text))::integer < 10000))
   Heap Blocks: exact=205
   -> Bitmap Index Scan on idx_gin_user_infob_id (cost=0.00..212.43 rows=10000 width=0) (actual time=16.164..16.164 rows=9998 loops=1)
         Index Cond: ((((user_info ->> 'id'::text))::integer > 1) AND (((user_info ->> 'id'::text))::integer < 10000))
 Planning time: 0.136 ms
 Executio

### Create Trigger in PostgreSQL

The trigger will be associated with the specified table, view, or foreign table and will execute the specified function function_name when certain events occur.
The trigger can be specified to fire before the operation is attempted on a row (before constraints are checked and the INSERT, UPDATE, or DELETE is attempted); or after the operation has completed (after constraints are checked and the INSERT, UPDATE, or DELETE has completed); or instead of the operation (in the case of inserts, updates or deletes on a view). If the trigger fires before or instead of the event, the trigger can skip the operation for the current row, or change the row being inserted (for INSERT and UPDATE operations only). If the trigger fires after the event, all changes, including the effects of other triggers, are "visible" to the trigger.

Example goal: when rows are deleted from the client table, insert a new record into the uninstall table.

List of databases:

test1=# \l
                                List of databases
   Name    | Owner | Encoding |   Collate   |    Ctype    | Access privileges
-----------+-------+----------+-------------+-------------+-------------------
 postgres  | xuri  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | xuri  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/xuri          +
           |       |          |             |             | xuri=CTc/xuri
 template1 | xuri  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/xuri          +
           |       |          |             |             | xuri=CTc/xuri
 test1     | xuri  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
(4 rows)

List of relations:

test1=# \dt public.*
          List of relations
 Schema |   Name    | Type  | Owner
--------+-----------+-------+-------
 public | client    | table | xuri
 public | uninstall | table | xuri
(2 rows)

Schemas of the client and uninstall tables:

test1=# \d+ client
                                                   Table "public.client"
 Column |       Type        |                      Modifiers                       | Storage  | Stats target | Description
--------+-------------------+------------------------------------------------------+----------+--------------+-------------
 id     | integer           | not null default nextval('client_id_seq'::regclass)  | plain    |              |
 name   | character varying |                                                      | extended |              |
Indexes:
    "client_pkey" PRIMARY KEY, btree (id)

test1=# \d+ uninstall
                                                   Table "public.uninstall"
 Column |       Type        |                        Modifiers                        | Storage  | Stats target | Description
--------+-------------------+---------------------------------------------------------+----------+--------------+-------------
 id     | integer           | not null default nextval('uninstall_id_seq'::regclass) | plain    |              |
 name   | character varying |                                                         | extended |              |
Indexes:
    "uninstall_pkey" PRIMARY KEY, btree (id)

Install plpgsql in the database:

$ createlang plpgsql DATABASE

Create the trigger function:

CREATE OR REPLACE FUNCTION notify_trigger() RETURNS trigger AS $BODY$
BEGIN
    IF (TG_OP = 'DELETE') THEN
        INSERT INTO uninstall (id, name) VALUES (OLD.id, OLD.name);
    END IF;
    RETURN NULL;
END;
$BODY$ LANGUAGE plpgsql;

Create the trigger:

CREATE TRIGGER notify_trigger AFTER DELETE ON client FOR EACH ROW EXECUTE PROCEDURE notify_trigger();

Reference: PostgreSQL Documentation - CREATE TRIGGER

### Resolve SVN CornerStone Can't Commit File

Introduction

Cornerstone is a version control app built on Subversion that makes serious version control easier than ever. With full support for all of Subversion's rich features, it's both incredibly powerful and super-easy to use. If you try to commit with Cornerstone and it reports "Can not commit file":

Download and install Subversion-Client-1.8.15_10.10.x.pkg from WANDISCO.
Back up the svn* files in the /Applications/Xcode.app/Contents/Developer/usr/bin/ folder.
Copy all svn* files from /opt/subversion/bin/ to /Applications/Xcode.app/Contents/Developer/usr/bin/, replacing the files you just backed up.

Check the system svn version:

$ svn --version
svn, version 1.8.15 (r1718365)
   compiled Dec 12 2015, 14:32:53 on x86_64-apple-darwin14.0.0

Copyright (C) 2015 The Apache Software Foundation.
This software consists of contributions made by many people; see the NOTICE file for more information.
Subversion is open source software, see http://subversion.apache.org/

Reinstall Cornerstone, and that's it.
### Wiener Philharmoniker Vienna New Year's Concert 2016

The Vienna Philharmonic New Year's Concert took place under the baton of Mariss Jansons on January 1, 2016, in the Golden Hall of the Musikverein in Vienna. Mariss Jansons, whose musical collaboration with the Vienna Philharmonic goes back to 1992, conducted the New Year's Concert for the third time, following 2006 and 2012. The New Year's Concert 2016 also marked the 75th anniversary of this unique cultural event. The concert was broadcast in over 90 countries around the world and seen by 50 million television viewers.

Program

Robert Stolz: Uno-Marsch
Johann Strauss, Jr.: Schatz-Walzer, op. 418
Johann Strauss, Jr.: Violetta. Polka française, op. 404
Johann Strauss, Jr.: Vergnügungszug. Polka (schnell), op. 281
Carl Michael Ziehrer: Weaner Madl'n. Walzer, op. 388
Eduard Strauss: Mit Extrapost. Galopp, op. 259
-- Pause --
Johann Strauss, Jr.: Ouvertüre zu Eine Nacht in Venedig (Wiener Fassung)
Eduard Strauss: Ausser Rand und Band. Polka schnell, op. 168
Josef Strauss: Sphärenklänge. Walzer, op. 235
Johann Strauss, Jr.: Sängerslust. Polka française, op. 328
Josef Strauss: Auf Ferienreisen. Polka schnell, op. 133
Johann Strauss, Jr.: Fürstin Ninetta - Entr'acte zwischen 2. und 3. Akt
Émile Waldteufel: España. Walzer, op. 236
Josef Hellmesberger sen.: Ball-Szene
Johann Strauss, sen.: Seufzer-Galopp, op. 9
Josef Strauss: Die Libelle. Polka mazur, op. 204
Johann Strauss, Jr.: Kaiser-Walzer, op. 437
Johann Strauss, Jr.: Auf der Jagd. Polka schnell, op. 373

Vienna Philharmonic Orchestra - New Year's Concert 2016 with Mariss Jansons - MPEG2HD 422+DolbyE - UPLINK [SRSL] 50.36GB
magnet:?xt=urn:btih:d88e8234b7e28d21a81fc861560e8378ce8f1472&dn=Vienna Philharmonic Orchestra - New Year's Concert 2016 with Mariss Jansons - MPEG2HD 422+DolbyE - UPLINK [SRSL]

### Lang Lang in Versailles

Ever since his first visit to the magical Hall of Mirrors at Versailles Palace, superstar classical pianist Lang Lang had dreamed of performing there.
His dream came to fruition in the form of a special concert there in June 2015. Lang Lang selected the same repertoire that the City of Light inspired him to play on his latest album, Lang Lang in Paris: Chopin's four momentous Scherzi and Tchaikovsky's rarer, masterful cycle, The Seasons, making Lang Lang Live in Versailles a companion piece to the album. The live, ninety-minute concert was captured by multiple cameras in 4K, including strikingly intimate and seamless Steadicam footage which provides the viewer with an unprecedented experience of this once-in-a-lifetime recital.

TRACK LIST:
1. Opening
2. Scherzo No. 1 in B Minor, Op. 20
3. Scherzo No. 2 in B-Flat Minor, Op. 31
4. Scherzo No. 3 in C-Sharp Minor, Op. 39
5. Scherzo No. 4 in E Major, Op. 54
6. The Seasons, Op. 37a; I. January: At the Fireside
7. The Seasons, Op. 37a; II. February: Carnival
8. The Seasons, Op. 37a; III. March: Song of the Lark
9. The Seasons, Op. 37a; IV. April: Snowdrop
10. The Seasons, Op. 37a; V. May: Starlit Nights
11. The Seasons, Op. 37a; VI. June: Barcarolle
12. The Seasons, Op. 37a; VII. July: Song of the Reaper
13. The Seasons, Op. 37a; VIII. August: Harvest
14. The Seasons, Op. 37a; IX. September: The Hunt
15. The Seasons, Op. 37a; X. October: Autumn Song
16. The Seasons, Op. 37a; XI. November: Troika
17. The Seasons, Op. 37a; XII. December: Christmas
18. Applause

LANG_LANG_LIVE_IN_VERSAILLES_HDCLUB.ISO (27.16GB)

### Monitor Servers with Zabbix

Introduction
Zabbix is an enterprise open source monitoring solution for networks and applications, created by Alexei Vladishev. It is designed to monitor and track the status of various network services, servers, and other network hardware. Zabbix uses MySQL, PostgreSQL, SQLite, Oracle or IBM DB2 to store data. Its backend is written in C and the web frontend is written in PHP.
Zabbix offers several monitoring options: Simple checks can verify the availability and responsiveness of standard services such as SMTP or HTTP without installing any software on the monitored host. A Zabbix agent can also be installed on UNIX and Windows hosts to monitor statistics such as CPU load, network utilization, disk space, etc. As an alternative to installing an agent on hosts, Zabbix includes support for monitoring via SNMP, TCP and ICMP checks, as well as over IPMI, JMX, SSH, Telnet and using custom parameters. Zabbix supports a variety of real-time notification mechanisms, including XMPP.

Installing the Zabbix Server
Let's install the Zabbix server on Ubuntu 14.04 LTS from packages. Zabbix 2.4 for Ubuntu 14.04 LTS:

$ wget http://repo.zabbix.com/zabbix/2.4/ubuntu/pool/main/z/zabbix-release/zabbix-release_2.4-1+trusty_all.deb
$ sudo dpkg -i zabbix-release_2.4-1+trusty_all.deb
$ sudo apt-get update

Installing Zabbix packages
Install the Zabbix packages. dbconfig-common will create the database and populate the initial schema and data automatically. If the backend database is located on a different server, set dbc_remote_questions_default='true' in /etc/dbconfig-common/config.

Example for the Zabbix server and web frontend with a MySQL database:

$ sudo apt-get install zabbix-server-mysql zabbix-frontend-php

Example for installing the Zabbix agent only:

$ sudo apt-get install zabbix-agent

Editing PHP configuration for the Zabbix frontend
The Apache configuration file for the Zabbix frontend is located at /etc/apache2/conf.d/zabbix. Some PHP settings are already configured:

php_value max_execution_time 300
php_value memory_limit 128M
php_value post_max_size 16M
php_value upload_max_filesize 2M
php_value max_input_time 300
# php_value date.timezone Europe/Riga

It's necessary to uncomment the date.timezone setting and set the correct timezone for your location. After changing the configuration file, restart the Apache web server.
$ sudo service apache2 restart

The Zabbix frontend is available at http://zabbix-frontend-hostname/zabbix in the browser. You will be presented with a login screen. The default credentials are as follows:

Username = admin
Password = zabbix

When you have logged in, click on the "Configuration" button, and then "Hosts" in the top navigation bar. Click on the name of the server (by default, this should be "Zabbix server"). This will open the host configuration page. Adjust the Hostname to reflect the hostname of your Zabbix server (this should be the same hostname you entered in the agent configuration for the server machine). At the bottom, check the "Enabled" box. Click save. You will be dropped back to the Hosts configuration page. Click on the hostname again. This time, click on the "Clone" button at the bottom of the page. We will configure this to reflect the settings of the client machine. Change the hostname and the IP address to reflect the correct information from the client agent file. In the "groups" section, select "Zabbix servers" and click the right arrow icon to remove that group. Select "Linux servers" and click the left arrow icon to add that group setting. Click "Save" at the bottom. After a few minutes, you should be able to see both computers by going to "Monitoring" and then clicking on "Graph"; you will see a graphical representation of the data points that have been collected. As with all monitoring software, these graphs will become more interesting the longer the server runs. There are many other options that you can utilize to keep an eye on your servers.

Reference Document
Zabbix Documentation 3.0
Related Article
Install the Cacti Server Monitor on Ubuntu Server

### Upgrade to PHP 7 on Linux

Prerequisites
This guide assumes that you are running PHP 5.x on an Ubuntu 16.04 machine, using PHP-FPM in conjunction with Nginx. It also assumes that you have a non-root user configured with sudo privileges for administrative tasks.
Adding a PPA for PHP 7 Packages
A Personal Package Archive, or PPA, is an Apt repository hosted on Launchpad. PPAs allow third-party developers to build and distribute packages for Ubuntu outside of the official channels. They're often useful sources of beta software, modified builds, and backports to older releases of the operating system. Ondřej Surý maintains the PHP packages for Debian, and offers a PPA for PHP on Ubuntu. Before doing anything else, log in to your system, and add Ondřej's PPA to the system's Apt sources:

$ sudo apt-get install python-software-properties
$ sudo add-apt-repository ppa:ondrej/php

Once the PPA is installed, update the local package cache to include its contents:

$ sudo apt-get update

Current PHP packages
This only applies if you are upgrading from a previous version. Note down the current PHP packages you have:

$ dpkg -l | grep php | tee packages.txt

Now that we have access to packages for PHP 7.0, we can replace the existing PHP installation.

Upgrading PHP-FPM with Nginx
This section describes the upgrade process for a system using Nginx as the web server and PHP-FPM to execute PHP code. First, install the new PHP-FPM package and its dependencies:

$ sudo apt-get install -y php7.0

Check PHP Version
Now use the following command to check the installed PHP version on your system:

$ php -v

Note: If you have made substantial modifications to any configuration files in /etc/php5/, those files are still in place, and can be referenced. Configuration files for PHP 7.0 now live in /etc/php/7.0.

Install PHP 7 Modules
You may also need to install modules based on your application requirements. Use the following command to find the available PHP 7 modules:

$ sudo apt-cache search php7-*

If you are using MySQL, be sure to re-install the PHP MySQL bindings:

$ sudo apt-get install php7.0-mysql

Updating Nginx Site(s) to Use New Socket Path
Nginx communicates with PHP-FPM using a Unix domain socket.
Sockets map to a path on the filesystem, and our PHP 7 installation uses a new path by default:

PHP 5: /var/run/php5-fpm.sock
PHP 7: /var/run/php/php7.0-fpm.sock

Open the default site configuration file with vim (or your editor of choice):

$ sudo vim /etc/nginx/sites-enabled/default

Your configuration may differ somewhat. Look for a block beginning with location ~ \.php$ {, and a line that looks something like fastcgi_pass unix:/var/run/php5-fpm.sock;. Change this to use unix:/var/run/php/php7.0-fpm.sock.

/etc/nginx/sites-enabled/default

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    root /var/www/html;
    index index.php index.html index.htm;
    server_name server_domain_name_or_IP;
    location / {
        try_files $uri $uri/ =404;
    }
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

Save and exit the file, then repeat this process for any other virtual sites defined in /etc/nginx/sites-enabled that need to support PHP.
Replace strings in multiple files in bulk with sed:

$ sudo sed -i "s/php5-fpm/php7.0-fpm/g" `grep "php5-fpm" -rl /etc/nginx/sites-available`

Restart Nginx:

$ sudo service nginx restart

Issues

Got error PHP Warning: Module 'mcrypt' already loaded in Unknown on line 0

Comment out extension=mcrypt.so in /etc/php/mods-available/mcrypt.ini (which should cover both the CLI and FPM SAPIs), like this:

; configuration for php mcrypt module
; priority=20
; extension=mcrypt.so

After doing this, mcrypt still shows up in the list of loaded modules, now loaded only once:

$ php -m | grep mcrypt
mcrypt

The LaTeX for WordPress plugin breaks because lines 47 and 49 of wp-content/plugins/latex/latex.php rely on the preg_replace() /e modifier, which PHP 7 removed. Comment out those two lines and replace them with the preg_replace_callback() calls below:

$text = preg_replace_callback( '/&#(\d+);/m', function($m) { return chr(intval($m[1])); }, $text); # decimal notation
$text = preg_replace_callback( '/&#x([a-f0-9]+);/mi', function($m) { return chr(intval("0x$m[1]")); }, $text); # hex notation

### Atom Essential Packages

Wiki
Atom is a text editor that's modern, approachable, yet hackable to the core—a tool you can customize to do anything but also use productively without ever touching a config file.

The OS X .apmrc config is located in ~/.atom/.apmrc

My Packages List

Linter by atom-community
Linter is a base linter provider for the hackable Atom editor. It provides a top-level API to its consumers that allows them to visualize errors and other kinds of messages, easily.

esformatter by sindresorhus
Beautify JavaScript.

atom-beautify by Glavin001
Beautify HTML, CSS, JavaScript, PHP, Python, Ruby, Java, C, C++, C#, Objective-C, CoffeeScript, TypeScript, and SQL in Atom.

minimap by atom-minimap
A preview of the full source code.

autocomplete-paths by atom-community
Adds path autocompletion to autocomplete+.

vim-mode by atom
Adds vim modal control.

docblockr by NikhilKalige
DocBlockr is a package for Atom which is designed to make writing documentation faster and easier.
emmet by emmetio
Emmet – the essential tool for web developers.

javascript-snippets by zenorocha
JavaScript & NodeJS Snippets for Atom.

file-icons by DanBrooker
Assign file extension icons and colours for improved visual grepping.

etc ...

Related Articles
Solve Sublime Text "There are no packages available for installation"

### PHP UDP Syslog Client

Wiki
Syslog is a widely used standard for message logging. It permits separation of the software that generates messages, the system that stores them, and the software that reports and analyzes them. Computer system designers may use syslog for system management and security auditing as well as general informational, analysis, and debugging messages. A wide variety of devices, such as printers and routers, and message receivers across many platforms use the syslog standard. This permits the consolidation of logging data from different types of systems in a central repository. Implementations of syslog exist for many operating systems. Each message is labeled with a facility code, and assigned a severity label. The facility code indicates the software type of the application that generated the message. Messages may be directed to various destinations, tuned by facility and severity, including the console, files, remote syslog servers, or relays. Most implementations provide a command line utility, often called logger, as well as a link library, to send messages to the log.

In this post, I create a socket resource with PHP's socket_create() function and use it to send remote UDP syslog messages.

$remote_ip = '127.0.0.1';
$remote_port = 514;
$severity_level = 3;
$sock = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
$syslog_content = ['info' => md5(time())];
$syslog_message = "<{$severity_level}>" . json_encode($syslog_content);
socket_sendto($sock, $syslog_message, strlen($syslog_message), 0, $remote_ip, (int)$remote_port);
socket_close($sock);

Severity Level Code

0 - Emergency: system is unusable.
A "panic" condition - notify all tech staff on call? (earthquake? tornado?) - affects multiple apps/servers/sites...
1 - Alert: action must be taken immediately. Should be corrected immediately - notify staff who can fix the problem - example is loss of backup ISP connection.
2 - Critical: critical conditions. Should be corrected immediately, but indicates failure in a primary system - fix CRITICAL problems before ALERT - example is loss of primary ISP connection.
3 - Error: error conditions. Non-urgent failures - these should be relayed to developers or admins; each item must be resolved within a given time.
4 - Warning: warning conditions. Warning messages - not an error, but an indication that an error will occur if action is not taken, e.g. file system 85% full - each item must be resolved within a given time.
5 - Notice: normal but significant condition. Events that are unusual but not error conditions - might be summarized in an email to developers or admins to spot potential problems - no immediate action required.
6 - Informational: informational messages. Normal operational messages - may be harvested for reporting, measuring throughput, etc. - no action required.
7 - Debug: debug-level messages. Info useful to developers for debugging the app, not useful during operations.

Setup Syslog Server on Windows
Install Kiwi Syslog Server on Windows. This application receives, logs, displays, alerts on, and forwards syslog, SNMP trap, and Windows event log messages from routers, switches, firewalls, Linux and UNIX hosts, and Windows machines.

Test Send Remote UDP Syslog

### Use GPG to Encrypt and Sign Messages

Wiki
GNU Privacy Guard (GnuPG or GPG) is a free software replacement for Symantec's PGP cryptographic software suite. GnuPG is compliant with RFC 4880, which is the IETF standards track specification of OpenPGP. Modern versions of PGP and Veridis' Filecrypt are interoperable with GnuPG and other OpenPGP-compliant systems.
GnuPG is part of the GNU project, and has received major funding from the German government. We can use GPG to encrypt and sign messages.

Setup

Install from source code:

$ ./configure
$ make
$ make install

Install via the system package manager:

# Debian / Ubuntu
$ sudo apt-get install gnupg

# Fedora / CentOS / RedHat
$ yum install gnupg

Install on OS X via Homebrew:

$ brew install gnupg

Or install the GPG Tools application. After installing, run the gpg --help command for usage information.

Generate a Key Pair

$ gpg --gen-key

This will take you through a few questions that will configure your keys.

Please select what kind of key you want: (1) RSA and RSA (default)
What keysize do you want? 4096
Key is valid for? 0
Is this correct? y
Real name: your real name here
Email address: your_email@address.com
Comment: Optional comment that will be visible in your signature
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
Enter passphrase: Enter a secure passphrase here (upper & lower case, digits, symbols)

Create a Revocation Certificate
You need to have a way of invalidating your key pair in case there is a security breach, or in case you lose your secret key. There is an easy way of doing this with the GPG software. This should be done as soon as you make the key pair, not when you need it. This revocation key must be generated ahead of time and kept in a secure, separate location in case your computer is compromised or inoperable. Type:

$ gpg --gen-revoke [UserID]

List the keys with the gpg --list-keys command, and delete keys with the gpg --delete-key [UserID] command.
Export Your Public Key $ gpg --armor --output public-key.txt --export [UserID] Export Your Private Key $ gpg --armor --output private-key.txt --export-secret-keys Upload Your Public Key $ gpg --send-keys [UserID] --keyserver hkp://pgp.mit.edu How To Verify and Sign Keys While you can freely distribute your generated public key file and people can use this to contact you in an encrypted way, there is still an issue of trust in the initial public key transmission. Verify the Other Person's Identity How do you know that the person giving you the public key is who they say they are? In some cases, this may be simple. You may be sitting right next to the person with your laptops both open and exchanging keys. This should be a pretty secure way of identifying that you are receiving the correct, legitimate key. But there are many other circumstances where such personal contact is not possible. You may not know the other party personally, or you may be separated by physical distance. If you never want to communicate over insecure channels, verification of the public key could be problematic. Luckily, instead of verifying the entire public keys of both parties, you can simply compare the fingerprint derived from these keys. This will give you a reasonable assurance that you both are using the same public key information. You can get the fingerprint of a public key by typing: $ gpg --fingerprint [UserID] This will produce a much more manageable string of numbers to compare. You can compare this string with the person themselves, or someone else who has access to that person. Import Other Users' Public Keys GPG would be pretty useless if you could not accept other public keys from people you wished to communicate with. You can import someone's public key in a variety of ways. 
If you've obtained a public key from someone in a text file, GPG can import it with the following command:

$ gpg --import name_of_pub_key_file

There is also the possibility that the person you are wishing to communicate with has uploaded their key to a public key server. These key servers are used to house people's public keys from all over the world. A popular key server that syncs its information with a variety of other servers is the MIT public key server. You can search for people by their name or email address by going here in your web browser: pgp.mit.edu. You can also search the key server from within GPG by typing the following:

$ gpg --keyserver pgp.mit.edu --search-keys [UserID]

Encrypt File

$ gpg --recipient [UserID] --output demo.en.txt --encrypt demo.txt

Decrypt File

$ gpg --decrypt demo.en.txt --output demo.de.txt

Sign File

$ gpg --sign demo.txt

After executing that command, it will generate a demo.txt.gpg file with a binary-encoded signature. If you want an ASCII-format file, use the --clearsign parameter:

$ gpg --clearsign demo.txt

After executing that command, it will generate a demo.txt.asc file with an ASCII-encoded signature. If you want the signature in a file separate from the original, use the --detach-sign parameter:

$ gpg --detach-sign demo.txt

After executing that command, it will generate a binary-encoded demo.txt.sig signature file. If you want an ASCII-format signature file, add the --armor parameter:

$ gpg --armor --detach-sign demo.txt

Verify Sign

$ gpg --verify demo.txt.asc demo.txt

Use GPG Suite - GPG Keychain to Encrypt E-mail on OS X
After GPG Suite is installed, OpenPGP services are added to the system. You can add some shortcuts in System Preferences -> Keyboard -> Shortcuts like this:

Sign E-mail
In Mail.app select all mail content and choose OpenPGP: Sign Selection in the right-click menu (or use shortcuts with your custom settings). Enter your password and click the OK button.
E-mail Encryption
After importing the other user's public key, select all mail content (also including the signed selection if it exists) and choose OpenPGP: Encrypt Selection (or use shortcuts with your custom settings). Select the mail recipients and send it.

Decrypt E-mail
Select the code from -----BEGIN PGP MESSAGE----- to -----END PGP MESSAGE----- in the mail content, and select OpenPGP: Decrypt Selection in the right-click menu (or use shortcuts with your custom settings).

### Troubleshooting Homebrew MySQL on OS X

I upgraded MySQL to 5.7.9 with Homebrew on OS X El Capitan (10.11.1), and got ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) when starting the service.

$ brew upgrade mysql
==> Downloading https://homebrew.bintray.com/bottles/mysql-5.7.9.el_capitan.bott
######################################################################## 100.0%
==> Pouring mysql-5.7.9.el_capitan.bottle.1.tar.gz
==> Caveats
We've installed your MySQL database without a root password. To secure it run:
mysql_secure_installation
To connect run:
mysql -uroot
To have launchd start mysql at login:
ln -sfv /usr/local/opt/mysql/*.plist ~/Library/LaunchAgents
Then to load mysql now:
launchctl load ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
Or, if you don't want/need launchctl, you can just run:
mysql.server start
==> Summary
🍺 /usr/local/Cellar/mysql/5.7.9: 12629 files, 464M

$ mysql -uroot
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)

I solved it by re-initializing the data directory, losing all databases:

$ launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
$ rm -rf /usr/local/var/mysql
$ mysqld --initialize

The initialize step will create the data dir, and also a root user with a temporary password. Be sure you copy this password, then log in and change it. The password appears in the output below.
[Note] A temporary password is generated for root@localhost: A:>s6r<6IeLh Restart service via Homebrew and reset root user password. $ brew services start mysql ==> Successfully started `mysql` (label: homebrew.mxcl.mysql) MacBook Pro:Caches xuri$ mysql -uroot -p Enter password: **** <- enter root password here Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 2 Server version: 5.7.9 Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'password'; Query OK, 0 rows affected (0.00 sec) mysql> exit Bye Resolve MySQL 8.0.12: `(HY000/2054) The server requested authentication method unknown to the client [caching_sha2_password] in ...` Add default authentication plugin to my.cnf: [mysqld] default_authentication_plugin = mysql_native_password ### Troubleshooting - Warning: Possible Dns Spoofing Detected! Sometimes, when logging into another computer, you'll get a scary looking warning like: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: POSSIBLE DNS SPOOFING DETECTED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ The RSA host key for example.com has changed, and the key for the corresponding IP address 192.168.1.100 is unknown. This could either mean that DNS SPOOFING is happening or the IP address for the host and its host key have changed at the same time. @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! 
It is also possible that the RSA host key has just been changed. The fingerprint for the RSA key sent by the remote host is 69:4e:bb:70:6a:61:e2:73:07:6f:b4:00:41:07:d8:9c. Please contact your system administrator. Add correct host key in /home/deploy/.ssh/known_hosts to get rid of this message. Offending key in /home/deploy/.ssh/known_hosts:1 Keyboard-interactive authentication is disabled to avoid man-in-the-middle attacks. Although this looks bad, all it probably means is that the IP address of the computer you're trying to log into has changed. Because we use DHCP, this can happen every time a computer reboots. To fix the problem, delete the appropriate lines out of the ~/.ssh/known_hosts files. ### Create Software RAID on Windows Wiki RAID 0 RAID 0 (also known as a stripe set or striped volume) splits ("stripes") data evenly across two or more disks, without parity information, redundancy, or fault tolerance. Since RAID 0 provides no fault tolerance or redundancy, the failure of one drive will cause the entire array to fail; as a result of having data striped across all disks, the failure will result in total data loss. This configuration is typically implemented having speed as the intended goal. RAID 0 is normally used to increase performance, although it can also be used as a way to create a large logical volume out of two or more physical disks. A RAID 0 setup can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if a 120 GB disk is striped together with a 320 GB disk, the size of the array will be 120 GB × 2 = 240 GB. The diagram in this section shows how the data is distributed into Ax stripes on two disks, with A1:A2 as the first stripe, A3:A4 as the second one, etc. Once the stripe size is defined during the creation of a RAID 0 array, it needs to be maintained at all times. 
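The striping layout described above (A1:A2 as the first stripe, A3:A4 as the second, and so on) amounts to round-robin distribution of logical blocks across the member disks. A minimal Python sketch, for illustration only, with hypothetical block names:

```python
def stripe(blocks, n_disks):
    """Distribute logical blocks round-robin across n_disks (RAID 0 layout)."""
    disks = [[] for _ in range(n_disks)]
    for i, block in enumerate(blocks):
        disks[i % n_disks].append(block)
    return disks

# Two-disk layout from the text: A1:A2 is the first stripe, A3:A4 the second.
print(stripe(["A1", "A2", "A3", "A4", "A5", "A6"], 2))
# [['A1', 'A3', 'A5'], ['A2', 'A4', 'A6']]
```

Reading a stripe back touches every disk once, which is why the drives can be accessed in parallel.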
Since the stripes are accessed in parallel, an n-drive RAID 0 array appears as a single large disk with a data rate n times higher than the single-disk rate. Performance A RAID 0 array of n drives provides data read and write transfer rates up to n times higher than the individual drive rates, but with no data redundancy. As a result, RAID 0 is primarily used in applications that require high performance and are able to tolerate lower reliability, such as in scientific computing or computer gaming. Some benchmarks of desktop applications show RAID 0 performance to be marginally better than a single drive. Another article examined these claims and concluded that "striping does not always increase performance (in certain situations it will actually be slower than a non-RAID setup), but in most situations it will yield a significant improvement in performance".[9] Synthetic benchmarks show different levels of performance improvements when multiple HDDs or SSDs are used in a RAID 0 setup, compared with single-drive performance. However, some synthetic benchmarks also show a drop in performance for the same comparison. RAID 1 RAID 1 consists of an exact copy (or mirror) of a set of data on two or more disks; a classic RAID 1 mirrored pair contains two disks. This configuration offers no parity, striping, or spanning of disk space across multiple disks, since the data is mirrored on all disks belonging to the array, and the array can only be as big as the smallest member disk. This layout is useful when read performance or reliability is more important than write performance or the resulting data storage capacity. The array will continue to operate so long as at least one member drive is operational. 
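To make the capacity rules concrete: a RAID 0 array holds the smallest member disk times the number of disks, while a RAID 1 mirror is only as big as its smallest member. A small sketch of the arithmetic, not tied to any real tooling:

```python
def raid0_capacity(disk_sizes_gb):
    # RAID 0: each disk contributes only as much space as the smallest member.
    return min(disk_sizes_gb) * len(disk_sizes_gb)

def raid1_capacity(disk_sizes_gb):
    # RAID 1: the mirror is only as big as the smallest member disk.
    return min(disk_sizes_gb)

# The 120 GB + 320 GB example from the RAID 0 section: 120 GB x 2 = 240 GB.
print(raid0_capacity([120, 320]))  # 240
print(raid1_capacity([120, 320]))  # 120
```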
Performance
Any read request can be serviced and handled by any drive in the array; thus, depending on the nature of the I/O load, random read performance of a RAID 1 array may equal up to the sum of each member's performance,[a] while the write performance remains at the level of a single disk. However, if disks with different speeds are used in a RAID 1 array, overall write performance is equal to the speed of the slowest disk. Synthetic benchmarks show varying levels of performance improvements when multiple HDDs or SSDs are used in a RAID 1 setup, compared with single-drive performance. However, some synthetic benchmarks also show a drop in performance for the same comparison.

Reference
Standard RAID levels

We need two disks to create a RAID array: Disk 1 and Disk 2. Before creating:

RAID 0
Open the Disk Management application, right-click Disk 1 → New Striped Volume... Select Disk 2 from the Available box and add it to the Selected box → Next. After doing that:

RAID 1
Open the Disk Management application, right-click Disk 1 → New Mirrored Volume... Select Disk 2 from the Available box and add it to the Selected box → Next. After doing that:

### Configuration Surge and ShadowSocks on iOS

Wiki
Surge is a web developer tool and proxy utility for iOS 9. This app is designed for developers and may require some level of professional knowledge to use.

Disclaimer: The SS module originally came from the author's website, which has since been deleted by the author. I backed it up from the author's website; please assess its security yourself. You can also use Surge's packet capture (Ethereal) to see if there is any abnormal traffic.

Other declarations: The conf configuration file and the ss module installation file only need to be downloaded once; I host the conf file and the module file on my blog.

After Surge is installed there is only a Default file. Touch the Config tab, select Download Configuration from URL, input the download URL https://xuri.me/surge/surge.conf, and touch OK. You can also import from iCloud or other apps.
You will be prompted that this configuration is invalid, because you need to change the server address, port, password and encryption method in the config file to your own. Touch Edit in Text Mode and edit your own server info. Modify the [PROXY] section: change the server address, port, password and encryption method. After modifying the config file, touch OK and start the service. That's it.

Update at 4 Dec 2015: Surge has not been available on the App Store since 4 December 2015; download Surge 1.1.0.ipa from Google Drive (4.5MB)

### Assessment of Open XML Library

Wiki
Office Open XML (also informally known as OOXML or OpenXML) is a zipped, XML-based file format developed by Microsoft for representing spreadsheets, charts, presentations and word processing documents. Because Excel files are based on Open XML, we can read and write Excel files directly. Here are some open-source Open XML libraries implemented in different languages.

PHP

PhpSpreadsheet
PhpSpreadsheet is a library written in pure PHP providing a set of classes that allow you to read from and write to different spreadsheet file formats, like Excel and LibreOffice Calc.
Official Website: phpspreadsheet.readthedocs.io
Project on GitHub: github.com/PHPOffice/PhpSpreadsheet

Python

jmcnamara/XlsxWriter
XlsxWriter is a Python module for writing files in the Excel 2007+ XLSX file format.
Official Website: xlsxwriter.readthedocs.io
Project on GitHub: github.com/jmcnamara/XlsxWriter

python-excel
Includes the xlrd, xlwt and xlutils modules. xlrd can read files and xlwt can write files.
Official Website: python-excel.org

OpenPyXL
A Python library to read/write Excel 2007 xlsx/xlsm files.
Official Website: openpyxl.readthedocs.org

Go

xuri/excelize
Excelize is a library written in pure Go providing a set of functions that allow you to write to and read from XLSX files. Supports reading and writing XLSX files generated by Microsoft Excel™ 2007 and later. Supports saving a file without losing the original charts of the XLSX. This library needs Go version 1.8 or later.
The full API docs can be seen using Go's built-in documentation tool, or online at godoc.org and the docs reference.

Project on GitHub: github.com/xuri/excelize

tealeg/xlsx

A Go library for reading and writing XLSX files.

Project on GitHub: github.com/tealeg/xlsx

psmithuk/xlsx

Create Office Open XML spreadsheet files in Go.

Project on GitHub: github.com/psmithuk/xlsx

Microsoft Excel API

MSDN Link: msdn.microsoft.com/en-us/library/fp179694.aspx

| Library | PhpSpreadsheet | XlsxWriter | python-excel | OpenPyXL | tealeg/xlsx | psmithuk/xlsx | Microsoft Excel API |
|---|---|---|---|---|---|---|---|
| Language | PHP | Python | Python | Python | Go | Go | COM |
| Read | √ | × | √ | √ | √ | × | √ |
| Write | √ | √ | √ | √ | √ | √ | √ |
| Modify | √ | × | × | ! May lose charts | ! Loses charts | × | √ |
| .xls | × | × | √ | × | × | √ | √ |
| .xlsx | √ | √ | ! Limited functionality | √ | √ | √ | √ |
| Large File | √ | √ | × | √ | √ | √ | × |
| Function | 4.0/5.0 Points | 4.0/5.0 Points | 2.0/5.0 Points | 3.0/5.0 Points | 4.0/5.0 Points | 3.5/5.0 Points | 5.5/5.0 Points |
| Speed | Fast | Fast | Fast | Fast | Fast | Fast | Slow |
| Operating System | Any | Any | Any | Any | Any | Any | Windows + Excel |

Applicable scenes:

- PhpSpreadsheet: create, read and modify .xlsx files; complex functions; large files; cross-platform
- XlsxWriter: create, read and modify .xlsx files; complex functions; large files; cross-platform
- python-excel: create and read .xlsx files; no complex functions; cross-platform
- OpenPyXL: modify .xlsx files; complex functions; large files; cross-platform
- tealeg/xlsx: create and read .xlsx files; large files; no complex functions; cross-platform
- psmithuk/xlsx: create .xlsx files; no complex functions; cross-platform
- Microsoft Excel API: create, read and modify .xlsx files; complex functions; small files only; works only on Windows

In some cases we want to generate an Excel file from an existing template. After assessing these Open XML libraries, I recommend using PHPExcel with some manual hacking. Sometimes PHPExcel loses some colors or styles of the workbook, which means the library drops some archived files when processing the zipped .xlsx package. We need to find out which files were lost by diffing the template .xlsx against the generated .xlsx, then use the ZipArchive class to write them back after $objWriter->save with some simple code:

$zip = new ZipArchive;
if ($zip->open('filename.xlsx') === TRUE) {
    $zip->addFile('mytheme1.xml', 'xl/theme/theme1.xml');
    // Some other lost files you find ...
    $zip->close();
    echo 'ok';
} else {
    echo 'err';
}

How do we find the diff files? We need to understand the structure of Open XML for Excel.
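Since an .xlsx workbook is just a ZIP archive of XML parts, the lost parts can be found by comparing the entry lists of the two archives. The following Go sketch illustrates the idea; the two in-memory archives (and the part names in them) are stand-ins for the real template and generated files, which you would open with zip.OpenReader instead.

```go
// A minimal sketch of the diff idea: an .xlsx file is an ordinary ZIP
// archive, so lost parts show up as entries present in the template
// but absent from the generated workbook. The archives below are
// built in memory purely for illustration.
package main

import (
	"archive/zip"
	"bytes"
	"fmt"
	"log"
)

// entryNames returns the set of file names inside a ZIP archive.
func entryNames(data []byte) (map[string]bool, error) {
	r, err := zip.NewReader(bytes.NewReader(data), int64(len(data)))
	if err != nil {
		return nil, err
	}
	names := make(map[string]bool)
	for _, f := range r.File {
		names[f.Name] = true
	}
	return names, nil
}

// makeZip builds an in-memory archive containing the given entry names.
func makeZip(names []string) []byte {
	var buf bytes.Buffer
	w := zip.NewWriter(&buf)
	for _, n := range names {
		if _, err := w.Create(n); err != nil {
			log.Fatal(err)
		}
	}
	w.Close()
	return buf.Bytes()
}

func main() {
	template := makeZip([]string{"xl/workbook.xml", "xl/theme/theme1.xml"})
	generated := makeZip([]string{"xl/workbook.xml"})

	tpl, _ := entryNames(template)
	gen, _ := entryNames(generated)
	for name := range tpl {
		if !gen[name] {
			fmt.Println("missing part:", name)
		}
	}
}
```

In real use, replace the makeZip calls with zip.OpenReader on the template .xlsx and the generated .xlsx, and compare their entry lists the same way.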
Save UTF-8 characters to file with fputcsv in PHP:

$file = fopen($filepath, 'w');
fwrite($file, chr(0xEF).chr(0xBB).chr(0xBF)); // UTF-8 BOM
fputcsv($file, $row);
fclose($file);

Related Resources

Tools for Working with Excel and Python
Excelize - Go Library for Reading and Writing XLSX Files

### RecSys Related Algorithm - SVD

Introduction

Suppose we want to predict user A's rating of book X, but we only have A's ratings for some other books and user B's rating of book X. How can we predict A's rating of book X? The easiest way is simply to predict the average, but we can never be very accurate that way.

SVD (Singular Value Decomposition) starts from the existing ratings, analyzes how much each rater cares about each factor, and finally derives the predicted ratings from that analysis. In the example above, a book has many factors, such as the cover, the author, the story, the price, and so on.

SVD algorithms abstract a rating matrix \(R\) with \(n\) rows and \(m\) columns, where \(R[u][i]\) is the rating of item \(i\) by user \(u\). It can be decomposed into a user factors matrix \(P\) with \(n\) rows and \(f\) columns (\(P[u][k]\) is user \(u\)'s weight on factor \(k\)) and an object factors matrix \(Q\) with \(m\) rows and \(f\) columns (\(Q[i][k]\) is item \(i\)'s weight on factor \(k\)). This is represented by the formula:

\[R=PQ^\mathrm{T}\]

Here is an example decomposed into two matrices. A larger value in \(P\) means the user prefers that factor more; a larger value in \(Q\) means the book exhibits that factor more strongly. After the decomposition we can predict the missing rating.

| Rank Matrix R | Book X | Book Y | Book Z |
|---|---|---|---|
| User A | 6 | 3 | ? |
| User B | 3 | 2 | 6 |

The rating matrix \(R\) decomposes into the user factors matrix \(P\) and the object factors matrix \(Q\):

| User Factors Matrix P | Computer | Literature |
|---|---|---|
| User A | 1 | 0.2 |
| User B | 0.3 | 1 |

| Object Factors Matrix Q | Computer | Literature |
|---|---|---|
| Book X | 6 | 0 |
| Book Y | 3 | 3 |
| Book Z | 0 | 6 |

In reality, a rating reflects not only how much the user likes the book, but also whether the rater is strict, and the ratings that already exist: some people give a high rating simply because the book is already rated highly. Since how much the user likes the book is already captured by the factors, we add two new bias terms to record those other parts and improve the accuracy of the model. The improved formula looks like this:

\[R=OverallMean+biasU+biasI+PQ^\mathrm{T}\]

\(OverallMean\) is the average rating over all books, \(biasU\) is the deviation of the user's ratings from \(OverallMean\), and \(biasI\) is the deviation of the book's ratings from \(OverallMean\). The meanings of \(P\) and \(Q\) are unchanged, and every term except \(OverallMean\) is a matrix. After the decomposition, suppose we want to predict user \(u\)'s rating of book \(i\):

\[\hat{r}_{u,i}=OverallMean+b_u+b_i+p_uq_i^\mathrm{T}\]

SVD Implementation

The two decomposed matrices are obtained by learning. SVD learns every parameter except \(OverallMean\) by stochastic gradient descent.
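To make the learning step concrete, here is a minimal, self-contained Go sketch of the biased model \(\hat{r}_{u,i}=OverallMean+b_u+b_i+p_uq_i^\mathrm{T}\) trained by stochastic gradient descent on the tiny book example above. The learning rate, regularization factor, epoch count and factor count are made-up illustrative values, not settings from the referenced papers.

```go
package main

import (
	"fmt"
	"math/rand"
)

type rating struct {
	user, item int
	value      float64
}

// trainSVD learns the biases and factor vectors by stochastic gradient
// descent and returns a prediction function. Hyper-parameters are
// illustrative choices, not tuned values.
func trainSVD(train []rating, users, items int) func(u, i int) float64 {
	const (
		factors = 2
		lr      = 0.01 // learning rate
		reg     = 0.02 // regularization factor (lambda)
		epochs  = 2000
	)
	// OverallMean: average of the known ratings.
	mean := 0.0
	for _, r := range train {
		mean += r.value
	}
	mean /= float64(len(train))

	rng := rand.New(rand.NewSource(1)) // fixed seed for reproducibility
	bU := make([]float64, users)
	bI := make([]float64, items)
	p := make([][]float64, users)
	q := make([][]float64, items)
	for u := range p { // factors == 2 small random initial values
		p[u] = []float64{rng.Float64() * 0.1, rng.Float64() * 0.1}
	}
	for i := range q {
		q[i] = []float64{rng.Float64() * 0.1, rng.Float64() * 0.1}
	}
	predict := func(u, i int) float64 {
		dot := 0.0
		for k := 0; k < factors; k++ {
			dot += p[u][k] * q[i][k]
		}
		return mean + bU[u] + bI[i] + dot
	}
	for epoch := 0; epoch < epochs; epoch++ {
		for _, r := range train {
			e := r.value - predict(r.user, r.item) // prediction error
			bU[r.user] += lr * (e - reg*bU[r.user])
			bI[r.item] += lr * (e - reg*bI[r.item])
			for k := 0; k < factors; k++ {
				pu, qi := p[r.user][k], q[r.item][k]
				p[r.user][k] += lr * (e*qi - reg*pu)
				q[r.item][k] += lr * (e*pu - reg*qi)
			}
		}
	}
	return predict
}

func main() {
	// Known ratings from the example: A rated X=6, Y=3; B rated X=3, Y=2, Z=6.
	train := []rating{{0, 0, 6}, {0, 1, 3}, {1, 0, 3}, {1, 1, 2}, {1, 2, 6}}
	predict := trainSVD(train, 2, 3)
	fmt.Printf("predicted rating of user A for book Z: %.2f\n", predict(0, 2))
}
```

On this toy data the known ratings should be recovered closely, and the missing rating for user A and book Z falls out of the learned biases and factors.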
The learning process can be summarized as follows: initialize each parameter, use the parameters to predict, compare the predictions with the known ratings, and finally adjust each parameter based on the comparison. The parameters are adjusted so that the following formula is minimized:

\[\sum_{(u,i)\in\alpha}\{(r_{u,i}-OverallMean-b_u-b_i-p_uq_i^\mathrm{T})^2+\lambda(b_u^2+b_i^2+\begin{Vmatrix}p_u\end{Vmatrix}^2+\begin{Vmatrix}q_i\end{Vmatrix}^2)\}\]

\(\alpha\) is the set of all training samples. The first part inside the braces is the deviation of the current prediction from the actual value; the second part is a regularization term that prevents overfitting. That is the main idea of SVD.

Reference

Jim Lambers - The SVD Algorithm
Chih-Chao Ma - A Guide to Singular Value Decomposition for Collaborative Filtering, Department of Computer Science, National Taiwan University, Taipei, Taiwan
Netflix Update: Try This at Home
Matrix Factorization Techniques for Recommender Systems

### Go Binary Tree Traversal

      a
     / \
    b   c
   / \
  d   f
 /   /
e   g

Preorder: a → b → d → e → f → g → c
Inorder: e → d → b → g → f → a → c
Postorder: e → d → g → f → b → c → a

Traverse a binary tree recursively

package main

import "fmt"

type Node struct {
    data        string
    left, right *Node
}

func main() {
    nodeG := Node{data: "g", left: nil, right: nil}
    nodeF := Node{data: "f", left: &nodeG, right: nil}
    nodeE := Node{data: "e", left: nil, right: nil}
    nodeD := Node{data: "d", left: &nodeE, right: nil}
    nodeC := Node{data: "c", left: nil, right: nil}
    nodeB := Node{data: "b", left: &nodeD, right: &nodeF}
    nodeA := Node{data: "a", left: &nodeB, right: &nodeC}
    fmt.Println("Preorder")
    nodeA.PrintPre()
    fmt.Println("Inorder")
    nodeA.PrintIn()
    fmt.Println("Postorder")
    nodeA.PrintPost()
}

// Preorder (Root, Left, Right)
func (root *Node) PrintPre() {
    fmt.Println(root.data)
    if root.left != nil {
        root.left.PrintPre()
    }
    if root.right != nil {
        root.right.PrintPre()
    }
}
// Inorder (Left, Root, Right)
func (root *Node) PrintIn() {
    if root.left != nil {
        root.left.PrintIn()
    }
    fmt.Println(root.data)
    if root.right != nil {
        root.right.PrintIn()
    }
}

// Postorder (Left, Right, Root)
func (root *Node) PrintPost() {
    if root.left != nil {
        root.left.PrintPost()
    }
    if root.right != nil {
        root.right.PrintPost()
    }
    fmt.Println(root.data)
}

Traverse a binary tree without recursion

type seqStack struct {
    data [100]*Node
    tag  [100]int
    top  int // array index
}

// Preorder traversal without recursion
func (node *Node) preOrderLoop() (result []string) {
    var s seqStack
    s.top = -1 // empty stack
    if node == nil {
        panic("no data here")
    }
    for node != nil || s.top != -1 {
        for node != nil {
            result = append(result, node.data)
            s.top++
            s.data[s.top] = node
            node = node.left
        }
        s.top--
        node = s.data[s.top+1]
        node = node.right
    }
    return
}

// Inorder traversal without recursion
func (node *Node) inOrderLoop() (result []string) {
    var s seqStack
    s.top = -1 // empty stack
    if node == nil {
        panic("no data here")
    }
    for node != nil || s.top != -1 {
        for node != nil {
            s.top++
            s.data[s.top] = node
            node = node.left
        }
        s.top--
        node = s.data[s.top+1]
        result = append(result, node.data)
        node = node.right
    }
    return
}

// Postorder traversal without recursion
func (node *Node) postOrderLoop() (result []string) {
    var s seqStack
    s.top = -1 // empty stack
    if node == nil {
        panic("no data here")
    }
    for node != nil || s.top != -1 {
        for node != nil {
            s.top++
            s.data[s.top] = node
            s.tag[s.top] = 0
            node = node.left
        }
        if s.tag[s.top] == 0 {
            node = s.data[s.top]
            s.tag[s.top] = 1
            node = node.right
        } else {
            for s.tag[s.top] == 1 {
                s.top--
                node = s.data[s.top+1]
                result = append(result, node.data)
                if s.top < 0 {
                    break
                }
            }
            node = nil
        }
    }
    return
}

### Enable IPv6 for Server

We can check the current state of the IPv6 address configuration by typing:

$ ip -6 addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 2604:a880:400:d0::218:2001/64 scope global
       valid_lft forever
       preferred_lft forever
    inet6 fe80::601:c0ff:fea1:4101/64 scope link
       valid_lft forever preferred_lft forever

The command to set the IPv6 address is:

$ sudo ip -6 addr add 2604:a880:400:d0::218:2001/64 dev eth0

Setting the gateway:

$ sudo ip -6 route add default via 2604:a880:400:d0::1 dev eth0

Your IPv6 connection should now be configured. It will stay configured only until the next boot; continue on if you want to make these changes persistent.

Making the IPv6 Configuration Persistent

If you are on Ubuntu or Debian, you will need to edit the /etc/network/interfaces file. Open it with sudo privileges in your text editor:

$ sudo vim /etc/network/interfaces

Inside, you will see a section for each of the networks you have already configured. First, look for the dns-nameservers parameter and remove that line from the IPv4 section; we will add it to the IPv6 section instead. Then add a new section for the IPv6 address. It should look like this:

iface eth0 inet6 static
    address 2604:a880:400:d0::218:2001
    netmask 64
    gateway 2604:a880:400:d0::1
    autoconf 0
    dns-nameservers 2001:4860:4860::8844 2001:4860:4860::8888 209.244.0.3

When you are finished, save and close the file. Your IPv6 connection should now be brought up at boot.

Check the IPv6 settings of your system:

$ sudo sysctl -a | grep ipv6

If we find

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

it means the system has IPv6 disabled. Run sudo sysctl -w net.ipv6.conf.eth0.disable_ipv6=0 to enable IPv6 on eth0 temporarily before setting the IPv6 address. To enable IPv6 permanently, edit the /etc/sysctl.conf file and set the IPv6 section like the following:

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0

IPv6 should now be up and running on the server.
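As a small programmatic sanity check (not part of the original setup steps), the addresses used in this post can be validated with Go's net package: net.ParseIP returns a non-nil IP whose To4() is nil for a well-formed IPv6 address.

```go
package main

import (
	"fmt"
	"net"
)

// isIPv6 reports whether s parses as an IPv6 (not IPv4) address.
func isIPv6(s string) bool {
	ip := net.ParseIP(s)
	return ip != nil && ip.To4() == nil
}

func main() {
	for _, addr := range []string{
		"2604:a880:400:d0::218:2001", // static address from this post
		"2604:a880:400:d0::1",        // gateway
		"2001:4860:4860::8844",       // DNS server
	} {
		fmt.Println(addr, "is IPv6:", isIPv6(addr))
	}
}
```

A check like this can catch a typo in an address before it is written into /etc/network/interfaces.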
### Using Corkscrew Tunnel SSH over HTTP

The Chinese national firewall (GFW) has been blocking some free SSL certificates (e.g. StartSSL) since Jun 20, 2015, and my blog is now blocked in some parts of China. The blocking seems to work by detecting the intermediate issuer field of the SSL certificate; for example, in a StartSSL certificate: CN = StartCom Class 1 Primary Intermediate Server CA. I can't visit my site over an SSL connection and can't connect to my server, so I looked for a tool that establishes an HTTP proxy for SSH.

Corkscrew

Corkscrew is a tool for tunneling SSH through HTTP proxies.

Install Corkscrew via Homebrew:

$ brew install corkscrew

Modify the /etc/ssh/ssh_config or ~/.ssh/config file and add the following to it:

Host *
    ProxyCommand /usr/local/bin/corkscrew [host] [port] %h %p

Replace [host] and [port] with your HTTP proxy server, and set the file permission to 644 as plain text. If the HTTP proxy requires authentication, we should also save the username and password to a file (~/.ssh/proxyauth) and tell corkscrew to use it. Format of the auth file:

username:password

Modify the /etc/ssh/ssh_config file like this:

Host *
    ProxyCommand /usr/local/bin/corkscrew [host] [port] %h %p ~/.ssh/proxyauth

### Monitoring Processes with Supervisord

Wiki: Supervisor is a client/server system that allows its users to control a number of processes on UNIX-like operating systems.

Install via easy_install or pip on Debian; using apt-get is not recommended. easy_install is part of python-setuptools, so first install python-setuptools:

$ sudo apt-get install python-setuptools

Then we can use easy_install to install from PyPI (you need to use sudo with easy_install).
$ sudo easy_install supervisor

Or there is another module, python-pip, which will also help in installing from PyPI:

$ sudo pip install supervisor

Generate the config file:

# echo_supervisord_conf > /etc/supervisord.conf

Uncommenting the [inet_http_server] section will enable the web interface:

[inet_http_server]
port = 127.0.0.1:9001
username = user
password = 123

We can edit /etc/supervisord.conf to include our own config files like this:

[include]
files=/etc/supervisor/conf.d/*.conf

Start supervisord:

$ sudo supervisord

Start supervisorctl:

$ sudo supervisorctl

### Resolve Setting Locale Failed on Linux

Perl warning Setting locale failed in Debian

Warning Message

The warning message during the installation of Perl applications appears as follows:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LC_CTYPE = "UTF-8",
    LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory

Generating locales

Missing locales are generated with locale-gen:

$ locale-gen en_US.UTF-8

Alternatively, a locale file can be created manually with localedef:

$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8

Setting Locale Settings

The locale settings can be set (to en_US.UTF-8 in this example) as follows:

$ export LANGUAGE=en_US.UTF-8
$ export LANG=en_US.UTF-8
$ export LC_ALL=en_US.UTF-8
$ locale-gen en_US.UTF-8
$ sudo dpkg-reconfigure locales

The dpkg-reconfigure locales command will open a dialog under Debian for selecting the desired locale. This dialog does not appear under Ubuntu.
Locale warning in RedHat or CentOS

warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory

Edit the environment file:

$ sudo vim /etc/environment

and add these lines:

LANG=en_US.utf-8
LC_ALL=en_US.utf-8

### ShadowSocks, RedSocks2 and ChinaDNS on OpenWrt

In this post, I will set up ShadowSocks, RedSocks2 and ChinaDNS on OpenWrt, and install PandoraBox on a Xiaomi MiWiFi Mini.

ShadowSocks

A secure SOCKS5 proxy designed to protect your Internet traffic. For server-side setup, see the ShadowSocks Setup Guide.

Official Website: shadowsocks.org

OpenWrt

OpenWrt is described as a Linux distribution for embedded devices. Instead of trying to create a single, static firmware, OpenWrt provides a fully writable filesystem with package management. This frees you from the application selection and configuration provided by the vendor and allows you to customize the device through the use of packages to suit any application. For developers, OpenWrt is a framework to build an application without having to build a complete firmware around it; for users this means the ability for full customization, to use the device in ways never envisioned.

Official Website: openwrt.org

PandoraBox

A firmware built on the OpenWrt buildsystem, with a web interface powered by LuCI. The initial reason for the LuCI project was the absence of a free, clean, extensible and easily maintainable web user interface for embedded devices. While most similar configuration interfaces make heavy use of shell scripting, LuCI uses the Lua programming language and splits the interface into logical parts like models and views, using object-oriented libraries and templating. That ensures higher performance, smaller installation size, faster runtimes and, even more important, better maintainability.

Official Website: luci.subsignal.org

RedSocks2

A transparent redirector of any TCP connection to a proxy (customized for anti-GFW), a modified version of the original redsocks.
The name was changed to REDSOCKS2 to distinguish it from the original redsocks. This variant is useful for anti-GFW (Great Fire Wall) and contains several new features besides many bug fixes to the original redsocks.

Project on GitHub: github.com/semigodking/redsocks

ChinaDNS

ChinaDNS automatically queries local DNS servers to resolve Chinese domains and foreign DNS servers to resolve foreign domains. It is smart enough to work with only a Chinese IP range file, which doesn't change often.

Project on GitHub: github.com/shadowsocks/ChinaDNS

Hardware information and software version in this post:

Xiaomi MiWiFi Router
Board: Xiaomi mini
Board CPU: MTK MT7620A (1 Core 580MHz)

Router Setting

Plug in power and an Ethernet cable to one of the Ethernet ports. Direct your browser to 192.168.31.1 or miwifi.com. The first screen will appear; press the blue button (开始, "Start") to get to the next screen. The next screen sets up the wifi network name and password. Note that the wifi password is also used later for access to the browser page (192.168.31.1). Once you have set this up, plug in the WAN cable for access to the internet. Internet access also enables Chrome translation, and you can confirm that the router works properly before proceeding.

The next few steps involve first loading the development version of the firmware, then loading a firmware version which provides SSH access, with the last step being loading the OpenWrt firmware onto the router. Apparently loading the development version of the firmware voids the warranty, so beware.

The development version can be obtained from www1.miwifi.com/miwifi_download.html. Point to the mini router to get the correct firmware/software. At the same time you can also download the PC client (although I don't recommend this) and one of either the iOS or Android app. You will NEED one of these to complete the steps. The development ROM that I downloaded was named miwifi_r1cm_firmware_4156a_2.5.22.bin.
Apparently the development ROM version matters, because some ROMs may not allow the next step of flashing the SSH firmware. Older ROMs can be found at www.miui.com/thread-1776173-1-1.html. Try to use a development ROM that has been proven to allow the next step of the process.

The above downloads the Chinese versions of the PC client and iOS/Android apps; to get the translated software follow these links. The iOS/Android app is necessary because it is used to link the hardware to the miwifi account that you are about to set up. Without the link, the SSH version of the ROM (and the SSH password) will not be available.

Create and activate a new account at Xiaomi account.xiaomi.com/pass/register. When you have set up the account and logged in, remember the User ID. You will note that there is no device associated with the account. The association has to be done via the iOS/Android app. Unfortunately there does not seem to be a way to do that via the PC client, which is why I do not recommend installing it.

Using your iOS or Android device, open the app and log in using the ID and password from the step above. The account then recognises the Xiaomi mini router device. This is required for the SSH firmware.

To get the SSH firmware, proceed to www1.miwifi.com/miwifi_open.html. Halfway down the page there is an SSH button. Click on the button and sign into the Xiaomi account when requested. After signing in you will be presented with a page to download the SSH firmware. Also note the SSH password for the root user in the middle of the page. The SSH firmware will have the name miwifi_ssh.bin.

Now that you have all the firmware, it is just a matter of flashing in the correct sequence. First off, get an empty USB drive and format it to FAT or FAT32. Copy the development firmware onto the USB drive. Flashing the development firmware and the SSH firmware both follow the same procedure: pull the power from the router, then plug the USB drive into the router.
Press the reset button (in the hole to the left of the USB drive). While holding down the reset button, plug in the power. The orange light on the front of the router will remain steady for a short while before it starts flashing. Once the orange light starts flashing, release the reset button and sit back until the light on the front of the router turns blue. If there are indicator lights on your USB drive, they will flash as the drive is being read. If the orange light does not flash, try another USB drive and confirm that the drive is formatted to FAT or FAT32. The router can be set up again between each flash just to confirm that it is working, but I generally just flash the firmwares one after the other. The SSH firmware does not have to be renamed for the flash; just keep the name miwifi_ssh.bin. Once the SSH firmware has been successfully flashed, confirm SSH access via port 22, username root, and the password noted on the SSH firmware download page.

The OpenWrt firmware will need to be downloaded from downloads.openwrt.org.cn/PandoraBox/Xiaomi-Mini-R1CM/testing. I used the latest version, PandoraBox-ralink-mt7620-xiaomi-mini-squashfs-sysupgrade-r1216-20150721.bin.

Transfer PandoraBox-ralink-mt7620-xiaomi-mini-squashfs-sysupgrade-r1216-20150721.bin to the /tmp directory in the root of the router drive using 192.168.31.1, port 22:

$ scp PandoraBox-ralink-* root@192.168.31.1:/tmp

Connect to the router using 192.168.31.1, port 22, via SSH or PuTTY (Microsoft Windows). Change to the /tmp directory and confirm that the PandoraBox-ralink-mt7620-xiaomi-mini-squashfs-sysupgrade-r1216-20150721.bin file is listed:

# ls -l /tmp

Finally, flash the OpenWrt firmware:

# mtd -r write /tmp/PandoraBox-ralink-* OS1

or

# mtd -r write /tmp/PandoraBox-ralink-* firmware

The router will reboot once it has been successfully flashed. Re-connect to the Wifi AP named PandoraBox_* and access the OpenWrt router via 192.168.1.1 in your browser. The default account username will be root and the password admin.
BusyBox v1.22.1 (2015-07-09 13:52:12 CST) built-in shell (ash)
Enter 'help' for a list of built-in commands.

 _______________________________________________________________
|  ____                 _                 ____                  |
| |  _ \ __ _ _ __   __| | ___  _ __ __ _| __ )  _____  __     |
| | | |_) / _` | '_ \ / _` |/ _ \| '__/ _` |  _ \ / _ \ \/ /  |
| | |  __/ (_| | | | | (_| | (_) | | | (_| | |_) | (_) >  <   |
| | |_|   \__,_|_| |_|\__,_|\___/|_|  \__,_|____/ \___/_/\_\  |
|                                                               |
|                    PandoraBox SDK Platform                    |
|                     The Core of SmartRouter                   |
|         Copyright 2013-2015 D-Team Technology Co.,Ltd.SZ      |
|                  http://www.pandorabox.org.cn                 |
|_______________________________________________________________|
Base on OpenWrt BARRIER BREAKER (14.09, r1216)

[root@PandoraBox_57B7:/root]#

Setup opkg Source

Back up the config file:

# mv /etc/opkg.conf /etc/opkg.conf.bak

Edit the opkg config file:

# vim /etc/opkg.conf

Add the following source configuration:

dest root /
dest ram /tmp
lists_dir ext /etc/opkg-lists
option overlay_root /overlay
# notice
dest usb /mnt/sdb1/opkg
arch all 100
arch ramips_24kec 200
arch ramips 300
arch mips 400
arch unkown 500
src/gz barrier_breaker_base http://downloads.openwrt.org/barrier_breaker/14.07/ramips/mt7620a/packages/base
src/gz barrier_breaker_luci http://downloads.openwrt.org/barrier_breaker/14.07/ramips/mt7620a/packages/luci
src/gz barrier_breaker_management http://downloads.openwrt.org/barrier_breaker/14.07/ramips/mt7620a/packages/management
src/gz barrier_breaker_oldpackages http://downloads.openwrt.org/barrier_breaker/14.07/ramips/mt7620a/packages/oldpackages
src/gz barrier_breaker_packages http://downloads.openwrt.org/barrier_breaker/14.07/ramips/mt7620a/packages/packages
src/gz barrier_breaker_routing http://downloads.openwrt.org/barrier_breaker/14.07/ramips/mt7620a/packages/routing
src/gz barrier_breaker_telephony http://downloads.openwrt.org/barrier_breaker/14.07/ramips/mt7620a/packages/telephony
src/gz openwrt_dist http://openwrt-dist.thankgfw.ml/releases/ramips/packages
src/gz openwrt_dist_luci http://openwrt-dist.thankgfw.ml/releases/luci/packages

Save, then update the source list and install curl, wget, libpolarssl and redsocks2:

# opkg update
# opkg install curl wget libpolarssl redsocks2

Download and install the following packages with curl or wget, or download them from the website:

shadowsocks-libev: sourceforge.net/projects/openwrt-dist/files/shadowsocks-libev
chinadns-openwrt: sourceforge.net/projects/openwrt-dist/files/chinadns
luci-app-chinadns: sourceforge.net/projects/openwrt-dist/files/luci-app/chinadns
redsocks2: sourceforge.net/projects/openwrt-dist/files/redsocks2
luci-app-redsocks2: sourceforge.net/projects/openwrt-dist/files/luci-app/redsocks2

# cd /tmp
# wget http://jaist.dl.sourceforge.net/project/openwrt-dist/shadowsocks-libev/2.1.4-87ec497/ramips/shadowsocks-libev-spec_2.1.4-1_ramips_24kec.ipk
# wget http://jaist.dl.sourceforge.net/project/openwrt-dist/chinadns/1.3.2-21cb752/ChinaDNS_1.3.2-1_ramips_24kec.ipk
# wget http://iweb.dl.sourceforge.net/project/openwrt-dist/luci-app/chinadns/luci-app-chinadns_1.3.4-1_all.ipk
# wget http://jaist.dl.sourceforge.net/project/openwrt-dist/redsocks2/0.60-769a5b0/redsocks2_0.60-2_ramips_24kec.ipk
# wget http://jaist.dl.sourceforge.net/project/openwrt-dist/luci-app/redsocks2/luci-app-redsocks2_1.3.0-1_all.ipk
# opkg install shadowsocks-libev-spec*
# opkg install ChinaDNS*
# opkg install luci-app-chinadns*
# opkg install redsocks2*
# opkg install luci-app-redsocks2*

Setup ShadowSocks Client

Configure the ShadowSocks client config file /etc/shadowsocks.json like this:

{
    "server" : "X.X.X.X",
    "server_port" : 8388,
    "local" : "0.0.0.0",
    "local_port" : 1080,
    "password" : "password",
    "timeout" : 60,
    "method" : "rc4-md5"
}

Enable and restart the ShadowSocks service:

# /etc/init.d/shadowsocks enable
# /etc/init.d/shadowsocks start

Set up the UDP relay on LuCI → Services → ShadowSocks: Relay Mode disabled, UDP Forward enabled, UDP Local Port 1053, Forwarding Tunnel 8.8.8.8:53, then click the Save & Apply button.
Setup Redsocks2

On LuCI → Services → Redsocks2, configure the params as follows and click the Save & Apply button:

Enable : Checked
Local Port : 12345
Proxy Type : SOCKS5
Proxy IP : 127.0.0.1
Proxy Port : 1080
Auto Proxy : Enable
Wait Timeout : 5

The Redsocks2 config file is /etc/init.d/redsocks2. If we want to allow only port 80 and 443 connections through the redsocks2 service, replace

-A REDSOCKS2 -p tcp -j REDIRECT --to-ports $local_port

with

-A REDSOCKS2 -p tcp --dport 80 -j REDIRECT --to-ports $local_port
-A REDSOCKS2 -p tcp --dport 443 -j REDIRECT --to-ports $local_port

in the config file, and restart the service:

# /etc/init.d/redsocks2 restart

Setup ChinaDNS

On LuCI → Services → ChinaDNS, configure the params as follows:

Enable : Checked
Enable Bidirectional Filter : Checked
Local Port : 5353
CHNRoute File : /etc/chinadns_chnroute.txt
Upstream Servers : 114.114.114.114,8.8.8.8

Click the Save & Apply button. If the service is not running, run the chinadns command manually and check the config file /etc/init.d/chinadns.

/etc/init.d/chinadns enable
/etc/init.d/chinadns start

Setup DHCP and DNS

On LuCI → Network → DHCP and DNS, configure the params as follows and click the Save & Apply button:

General settings
DNS forwardings : 127.0.0.1#5353

Resolv and Hosts Files
Ignore resolve file : Checked

### Enabling HTTP/2 and CHACHA20_POLY1305 on Nginx

HTTP/2

HTTP/2 (originally named HTTP/2.0) is the second major version of the HTTP network protocol used by the World Wide Web. It is based on SPDY. HTTP/2 was developed by the Hypertext Transfer Protocol working group (httpbis, where bis means "repeat" or "twice") of the Internet Engineering Task Force. HTTP/2 is the first new version of HTTP since HTTP/1.1, which was standardized in RFC 2068 in 1997. The working group presented HTTP/2 to the IESG for consideration as a Proposed Standard in December 2014, and the IESG approved it for publication as a Proposed Standard on February 17, 2015. The HTTP/2 specification was published as RFC 7540 in May 2015.
CHACHA20_POLY1305

Existing TLS [RFC5246] cipher suites either suffer from cryptographic weaknesses (RC4), major implementation pitfalls (CBC mode ciphers) or are difficult to implement effectively in software (AES-GCM). In order to improve the state of software TLS implementations, this memo specifies cipher suites that can be fast and secure when implemented in software without sacrificing key agility.

ChaCha20 [chacha] is a stream cipher developed by D. J. Bernstein. It is a refinement of Salsa20 and was used as the core of the SHA-3 finalist, BLAKE. ChaCha20 maps sixteen 32-bit input words to 64 output bytes. By convention, 8 of the input words consist of a 256-bit key, 4 are constants and the remaining four are a block counter. The output bytes are XORed with the plaintext to produce ciphertext.

See more at ChaCha20 and Poly1305 based Cipher Suites for TLS draft-agl-tls-chacha20poly1305-01.

As of version 1.9.5, Nginx supports the full HTTP/2 spec. So with a few minutes of learning the config (it really is easy), you can not only use the latest web standards, it's EXTREMELY performant. The install process is very simple, with only one minor gotcha.

First off, add the apt keys:

$ wget http://nginx.org/packages/keys/nginx_signing.key
$ cat nginx_signing.key | sudo apt-key add -

Next add the following lines to /etc/apt/sources.list if you're on Ubuntu:

deb http://nginx.org/packages/mainline/ubuntu/ wily nginx
deb-src http://nginx.org/packages/mainline/ubuntu/ wily nginx

Update the source cache and install Nginx:

$ sudo apt-get update
$ sudo apt-get purge nginx nginx-core nginx-common
$ sudo apt-get install nginx

Configuration

server {
    listen 443 ssl http2;
    server_name example.com;
    # ...
Enable CHACHA20_POLY1305 encryption:

ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:HIGH:MEDIUM:!MD5:!aNULL:!EDH:!RC4:!DSS;

See more: Enabling SPDY and HSTS on Nginx

### Elasticsearch ELK Stack Installation

Logstash

Logstash is a tool for receiving, processing and outputting logs. All kinds of logs: system logs, webserver logs, error logs, application logs, and just about anything you can throw at it. Sounds great, eh?

Logstash provides a powerful pipeline for storing, querying, and analyzing your logs. When using Elasticsearch as a backend data store and Kibana as a front-end reporting tool, Logstash acts as the workhorse. It includes an arsenal of built-in inputs, filters, codecs, and outputs, enabling you to harness some powerful functionality with a small amount of effort.

Elasticsearch

Elasticsearch is a search server based on Lucene. It provides a distributed, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. Elasticsearch is developed in Java and is released as open source under the terms of the Apache License. Elasticsearch is the second most popular enterprise search engine. It can be used to search all kinds of documents. It provides scalable search, has near real-time search, and supports multitenancy.

Kibana

Kibana is an open source analytics and visualization platform designed to work with Elasticsearch. You use Kibana to search, view, and interact with data stored in Elasticsearch indices. You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps. Kibana makes it easy to understand large volumes of data. Its simple, browser-based interface enables you to quickly create and share dynamic dashboards that display changes to Elasticsearch queries in real time.
I have installed ELK, with the main components Elasticsearch 1.7.0, Logstash 1.5.4, and Kibana 4.1.1, on Ubuntu 14.04 LTS Server Edition. I store the Nginx access log in Elasticsearch with Logstash and set up a browser-based dynamic interface with Kibana. Install Java 8 Elasticsearch and Logstash require Java, so we will install that now. We will install Oracle Java 8 (update 20 or later) because that is what Elasticsearch recommends. It should, however, work fine with OpenJDK, if we decide to go that route. Method 1 Add the Oracle Java PPA to apt: $ sudo add-apt-repository -y ppa:webupd8team/java Update the apt package database: $ sudo apt-get update Install the latest stable version of Oracle Java 8 with this command (and accept the license agreement that pops up): $ sudo apt-get -y install oracle-java8-installer Now that Java 8 is installed, let's install Elasticsearch. Method 2 Download Java SE from the Oracle official website. I have downloaded jdk-8u60-linux-x64.tar.gz. Create directory $ sudo mkdir /usr/lib/jvm Unzip to install directory $ sudo tar -zxvf jdk-8* -C /usr/lib/jvm Setup environment variables $ sudo vim ~/.bashrc Append the following code # Set Oracle JDK environment export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_60 export JRE_HOME=${JAVA_HOME}/jre export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib export PATH=${JAVA_HOME}/bin:$PATH Set the default JDK version of the system $ sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.8.0_60/bin/java 300 $ sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.8.0_60/bin/javac 300 $ sudo update-alternatives --install /usr/bin/jar jar /usr/lib/jvm/jdk1.8.0_60/bin/jar 300 $ sudo update-alternatives --install /usr/bin/javah javah /usr/lib/jvm/jdk1.8.0_60/bin/javah 300 $ sudo update-alternatives --install /usr/bin/javap javap /usr/lib/jvm/jdk1.8.0_60/bin/javap 300 $ sudo update-alternatives --config java Test the installation $ java -version Install Elasticsearch To install Elasticsearch, please refer to the
post Elasticsearch Cluster on Ubuntu. After installing, edit the configuration $ sudo vim /etc/elasticsearch/elasticsearch.yml Restrict outside access to the Elasticsearch instance (port 9200) so outsiders can't read the data or shut down the Elasticsearch cluster through the HTTP API. Find the line that specifies network.host, uncomment it, and replace its value with the server's IP address so it looks like this: network.host: 192.168.171.129 Save and exit elasticsearch.yml. Now restart Elasticsearch: $ sudo service elasticsearch restart Then run the following command to start Elasticsearch on boot up: $ sudo update-rc.d elasticsearch defaults 95 10 Now that Elasticsearch is up and running, let's install Kibana. Install Kibana Download Kibana 4 to our home directory with the following command: $ cd ~; wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz Extract the Kibana archive with tar: $ tar xvf kibana-*.tar.gz Open the Kibana configuration file for editing: $ vim ~/kibana-4*/config/kibana.yml In the Kibana configuration file, find the line that specifies host, and replace the IP address (0.0.0.0 by default) with the real IP address: host: "192.168.171.129" Save and exit. This setting makes Kibana accessible only at that IP address, which is fine because we will use an Nginx reverse proxy to allow external access. Let's copy the Kibana files to a more appropriate location. Create the /opt directory with the following command: $ sudo mkdir -p /opt/kibana Now copy the Kibana files into the newly-created directory: $ sudo cp -R ~/kibana-4*/* /opt/kibana/ Kibana can be started by running /opt/kibana/bin/kibana, but we want it to run as a service.
Download a Kibana init script with this command: $ cd /etc/init.d && sudo wget https://raw.githubusercontent.com/akabdog/scripts/master/kibana4_init && sudo mv kibana4_init kibana4 Now enable the Kibana service, and start it: $ sudo chmod +x /etc/init.d/kibana4 $ sudo update-rc.d kibana4 defaults 96 9 $ sudo service kibana4 start Before we can use the Kibana web interface, we have to set up a reverse proxy. Let's do that now, with Nginx. Use apt to install Nginx $ sudo apt-get install nginx Now open the Nginx default server block in your favorite editor. $ sudo vim /etc/nginx/sites-available/default Delete the file's contents, and paste the following code block into the file. Be sure to update the server_name to match your server's name; the location block forwards requests to Kibana on port 5601: server { listen 80; server_name example.com; access_log /var/log/nginx/kibana.myhost.org.access.log; location / { proxy_pass http://192.168.171.129:5601; } } Install Logstash Create the Logstash source list $ echo 'deb http://packages.elasticsearch.org/logstash/1.5/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list Update our apt package database: $ sudo apt-get update Install Logstash with this command: $ sudo apt-get install logstash Logstash is installed but it is not configured yet. Configure Logstash Logstash configuration files use a JSON-like format, and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
Let's create a configuration file called logstash-sample.conf $ sudo vim /etc/logstash/conf.d/logstash-sample.conf Insert the following input configuration input { file { #type => "nginx_access" codec => "json" path => ["/var/log/nginx/kibana.myhost.org.access.log"] } } output { elasticsearch { embedded => false protocol => "http" host => "192.168.171.129" port => 9200 } } Now, start Logstash to put our configuration changes into effect: $ nohup /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-sample.conf & In a web browser, visit the public IP address on port 5601 (http://192.168.171.129:5601). Go ahead and select @timestamp from the dropdown menu, then click the Create button to create the first index. Now click the Discover link in the top navigation bar. By default, this will show you all of the log data over the last 15 minutes. We should see a histogram with log events, with log messages below: Elasticsearch Browser My Nginx access logs are now centralized via Elasticsearch and Logstash, and I'm able to visualize them with Kibana 4. Reference Article How To Install Elasticsearch, Logstash, and Kibana 4 on Ubuntu 14.04 ### Install Red Hat Enterprise Linux 7 on VMware and Config EPEL Wiki Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL). EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.
Install Red Hat Enterprise Linux 7 on VMware Fusion Set network and hostname Turn on ethernet Software selection Installation destination Installation summary User settings Remove yum on Red Hat $ sudo rpm -aq|grep yum|xargs rpm -e --nodeps Download the yum installer packages from the 163.com mirror $ wget http://mirrors.163.com/centos/7/os/x86_64/Packages/python-iniparse-0.4-9.el7.noarch.rpm $ wget http://mirrors.163.com/centos/7/os/x86_64/Packages/yum-3.4.3-125.el7.centos.noarch.rpm $ wget http://mirrors.163.com/centos/7/os/x86_64/Packages/yum-metadata-parser-1.1.4-10.el7.x86_64.rpm $ wget http://mirrors.163.com/centos/7/os/x86_64/Packages/yum-plugin-fastestmirror-1.1.31-29.el7.noarch.rpm Install yum $ sudo rpm -ivh python-iniparse-0.4-9.el7.noarch.rpm $ sudo rpm -ivh yum-metadata-parser-1.1.4-10.el7.x86_64.rpm $ sudo rpm -ivh yum-3.4.3-125.el7.centos.noarch.rpm yum-plugin-fastestmirror-1.1.31-29.el7.noarch.rpm Create the config file $ vim /etc/yum.repos.d/rhel-debuginfo.repo File content [base] name=CentOS-$releasever - Base baseurl=http://mirrors.163.com/centos/7/os/$basearch/ gpgcheck=1 gpgkey=http://mirrors.163.com/centos/7/os/x86_64/RPM-GPG-KEY-CentOS-7 #released updates [updates] name=CentOS-$releasever - Updates baseurl=http://mirrors.163.com/centos/7/updates/$basearch/ gpgcheck=1 gpgkey=http://mirrors.163.com/centos/7/os/x86_64/RPM-GPG-KEY-CentOS-7 [extras] name=CentOS-$releasever - Extras baseurl=http://mirrors.163.com/centos/7/extras/$basearch/ gpgcheck=1 gpgkey=http://mirrors.163.com/centos/7/os/x86_64/RPM-GPG-KEY-CentOS-7 [centosplus] name=CentOS-$releasever - Plus baseurl=http://mirrors.163.com/centos/7/centosplus/$basearch/ gpgcheck=1 enabled=0 Save and test $ sudo yum clean all $ sudo yum update Install and verify the EPEL source $ sudo yum install epel-release $ sudo yum repolist ### Handy use SphinxQL Query Builder for PHP Wiki Sphinx Sphinx is an open source search engine that allows full-text searches.
It is best known for performing searches over large data very efficiently. The data to be indexed can generally come from very different sources: SQL databases, plain text files, HTML files, mailboxes, and so on. Official Website sphinxsearch.com SphinxQL The Sphinx search daemon supports the MySQL binary network protocol and can be accessed with the regular MySQL API. Sphinx also supports a subset of SQL (SphinxQL). It supports standard querying of all index types with SELECT, modifying RealTime indexes with INSERT, REPLACE, and DELETE, and more. Query Builder for SphinxQL github.com/FoolCode/SphinxQL-Query-Builder Some key features of Sphinx are High indexing and searching performance Advanced indexing and querying tools Advanced result set post-processing Proven scalability up to billions of documents, terabytes of data, and thousands of queries per second Easy integration with SQL and XML data sources, and SphinxQL, SphinxAPI, or SphinxSE search interfaces Easy scaling with distributed searches In this post I will set up Sphinx with MySQL server using the example SQL file included in the distribution package. It gives a basic idea of how to use Sphinx for a project. Before installing Sphinx, I already have MySQL server installed on Ubuntu 14.04 x64 Linux. Install Sphinx $ sudo apt-get install sphinxsearch Now I have successfully installed Sphinx on my server. Before starting the Sphinx daemon, I should configure it. Creating the Test Database In this section, I will set up a database using the example SQL file provided with the package. Import an example SQL file into the database. Log in to the MySQL server shell. $ mysql -u root -p Enter the password for the MySQL root user when asked. The prompt will change to mysql>. Create a database named test. > CREATE DATABASE test; Import the SQL file. > SOURCE /etc/sphinxsearch/example.sql; Then exit the MySQL shell. > quit Now I should have the test database filled with data.
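Before moving on, it's worth confirming the import: example.sql fills a documents table, and the indexer run later in this post reports 4 docs collected, so a quick count should match:

```sql
mysql> SELECT COUNT(*) FROM test.documents;
+----------+
| COUNT(*) |
+----------+
|        4 |
+----------+
```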
Configuring Sphinx In this section, I will configure the Sphinx configuration file. Create the sphinx.conf file. $ sudo vim /etc/sphinxsearch/sphinx.conf Sphinx configuration consists of 3 main blocks that are essential to run. They are index, searchd, and source. Each of these blocks is described below, and at the end of this step, the entirety of sphinx.conf is included for you to paste into the file. The source block contains the type of source, and the username and password for the MySQL server. The first column of the SQL query should be a unique id. The SQL query will run on every indexing run and dump the data to the Sphinx index file. Below are descriptions of each field and the source block itself. sql_host : Hostname for the MySQL host. In our example, this is the localhost. This can be a domain or IP address. sql_user : Username for the MySQL login. In our example, this is root. sql_pass : Password for the MySQL user. In our example, this is the root MySQL user's password. sql_db : Name of the database that stores data. In our example, this is test. sql_query : This is the query that dumps data to the index. source src1 { type = mysql sql_host = localhost sql_user = root sql_pass = password sql_db = test sql_port = 3306 # optional, default is 3306 sql_query = \ SELECT id, group_id, UNIX_TIMESTAMP(date_added) AS date_added, title, content \ FROM documents sql_attr_uint = group_id sql_attr_timestamp = date_added sql_query_info = SELECT * FROM documents WHERE id=$id } The index component contains the source and the path to store the data. source : Name of the source block. In our example, this is src1. path : The path to save the index. charset_type : The charset of the index. You can also set this to utf-8. index test1 { source = src1 path = /var/lib/sphinxsearch/data/test1 docinfo = extern charset_type = sbcs } The searchd component contains the port and other variables to run the Sphinx daemon. listen : This is the port on which the Sphinx daemon will run.
In our example, this is 9312. query_log : The path to save the query log. pid_file : The path to the PID file of the Sphinx daemon. max_matches : Maximum number of matches to return per search term. seamless_rotate : Prevents searchd stalls while rotating indexes with huge amounts of data to precache. preopen_indexes : Whether to forcibly preopen all indexes on startup. unlink_old : Whether to unlink old index copies on successful rotation. searchd { listen = 127.0.0.1:9306:mysql41 # Port to listen on log = /var/log/sphinxsearch/searchd.log query_log = /var/log/sphinxsearch/query.log read_timeout = 5 max_children = 30 pid_file = /var/run/sphinxsearch/searchd.pid max_matches = 1000 seamless_rotate = 1 preopen_indexes = 1 unlink_old = 1 binlog_path = /var/lib/sphinxsearch/data } The full configuration to copy and paste is below. The only variable you need to change is the sql_pass in the source block. source src1 { type = mysql sql_host = localhost sql_user = root sql_pass = password # change this to your root user's MySQL password sql_db = test sql_port = 3306 sql_query = \ SELECT id, group_id, UNIX_TIMESTAMP(date_added) AS date_added, title, content \ FROM documents sql_attr_uint = group_id sql_attr_timestamp = date_added sql_query_info = SELECT * FROM documents WHERE id=$id } index test1 { source = src1 path = /var/lib/sphinxsearch/data/test1 docinfo = extern charset_type = sbcs } searchd { listen = 127.0.0.1:9306:mysql41 log = /var/log/sphinxsearch/searchd.log query_log = /var/log/sphinxsearch/query.log read_timeout = 5 max_children = 30 pid_file = /var/run/sphinxsearch/searchd.pid max_matches = 1000 seamless_rotate = 1 preopen_indexes = 1 unlink_old = 1 binlog_path = /var/lib/sphinxsearch/data } Adding Data to the Index In this section, we'll add data to the Sphinx index. Add data to the index using the config we created earlier. $ sudo indexer --all I get something that looks like the following.
Sphinx 2.0.4-id64-release (r3135) Copyright (c) 2001-2012, Andrew Aksyonoff Copyright (c) 2008-2012, Sphinx Technologies Inc (http://sphinxsearch.com) using config file '/etc/sphinxsearch/sphinx.conf'... indexing index 'test1'... WARNING: collect_hits: mem_limit=0 kb too low, increasing to 25600 kb collected 4 docs, 0.0 MB sorted 0.0 Mhits, 100.0% done total 4 docs, 193 bytes total 0.003 sec, 59696 bytes/sec, 1237.24 docs/sec total 3 reads, 0.000 sec, 0.1 kb/call avg, 0.0 msec/call avg total 9 writes, 0.000 sec, 0.1 kb/call avg, 0.0 msec/call avg If you get the following error, it means searchd is running; stop it, then delete the .spl file. FATAL: failed to lock /var/lib/sphinxsearch/data/test1.spl: Resource temporarily unavailable, will not index. Try --rotate option. To keep the index up to date, let's create a cronjob. Open crontab. $ crontab -e You may be asked which text editor you want to use. Choose whichever you prefer; in this post, I've used vim. Type the following at the end of the file, then save and close it. @hourly /usr/bin/indexer --rotate --config /etc/sphinxsearch/sphinx.conf --all The above cronjob will run every hour and add new data to the index using the default config file. Starting Sphinx In this section, I'll start the Sphinx daemon. By default, the Sphinx daemon is turned off. To enable Sphinx, first open /etc/default/sphinxsearch. $ sudo vim /etc/default/sphinxsearch Find the line START=no and set it to yes. START=yes Then, save and close the file. Finally, start the Sphinx daemon. $ sudo service sphinxsearch start Testing Search In this section, I will test my search. Enter the following command. $ search this is my test document number I get something that looks like the following. Sphinx 2.0.4-id64-release (r3135) Copyright (c) 2001-2012, Andrew Aksyonoff Copyright (c) 2008-2012, Sphinx Technologies Inc (http://sphinxsearch.com) using config file '/etc/sphinxsearch/sphinx.conf'...
index 'test1': query 'this is my test document number ': returned 2 matches of 2 total in 0.006 sec displaying matches: 1. document=1, weight=7431, group_id=1, date_added=Fri Aug 14 16:44:37 2015 id=1 group_id=1 group_id2=5 date_added=2015-08-14 16:44:37 title=test one content=this is my test document number one. also checking search within phrases. 2. document=2, weight=7431, group_id=1, date_added=Fri Aug 14 16:44:37 2015 id=2 group_id=1 group_id2=6 date_added=2015-08-14 16:44:37 title=test two content=this is my test document number two words: 1. 'this': 4 documents, 4 hits 2. 'is': 4 documents, 4 hits 3. 'my': 2 documents, 2 hits 4. 'test': 3 documents, 5 hits 5. 'document': 2 documents, 2 hits 6. 'number': 3 documents, 3 hits Use Query Builder for SphinxQL with PHP Install package via composer $ composer require foolz/sphinxql-query-builder Create test script file <?php require_once __DIR__ . '/vendor/autoload.php'; use Foolz\SphinxQL\SphinxQL; use Foolz\SphinxQL\Connection; // create a SphinxQL Connection object to use with SphinxQL $conn = new Connection(); $conn->setParams(array('host' => '127.0.0.1', 'port' => '9306')); $query = SphinxQL::create($conn)->select('id') ->from('test1') ->where('id', '=', 1); try { $result = $query->execute(); } catch (Exception $e){ var_dump($e); } Output array(1) { [0] => array(2) { ["id"] => string(1) "1" ["weight"] => string(1) "1" } } By using Sphinx, we can easily add a custom search to your site. ### RSA Encryption & Decryption on PHP Generate an RSA keypair with a 4096 bit private key Execute command: openssl genrsa -out private_key.pem 4096 e.g. $ openssl genrsa -out private_key.pem 4096 Generating RSA private key, 4096 bit long modulus .............................++++++ ................................................................++++++ e is 65537 (0x10001) Make sure to prevent other users from reading your key by executing chmod go-r private_key.pem afterward. 
Extracting the public key from an RSA keypair Execute command: openssl rsa -pubout -in private_key.pem -out public_key.pem e.g. $ openssl rsa -pubout -in private_key.pem -out public_key.pem writing RSA key A new file is created, public_key.pem, with the public key. It is relatively easy to do some cryptographic calculations to calculate the public key from the prime1 and prime2 values in the private key file. However, OpenSSL has already pre-calculated the public key and stored it in the private key file. So this command doesn't actually do any cryptographic calculation -- it merely copies the public key bytes out of the file and writes the Base64 PEM encoded version of those bytes into the output public key file. Use phpseclib - PHP Secure Communications Library Encryption <?php header('Content-Type: text/xml; charset=utf-8'); include_once(dirname(dirname(__FILE__)).DIRECTORY_SEPARATOR.'phpseclib'.DIRECTORY_SEPARATOR.'Math'.DIRECTORY_SEPARATOR.'BigInteger.php'); include_once(dirname(dirname(__FILE__)).DIRECTORY_SEPARATOR.'phpseclib'.DIRECTORY_SEPARATOR.'Crypt'.DIRECTORY_SEPARATOR.'AES.php'); include_once(dirname(dirname(__FILE__)).DIRECTORY_SEPARATOR.'phpseclib'.DIRECTORY_SEPARATOR.'Crypt'.DIRECTORY_SEPARATOR.'RSA.php'); /* |-------------------------------------------------------------------------- | Encryption |-------------------------------------------------------------------------- | */ $publickey = <<<EOD -----BEGIN PUBLIC KEY----- MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCqGKukO1De7zhZj6+H0qtjTkVxwTCpvKe4eCZ0 FPqri0cb2JZfXJ/DgYSF6vUpwmJG8wVQZKjeGcjDOL5UlsuusFncCzWBQ7RKNUSesmQRMSGkVb1/ 3j+skZ6UtW+5u09lHNsj6tQ51s1SPrCBkedbNf0Tp0GbMJDyR4e9T04ZZwIDAQAB -----END PUBLIC KEY----- EOD; $rsa = new Crypt_RSA(); $rsa->loadKey($publickey); // public key $plaintext = '...'; $ciphertext = $rsa->encrypt($plaintext); echo '<pre>' . base64_encode($ciphertext) .
'</pre>'; Decryption <?php header('Content-Type: text/xml; charset=utf-8'); include_once(dirname(dirname(__FILE__)).DIRECTORY_SEPARATOR.'phpseclib'.DIRECTORY_SEPARATOR.'Math'.DIRECTORY_SEPARATOR.'BigInteger.php'); include_once(dirname(dirname(__FILE__)).DIRECTORY_SEPARATOR.'phpseclib'.DIRECTORY_SEPARATOR.'Crypt'.DIRECTORY_SEPARATOR.'AES.php'); include_once(dirname(dirname(__FILE__)).DIRECTORY_SEPARATOR.'phpseclib'.DIRECTORY_SEPARATOR.'Crypt'.DIRECTORY_SEPARATOR.'RSA.php'); /* |-------------------------------------------------------------------------- | Decryption |-------------------------------------------------------------------------- | */ $ciphertext = base64_decode(str_replace(' ', '+', $_GET['crypttext'])); $privatekey = <<<EOD -----BEGIN RSA PRIVATE KEY----- MIICXAIBAAKBgQCqGKukO1De7zhZj6+H0qtjTkVxwTCpvKe4eCZ0FPqri0cb2JZfXJ/DgYSF6vUp wmJG8wVQZKjeGcjDOL5UlsuusFncCzWBQ7RKNUSesmQRMSGkVb1/3j+skZ6UtW+5u09lHNsj6tQ5 1s1SPrCBkedbNf0Tp0GbMJDyR4e9T04ZZwIDAQABAoGAFijko56+qGyN8M0RVyaRAXz++xTqHBLh 3tx4VgMtrQ+WEgCjhoTwo23KMBAuJGSYnRmoBZM3lMfTKevIkAidPExvYCdm5dYq3XToLkkLv5L2 pIIVOFMDG+KESnAFV7l2c+cnzRMW0+b6f8mR1CJzZuxVLL6Q02fvLi55/mbSYxECQQDeAw6fiIQX GukBI4eMZZt4nscy2o12KyYner3VpoeE+Np2q+Z3pvAMd/aNzQ/W9WaI+NRfcxUJrmfPwIGm63il AkEAxCL5HQb2bQr4ByorcMWm/hEP2MZzROV73yF41hPsRC9m66KrheO9HPTJuo3/9s5p+sqGxOlF L0NDt4SkosjgGwJAFklyR1uZ/wPJjj611cdBcztlPdqoxssQGnh85BzCj/u3WqBpE2vjvyyvyI5k X6zk7S0ljKtt2jny2+00VsBerQJBAJGC1Mg5Oydo5NwD6BiROrPxGo2bpTbu/fhrT8ebHkTz2epl U9VQQSQzY1oZMVX8i1m5WUTLPz2yLJIBQVdXqhMCQBGoiuSoSjafUhV7i1cEGpb88h5NBYZzWXGZ 37sJ5QsW+sJyoNde3xH8vdXhzU7eT82D6X/scw9RZz+/6rCJ4p0= -----END RSA PRIVATE KEY----- EOD; $rsa = new Crypt_RSA(); $rsa->loadKey($privatekey); // private key $plaintext = $rsa->decrypt($ciphertext); echo '<pre>' . $plaintext . 
'</pre>'; /* |-------------------------------------------------------------------------- | Signature Verify |-------------------------------------------------------------------------- | */ $publickey = <<<EOD -----BEGIN PUBLIC KEY----- MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCqGKukO1De7zhZj6+H0qtjTkVxwTCpvKe4eCZ0 FPqri0cb2JZfXJ/DgYSF6vUpwmJG8wVQZKjeGcjDOL5UlsuusFncCzWBQ7RKNUSesmQRMSGkVb1/ 3j+skZ6UtW+5u09lHNsj6tQ51s1SPrCBkedbNf0Tp0GbMJDyR4e9T04ZZwIDAQAB -----END PUBLIC KEY----- EOD; $signature = $rsa->sign($plaintext); $rsa->loadKey($publickey); echo $rsa->verify($plaintext, $signature) ? 'verified' : 'unverified'; ### Elasticsearch Cluster on Ubuntu Wiki Elasticsearch is a search server based on Lucene. It provides a distributed, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents. Elasticsearch is developed in Java and is released as open source under the terms of the Apache License. Elasticsearch is the second most popular enterprise search engine. It can be used to search all kinds of documents. It provides scalable search, has near real-time search, and supports multitenancy. In this post I will talk about how to install and configure an Elasticsearch cluster on Ubuntu 14.04 LTS 64 bits. Official Website elastic.co Details Of Elasticsearch Cluster Architecture Operating System : Ubuntu 14.04 LTS Trusty Tahr 64-bit Server Edition Package Name and version : elasticsearch-1.7.0 No. of Servers : 2 elasticnode01 : 192.168.171.129 (Name of server) elasticnode02 : 192.168.171.130 (Name of server) Ports opened in both servers:

| Port Number | Source | Destination |
| ----------- | ------------- | ------------- |
| 9100 | elasticnode01 | elasticnode02 |
| 9200 | elasticnode01 | elasticnode02 |
| 9300 | elasticnode01 | elasticnode02 |
| 9100 | elasticnode02 | elasticnode01 |
| 9200 | elasticnode02 | elasticnode01 |
| 9300 | elasticnode02 | elasticnode01 |

Dependencies First, update the list of available packages by running apt-get update. Next, we must install the Java runtime. There are two options here.
Install the OpenJDK runtime supplied by Ubuntu. Install the Elasticsearch recommended Java runtime, Oracle Java. The first option works perfectly fine if you would just like to play around and get acquainted with Elasticsearch or run a small collection of nodes. The latter option is the one recommended by Elasticsearch for guaranteed compatibility. OpenJDK To accomplish the first option, we can simply run apt-get install openjdk-7-jre. Test Java installation This step is performed on both elasticnode01 & elasticnode02. We can then check that Java is installed by running java -version. $ java -version java version "1.7.0_79" OpenJDK Runtime Environment (IcedTea 2.5.5) (7u79-2.5.5-0ubuntu0.14.04.2) OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode) That's all the dependencies we need for now, so let's get started with obtaining and installing Elasticsearch. Download and Install This step is performed on both elasticnode01 & elasticnode02. Elasticsearch can be downloaded directly from their site in zip, tar.gz, deb, or rpm packages. You don't need to do this ahead of time, as we will download the files that we need as we need them in the text below. Download the Debian package archive via wget $ wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.0.deb Installing directly from a Debian package is done by running $ sudo dpkg -i elasticsearch-1.7.0.deb This results in Elasticsearch being properly installed in /usr/share/elasticsearch. Note that installing from the Debian package also installs an init script in /etc/init.d/elasticsearch that starts the Elasticsearch server running on boot. The server will also be immediately started after installation. Configuration files Edit the config/elasticsearch.yml file and update the parameters given below.
$ sudo vim /etc/elasticsearch/elasticsearch.yml In Server elasticnode01 cluster.name: mycluster node.name: "elasticnode01" discovery.zen.ping.multicast.enabled: false discovery.zen.ping.unicast.hosts: ["elasticnode02"] In Server elasticnode02 cluster.name: mycluster node.name: "elasticnode02" discovery.zen.ping.multicast.enabled: false discovery.zen.ping.unicast.hosts: ["elasticnode01"] On both servers, elasticnode01 and elasticnode02, edit the /etc/hosts file and map both servers' IP addresses. Replace the IP addresses given below with your servers' IP addresses. 192.168.171.129 elasticnode01 192.168.171.130 elasticnode02 Start the Elasticsearch server. Use the command on both servers, elasticnode01 and elasticnode02. $ sudo service elasticsearch start Test Elasticsearch install Ensure the server is started by running $ curl -X GET 'http://localhost:9200' You should see the following response { "status" : 200, "name" : "elasticnode02", "cluster_name" : "mycluster", "version" : { "number" : "1.7.0", "build_hash" : "929b9739cae115e73c346cb5f9a6f24ba735a743", "build_timestamp" : "2015-07-16T14:31:07Z", "build_snapshot" : false, "lucene_version" : "4.10.4" }, "tagline" : "You Know, for Search" } If we see a response similar to the one above, Elasticsearch is working properly. Alternatively, we can query our install of Elasticsearch from a browser by visiting the server's IP address on port 9200. We should see the same JSON as we saw when using curl above.
Basic usage Create index $ curl -XPUT 'localhost:9200/customer?pretty' Put information into our customer index: curl -XPUT 'localhost:9200/customer/external/1?pretty' -d ' { "name": "Ri Xu" }' Let's give this a try $ curl -X GET localhost:9200/customer/external/1?pretty { "_index" : "customer", "_type" : "external", "_id" : "1", "_version" : 1, "found" : true, "_source": { "name" : "Ri Xu" } } Get record by search POST /twitter/tweet/_search { "size": 10, "query": { "range": { "create_time": { "gte": 1524844800, "lte": 1524931200 } } } } Delete record by query DELETE /twitter/tweet/_query { "query": { "term": { "user": "kimchy" } } } Checking the cluster through the command line. You can run these from any node server (elasticnode01 / elasticnode02). Checking cluster health $ curl 'localhost:9200/_cat/health?v' epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks 1437735048 18:50:48 mycluster green 2 2 0 0 0 0 0 0 Checking node status in the cluster $ curl 'localhost:9200/_cat/nodes?v' host ip heap.percent ram.percent load node.role master name ubuntu 127.0.1.1 3 37 0.00 d * elasticnode02 ubuntu 127.0.1.1 6 31 0.01 d m elasticnode01 Installing useful plugins for the GUI. Install the plugins on both servers, elasticnode01 and elasticnode02. elasticsearch-head - A web front end for an Elasticsearch cluster $ sudo /usr/share/elasticsearch/bin/plugin -install mobz/elasticsearch-head Open the elasticsearch-head web interface in a web browser via http://ip-address-node:9200/_plugin/head Bigdesk - Install the bigdesk plugin on both servers. $ sudo /usr/share/elasticsearch/bin/plugin -install lukas-vlcek/bigdesk Open bigdesk in a web browser : http://ip-address-node:9200/_plugin/bigdesk Conclusion I have now installed, configured and begun using Elasticsearch. Since it responds to a basic RESTful API, it is now easy to begin adding to and querying data using Elasticsearch from your application.
Read more Install MySQL Cluster on Ubuntu 12.04 LTS ### Install Nginx and PHP-FPM on OS X Install Nginx via Homebrew $ brew install nginx Adding Nginx to startup routine $ ln -sfv /usr/local/opt/nginx/*.plist /Library/LaunchDaemons/ $ sudo chown root:wheel /Library/LaunchDaemons/homebrew.mxcl.nginx.plist To Start $ launchctl load -w /Library/LaunchDaemons/homebrew.mxcl.nginx.plist or $ sudo nginx To Stop $ launchctl unload -w /Library/LaunchDaemons/homebrew.mxcl.nginx.plist or $ sudo nginx -s stop The default configuration is set so that Nginx listens on port 8080 instead of the HTTP standard 80. Ignore that for now $ curl -IL http://127.0.0.1:8080 HTTP/1.1 200 OK Server: nginx/1.8.0 Date: Thu, 23 Jul 2015 15:01:14 GMT Content-Type: text/html Content-Length: 612 Last-Modified: Tue, 21 Apr 2015 17:50:12 GMT Connection: keep-alive ETag: "55368dd4-264" Accept-Ranges: bytes Installing PHP with FPM Search for available PHP formulas (formulas in Homebrew are equivalent to packages in aptitude) $ brew search php It will return a long list of PHP packages $ brew tap josegonzalez/php $ brew tap homebrew/dupes Before we build PHP, you may like to exercise the options using $ brew options php55 We have built it using $ brew install php55 --with-fpm --with-imap --without-apache --with-debug After a long wait, you can verify the php & php-fpm versions using php -v and php-fpm -v respectively.
$ php -v PHP 5.5.27 (cli) (built: Jul 23 2015 08:42:00) (DEBUG) Copyright (c) 1997-2015 The PHP Group Zend Engine v2.5.0, Copyright (c) 1998-2015 Zend Technologies $ php-fpm -v PHP 5.5.24 (fpm-fcgi) (built: May 19 2015 10:10:19) Copyright (c) 1997-2015 The PHP Group Zend Engine v2.5.0, Copyright (c) 1998-2015 Zend Technologies Adding PHP-FPM to startup routine $ ln -sfv /usr/local/opt/php55/*.plist /Library/LaunchDaemons/ To Start $ launchctl load -w /Library/LaunchDaemons/homebrew.mxcl.php55.plist or $ php-fpm To Stop $ launchctl unload -w /Library/LaunchDaemons/homebrew.mxcl.php55.plist Make sure PHP-FPM is listening on port 9000 $ lsof -Pni4 | grep LISTEN | grep php php-fpm 38716 xuri 9u IPv4 0xeae1b207b42a1721 0t0 TCP 127.0.0.1:9000 (LISTEN) php-fpm 38717 xuri 0u IPv4 0xeae1b207b42a1721 0t0 TCP 127.0.0.1:9000 (LISTEN) php-fpm 38718 xuri 0u IPv4 0xeae1b207b42a1721 0t0 TCP 127.0.0.1:9000 (LISTEN) php-fpm 38719 xuri 0u IPv4 0xeae1b207b42a1721 0t0 TCP 127.0.0.1:9000 (LISTEN) Config Nginx $ sudo vim /usr/local/etc/nginx/nginx.conf Modify this section location ~ \.php$ { root html; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; include fastcgi_params; } ... to ... location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi.conf; } Save and reload the Nginx service $ sudo nginx -s reload Deny access to multiple folders on Nginx location ~ /(folder1|folder2|folder3) { deny all; return 404; } Define a 404 page server { // ... error_page 404 /var/www/404.html; Read more posts Installation MAMP ( Apache+MySQL+PHP ) on a Mac with OS X 10.8 + Get MAMP Working OS X 10.10 Yosemite Installation LAMP ( Apache+MySQL+PHP ) on Ubuntu ### Sublime Text for Go Build System Go, also commonly referred to as golang, is a programming language developed at Google in 2007 by Robert Griesemer, Rob Pike, and Ken Thompson.
It is a statically typed language with syntax loosely derived from that of C, adding garbage collection, type safety, some dynamic-typing capabilities, additional built-in types such as variable-length arrays and key-value maps, and a large standard library. Official Website golang.org Install Go on macOS via Homebrew $ brew install go $ brew install hg Setting up the Go Environment The brew output shows where Go was installed. We need to do the following to set up Go's environment: $ export PATH=$PATH:/usr/local/opt/go/libexec/bin $ export GOPATH=/usr/local/opt/go/bin Check the environment variables $ go env GOARCH="amd64" GOBIN="" GOCHAR="6" GOEXE="" GOHOSTARCH="amd64" GOHOSTOS="darwin" GOOS="darwin" GOPATH="/usr/local/opt/go/bin" GORACE="" GOROOT="/usr/local/Cellar/go/1.4.2/libexec" GOTOOLDIR="/usr/local/Cellar/go/1.4.2/libexec/pkg/tool/darwin_amd64" CC="clang" GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fno-common" CXX="clang++" CGO_ENABLED="1" $GOPATH is the Go workspace; $GOROOT is the Go installation path. We can also check the existing flags using brew info go. Go get the Basics $ go get golang.org/x/tools/cmd/godoc $ go get golang.org/x/tools/cmd/vet Install Gotour $ go get golang.org/x/tour/gotour $ cd $GOPATH/bin $ ./gotour 2015/07/22 16:17:21 Serving content from /usr/local/opt/go/bin/src/code.google.com/p/go-tour 2015/07/22 16:17:21 A browser window should open. If not, please visit http://127.0.0.1:3999 2015/07/22 16:17:43 accepting connection from: 127.0.0.1:51640 Add an alias in .bash_profile # Go tour alias gotour=$GOPATH/bin/gotour GOPATH="/usr/local/opt/go/bin" GOROOT="/usr/local/Cellar/go/1.4.2/libexec" After saving, run $ source ~/.bash_profile Sublime Text Package Install GoSublime (a Go plugin collection for the text editor Sublime Text providing code completion and other IDE-like features): Command Shift P → Install Package → GoSublime.
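The export lines above usually end up in .bash_profile, which gets sourced repeatedly; a small self-contained sketch (the helper function name and paths are illustrative, not from the original post) of appending a directory to a PATH-style variable only when it is not already there:

```shell
#!/bin/sh
# Idempotently append a directory to a PATH-style value:
# a second call with the same directory is a no-op, so
# re-sourcing .bash_profile does not grow the variable endlessly.
path_append() {
    case ":$2:" in
        *":$1:"*) printf '%s' "$2" ;;      # already present: unchanged
        *)        printf '%s' "$2:$1" ;;   # not present: append
    esac
}

P=/usr/bin
P=$(path_append /usr/local/opt/go/libexec/bin "$P")
P=$(path_append /usr/local/opt/go/libexec/bin "$P")   # no-op
echo "$P"
```

Running it prints `/usr/bin:/usr/local/opt/go/libexec/bin` once, despite the two calls.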
Setup Build System: Tools → Build System → New Build System; the config file looks like this { "path" : "$HOME/bin:/usr/local/bin:$PATH", "cmd" : ["sh","-c","go run $file"] } Save the file, then create a test.go file. // test package main import ( "fmt" ) func main() { fmt.Println("Hello World") } The build output should look like this. If you get the error "MarGo: Missing required environment variables: GOPATH", check your Go environment settings. ### Laravel Beanstalkd Queuing Service Exception Handling Beanstalkd was first developed to solve the needs of a popular web application (Causes on Facebook). Currently, it is an absolutely reliable, easy-to-install messaging service which is perfect to get started with. Refer to the post Production-Ready Beanstalkd with Laravel Queues to deploy a queue service with Beanstalkd. Here we use Supervisor to monitor the queue worker process. A queued task will throw an exception in some cases; to avoid an infinite loop on exceptions, we need to tell Laravel how many times to try a specific job before deciding it has failed $ php artisan queue:listen --tries=3 This way, it will stop processing that specific job after 3 tries. If you're using Supervisor, edit the config file in the /etc/supervisor/conf.d/ folder. Laravel uses a failed_jobs table to record failed queue jobs; to populate the failed_jobs table, we need to run a migration to create it. $ php artisan queue:failed-table This creates the necessary migration. Then do $ php artisan migrate to get the table created in your database.
mysql> show columns from my_failed_jobs; +------------+------------------+------+-----+---------------------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------+------------------+------+-----+---------------------+----------------+ | id | int(10) unsigned | NO | PRI | NULL | auto_increment | | connection | text | NO | | NULL | | | queue | text | NO | | NULL | | | payload | text | NO | | NULL | | | failed_at | timestamp | NO | | 0000-00-00 00:00:00 | | +------------+------------------+------+-----+---------------------+----------------+ 5 rows in set (0.00 sec) mysql> Show all failed jobs mysql> select * from my_failed_jobs; Retry the failed job with ID 2 $ php artisan queue:retry 2 The failed job has been pushed back onto the queue! ### Rebuild iOS Contact Index in English Language When the iOS language is set to English, Chinese contacts are no longer in alphabetical order; they all end up under the # index. Clone the lexrus/PhoneticContacts project $ git clone https://github.com/lexrus/PhoneticContacts.git or download the zip from GitHub and you will get a PhoneticContacts folder. Open the PhoneticContacts.xcodeproj file with Xcode on your Mac. Click the "Build and then run current scheme" button. If you see Build Succeeded, the batch processing is complete (it is quite fast); open Contacts.app and use iCloud to synchronize the contacts to your iPhone. I placed an order for an iPhone 5s (A1530, 32GB, silver) on the Apple Store website on Jul 1st, 2015. The logistics company is China EMS. Logistics Package My Xiaomi Redmi 1 and iPhone 5s ### Solve Linux No space left on device Issue Last night, I received an alert SMS: one server was down... The server is an Aliyun ECS running Ubuntu 14.04; when I SSHed into the server and tried to execute some commands, I got the following: -bash: cannot create temp file for here-document: No space left on device The filesystem will usually go into read-only while the system is running if there is a filesystem consistency issue.
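The diagnosis I walk through next can be scripted; here is a self-contained sketch (the sample df output is invented to mirror this incident) that flags filesystems whose inodes are exhausted:

```shell
#!/bin/sh
# Print filesystems with zero free inodes from `df -i`-style output.
# In real use you would pipe `df -i` in; a sample is inlined here
# so the sketch runs anywhere.
sample='Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/xvda1 1310720 1310720 0 100% /
tmpfs 128000 345 127655 1% /run'

# Skip the header row; column 4 is IFree.
full=$(printf '%s\n' "$sample" | awk 'NR > 1 && $4 == 0 { print $1 }')
echo "inode-exhausted: $full"
```

With the sample above it prints `inode-exhausted: /dev/xvda1`, matching the situation described below.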
I checked disk usage: $ df -h Filesystem Size Used Avail Use% Mounted on /dev/xvda1 20G 8.2G 11G 45% / none 4.0K 0 4.0K 0% /sys/fs/cgroup udev 492M 4.0K 492M 1% /dev tmpfs 101M 408K 100M 1% /run none 5.0M 0 5.0M 0% /run/lock none 501M 0 501M 0% /run/shm none 100M 0 100M 0% /run/user In this case I wanted to run a check on the whole filesystem, but it did not work; it looks like the root filesystem must be unmounted (for example by booting from a rescue ISO) before running the repair command. $ sudo mount -o remount,rw / $ sudo umount /dev/xvda1 $ sudo fsck -y /dev/xvda1 fsck from util-linux 2.20.1 e2fsck 1.42.9 (4-Feb-2014) /dev/xvda1 is mounted. e2fsck: Cannot continue, aborting. But finally I checked the inode usage of the disk: # df -i Filesystem Inodes IUsed IFree IUse% Mounted on /dev/xvda1 1310720 1310720 0 100% / The inodes are exhausted, because the server stores a huge number of small fragmented files. When either the disk blocks or the inodes are full, new files cannot be created, the system reports that the disk is full, and the server's filesystem goes into read-only mode. ### Show Differences Between Files with Icdiff Wiki Icdiff is an improved colored diff. By highlighting changes, icdiff can show you the differences between similar files without getting in the way. This is especially helpful for identifying and understanding small changes within existing lines. Official Website jefftk.com/icdiff Icdiff on GitHub github.com/jeffkaufman/icdiff Install on OS X $ brew install icdiff ### LaTeX with Sublime Text and Skim on Mac Wiki LaTeX is a word processor and document markup language. It is distinguished from typical word processors such as Microsoft Office and OpenOffice in that the writer uses plain text as opposed to formatted text, relying on markup tagging conventions to define the general structure of a document (such as article, book, and letter), to stylise text throughout a document (such as bold and italic), and to add citations and cross-referencing.
A TeX distribution such as TeXlive or MikTeX is used to produce an output file (such as PDF or DVI) suitable for printing or digital distribution. Install LaTeX Download the latest version of MacTeX.pkg from the TeX Users Group (approximately 2.4 GB; it takes 4.62 GB of disk space after installation). Install Sublime Text Download the installer from the Sublime Text official website; after installation, open Package Control by pressing Command Shift P, input Install Package and press Enter, then search for and install the LaTeXTools plugin. Install Skim Skim is a PDF reader and note-taker for OS X. It is designed to help you read and annotate scientific papers in PDF, but is also great for viewing any PDF file. Download the Skim installer from the MacUpdate website and install it. After installation, open Preferences; in the Sync tab, under PDF-TeX Sync support, select Sublime Text, leave "Check for file changes" unselected, and close the window. Chinese Support Add %!TEX program = xelatex in the first line of the .tex file. Test %!TEX program = xelatex \documentclass{article} \usepackage{fontspec, xunicode, xltxtra} \setmainfont{Hiragino Sans GB} \title{Title} \author{} \begin{document} \maketitle{} \section{Introduction} This is where you will write your content. 中文内容测试。 \end{document} After saving the .tex file, press Command B to compile and run; you can then see the result in the Skim PDF preview. If prompted "LaTeXTools: you need to migrate your preferences. See the README file for instructions.", open the Command Palette by pressing Command Shift P and input LaTeXTools: Reconfigure and migrate settings. That's it.
Reference Templates Using the LaTeX Resume Templates LaTeX Templates moderncv Package ShareLaTeX ### Wiener Philharmoniker Summer Night Concert Schönbrunn 2015 Date: Thursday, May 14, 2015 Time: 8:15 PM Place: Schönbrunn Palace, Palace Gardens (in front of the Neptune Fountain) 1130 Vienna Conductor: Zubin Mehta Soloist: Rudolf Buchbinder, Piano The Vienna Philharmonic performed its annual Summer Night Concert Schönbrunn, an open-air concert with free admission, in the unique ambience of the gardens of Schönbrunn Palace on Thursday, May 14, 2015. Zubin Mehta conducted the Summer Night Concert with Rudolf Buchbinder as soloist. This year's concert, which also represented the opening concert of the Vienna Festival, was attended by 100,000 visitors. With this open-air concert in Schönbrunn, the Vienna Philharmonic wishes to provide all Viennese, as well as visitors to the city, with a special musical experience in the impressive setting of Schönbrunn Palace and its beautiful baroque gardens, a UNESCO World Cultural Heritage Site. Music Program Richard Strauss - Wiener Philharmoniker Fanfare op. 109 Edvard Grieg - Piano Concerto in A minor, op. 16 (Buchbinder soloist) Alfred Grünfeld - Soirée de Vienne (Buchbinder) Christian Sinding - Rustle of Spring, op. 32, Nr. 3 Edvard Grieg - Peer-Gynt-Suite Nr. 1, op. 46 Jean Sibelius - Finlandia, Tondichtung op. 26/7 Hans Christian Lumbye - Copenhagener Eisenbahn Dampf Galopp Johann Strauss II - Wiener Blut, "Viennese Spirit" Waltz Zubin Mehta Zubin Mehta was born in 1936 in Bombay and received his first musical education under the guidance of his father, Mehli Mehta, who was a noted concert violinist and the founder of the Bombay Symphony Orchestra. In 1954 he left for Vienna, where he eventually entered the conducting program under Hans Swarowsky at the Akademie für Musik. Zubin Mehta won the Liverpool International Conducting Competition in 1958 and was also a prize-winner of the summer academy at Tanglewood.
By 1961 he had already conducted the Vienna, Berlin and Israel Philharmonic Orchestras and has recently celebrated 50 years of musical collaboration with all three ensembles. Rudolf Buchbinder Rudolf Buchbinder is firmly established as one of the world's foremost pianists and is frequently invited by major orchestras and festivals around the world. His comprehensive repertoire encompasses numerous 20th century compositions. Rudolf Buchbinder's emphasis lies in his meticulous study of musical sources. He owns 35 complete editions of Beethoven's sonatas and has an extensive collection of autograph scores, first editions and original documents. In addition, he possesses copies of the autograph scores and piano parts of both Brahms concertos. Download Wiener Philharmoniker Summer Night Concert Schönbrunn 2015.ts[720P] 9.69GB General ID : 8900 (0x22C4) Complete name : C:\rutracker.org_HD\3sat HD_Sommernachtskonzert Schonbrunn 2015.ts Format : MPEG-TS File size : 9.69 GiB Duration : 1h 38mn Overall bit rate mode : Variable Overall bit rate : 14.1 Mbps Video ID : 6510 (0x196E) Menu ID : 11150 (0x2B8E) Format : AVC Format/Info : Advanced Video Codec Format profile : High@L4.0 Format settings, CABAC : Yes Format settings, ReFrames : 6 frames Codec ID : 27 Duration : 1h 38mn Bit rate : 12.3 Mbps Width : 1 280 pixels Height : 720 pixels Display aspect ratio : 16:9 Frame rate : 50.000 fps Color space : YUV Chroma subsampling : 4:2:0 Bit depth : 8 bits Scan type : Progressive Bits/(Pixel*Frame) : 0.267 Stream size : 8.46 GiB (87%) Color primaries : BT.709 Transfer characteristics : BT.709 Matrix coefficients : BT.709 Audio #1 ID : 6520 (0x1978) Menu ID : 11150 (0x2B8E) Format : MPEG Audio Format version : Version 1 Format profile : Layer 2 Codec ID : 3 Duration : 1h 38mn Bit rate mode : Constant Bit rate : 256 Kbps Channel(s) : 2 channels Sampling rate : 48.0 KHz Compression mode : Lossy Delay relative to video : -6ms Stream size : 180 MiB (2%) Language : German Audio #2 ID : 
6521 (0x1979) Menu ID : 11150 (0x2B8E) Format : MPEG Audio Format version : Version 1 Format profile : Layer 2 Codec ID : 3 Duration : 1h 38mn Bit rate mode : Constant Bit rate : 192 Kbps Channel(s) : 2 channels Sampling rate : 48.0 KHz Compression mode : Lossy Delay relative to video : -6ms Stream size : 135 MiB (1%) Audio #3 ID : 6522 (0x197A) Menu ID : 11150 (0x2B8E) Format : AC-3 Format/Info : Audio Coding 3 Mode extension : CM (complete main) Format settings, Endianness : Big Codec ID : 6 Duration : 1h 38mn Bit rate mode : Constant Bit rate : 448 Kbps Channel(s) : 6 channels Channel positions : Front: L C R, Side: L R, LFE Sampling rate : 48.0 KHz Bit depth : 16 bits Compression mode : Lossy Delay relative to video : -5ms Stream size : 315 MiB (3%) Language : German Audio #4 ID : 6523 (0x197B) Menu ID : 11150 (0x2B8E) Format : MPEG Audio Format version : Version 1 Format profile : Layer 2 Codec ID : 3 Duration : 1h 38mn Bit rate mode : Constant Bit rate : 192 Kbps Channel(s) : 2 channels Sampling rate : 48.0 KHz Compression mode : Lossy Delay relative to video : -22ms Stream size : 135 MiB (1%) Language : Multiple languages ### Enabling SPDY and HSTS on Nginx Also see Enabling SPDY and HSTS on Apache. We should have Nginx up and running with an SSL certificate. Even though it is theoretically possible to run SPDY without SSL, in practice we need SSL to make it work. Make sure our version of Nginx is compiled with SPDY support: $ nginx -V Make sure we can find --with-http_spdy_module somewhere in that output. If not, we need to grab a build with the SPDY module enabled. Install the latest version from the Nginx Ubuntu repo. Also make sure we are using version 1.5 or newer. 1.4 only supports SPDY/2, which the browsers have stopped supporting. 1.5–1.9 support SPDY/3.1, which is the current version. Now it is as simple as adding a single word to the Nginx config.
Open the server block config for the SSL site, and change this line: listen 443 ssl; to: listen 443 ssl spdy; and reload the Nginx config: $ sudo service nginx reload Now all SPDY-enabled visitors should get the site delivered over SPDY, while older browsers get regular SSL. Setting up HSTS in nginx To be fully HSTS compliant, a host should only issue an HSTS header over a secure transport layer. This is because an attacker can maliciously strip out or inject an HSTS header into insecure traffic. For that reason, a browser should also disregard any HSTS headers received via HTTP, so technically it shouldn't matter if you do issue it over HTTP. Still, it's best to do it right. In your nginx server block, specifically the one that listens on port 443, you need to add a new response header. server { listen 443 ssl spdy; server_name xuri.me; add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload"; add_header X-Frame-Options DENY; add_header Alternate-Protocol 443:npn-spdy/3; } ### WordPress Memcached Turbo Boost and Check Hits and Misses Memcached is a general-purpose distributed memory caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read. Installation For OS X Install Memcached for PHP on OS X For Ubuntu and Debian $ sudo apt-get install memcached For CentOS $ sudo yum install memcached Start the Memcached daemon $ sudo service memcached restart Verify that Memcached is installed and running. From your server, telnet into the Memcached server $ telnet localhost 11211 You should see output like: Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. Type version into the Telnet prompt. If Memcached is installed and running, you should see something like: VERSION 1.4.24 Flush Memcached by typing flush_all; exit Telnet by typing Ctrl ], hitting Enter, then typing quit and pressing Enter.
Install the Memcached PECL extension on your server. Note that there are two different PHP interfaces for Memcached; one is named PECL Memcache and the other, PECL Memcached. The "d" at the end of Memcached is extremely important in this case. You should be able to install PECL Memcached from your package manager for your Linux distro. For OS X Install Memcached for PHP on OS X For Ubuntu and Debian $ sudo apt-get install php5-memcached For CentOS $ sudo yum install php-pecl-memcached Note that if you have a more custom installation of PHP, you might need to take some extra steps to link the PECL Memcached extension to PHP. If you are setting this up using your package manager's version of PHP and PECL Memcached, this should not be necessary. For example, many yum.conf files will exclude packages that begin with php. You may be able to modify the configuration file as necessary to install this package. Download the Memcached Object Cache plugin and copy object-cache.php to wp-content. Check memcached server cache hits and misses Download pecl-caching-memcache and copy memcached.php into your host's web path, then edit the parameters in the file define('ADMIN_USERNAME','memcache'); // Admin Username define('ADMIN_PASSWORD','password'); // Admin Password Visit memcached.php in a browser and you will see something like this ### Algorithms Related Books Mastering Algorithms with C By Kyle Loudon | Publisher: O'Reilly Media | Final Release Date: August 1999 | Pages: 562 There are many books on data structures and algorithms, including some with useful libraries of C functions. Mastering Algorithms with C offers you a unique combination of theoretical background and working code. With robust solutions for everyday programming tasks, this book avoids the abstract style of most classic data structures and algorithms texts, but still provides all of the information you need to understand the purpose and use of common programming techniques.
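The hits and misses that memcached.php visualizes come from memcached's stats command; here is a self-contained sketch of computing the hit ratio in shell (the sample STAT values are invented, and in real use the stats would come from something like printf 'stats\r\n' | nc localhost 11211):

```shell
#!/bin/sh
# Compute the cache hit ratio from memcached `stats` output.
# A sample of the two relevant STAT lines is inlined so the
# sketch runs without a live memcached server.
stats='STAT get_hits 900
STAT get_misses 100'

ratio=$(printf '%s\n' "$stats" | awk '
    $2 == "get_hits"   { hits = $3 }
    $2 == "get_misses" { misses = $3 }
    END { printf "%.1f", hits * 100 / (hits + misses) }')
echo "hit ratio: ${ratio}%"
```

With the sample numbers this prints `hit ratio: 90.0%`; a ratio that stays low after warm-up suggests the object cache is not being reused effectively.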
Implementations, as well as interesting, real-world examples of each data structure and algorithm, are included. Using both a programming style and a writing style that are exceptionally clean, Kyle Loudon shows you how to use such essential data structures as lists, stacks, queues, sets, trees, heaps, priority queues, and graphs. He explains how to use algorithms for sorting, searching, numerical analysis, data compression, data encryption, common graph problems, and computational geometry. And he describes the relative efficiency of all implementations. The compression and encryption chapters not only give you working code for reasonably efficient solutions, they offer explanations of concepts in an approachable manner for people who never have had the time or expertise to study them in depth. Anyone with a basic understanding of the C language can use this book. In order to provide maintainable and extendible code, an extra level of abstraction (such as pointers to functions) is used in examples where appropriate. Understanding that these techniques may be unfamiliar to some programmers, Loudon explains them clearly in the introductory chapters. Introduction to Algorithms, 3rd Edition By Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein | Publisher: The MIT Press | Release Date: July 31, 2009 | Hardcover: 1312 pages Some books on algorithms are rigorous but incomplete; others cover masses of material but lack rigor. Introduction to Algorithms uniquely combines rigor and comprehensiveness. The book covers a broad range of algorithms in depth, yet makes their design and analysis accessible to all levels of readers. Each chapter is relatively self-contained and can be used as a unit of study. The algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. 
The explanations have been kept elementary without sacrificing depth of coverage or mathematical rigor. The first edition became a widely used text in universities worldwide as well as the standard reference for professionals. The second edition featured new chapters on the role of algorithms, probabilistic analysis and randomized algorithms, and linear programming. The third edition has been revised and updated throughout. It includes two completely new chapters, on van Emde Boas trees and multithreaded algorithms, substantial additions to the chapter on recurrence (now called “Divide-and-Conquer”), and an appendix on matrices. It features improved treatment of dynamic programming and greedy algorithms and a new notion of edge-based flow in the material on flow networks. Many new exercises and problems have been added for this edition. As of the third edition, this textbook is published exclusively by the MIT Press. ### My Internship Interview Summary I recently participated in some PHP internship interviews and summarize the common questions here. First, the interviewer will ask you for a short self-introduction; you can talk about your experience and the projects you have done before. The interviewer is then likely to continue on this topic, asking which technical issue in your projects was the most challenging or impressive, and how you solved it. About system design: what do you consider before designing a system? About PV (a page view or page impression is a request to load a single web page of an Internet site) and UV (unique visitors refers to the number of distinct individuals requesting pages from the website during a given period, regardless of how often they visit). Optimize the database performance How do you design a database field? Difference between VARCHAR and CHAR VARCHAR is variable-length. CHAR is fixed-length. If your content is a fixed size, you'll get better performance with CHAR. Reference MySQL: The CHAR and VARCHAR Types.
About NoSQL Databases What is NoSQL? NoSQL encompasses a wide variety of different database technologies that were developed in response to a rise in the volume of data stored about users, objects and products, the frequency with which this data is accessed, and performance and processing needs. Relational databases, on the other hand, were not designed to cope with the scale and agility challenges that face modern applications, nor were they built to take advantage of the cheap storage and processing power available today. The Benefits of NoSQL When compared to relational databases, NoSQL databases are more scalable and provide superior performance, and their data model addresses several issues that the relational model is not designed to address: Large volumes of structured, semi-structured, and unstructured data Agile sprints, quick iteration, and frequent code pushes Object-oriented programming that is easy to use and flexible Efficient, scale-out architecture instead of expensive, monolithic architecture There are three types of visibility available in PHP for controlling access to your properties and methods. public scope makes a variable or function available from anywhere: other classes, and instances of the object. Public Visibility in PHP Classes Public is the least restricted visibility available in PHP. If you do not declare a visibility for your method or property, public is applied by default. Public methods or variables are accessible from anywhere. For example, they can be accessed through an object (outside the class), inside the class, or in a child class. The following is an example of public visibility in PHP classes: class test { public $abc; public $xyz; public function xyz() {} } $objA = new test(); echo $objA->abc; //accessible from outside $objA->xyz(); //public method of the class test In the example above, class test is a very basic class in which everything is open.
The minimum restriction is that its properties and methods are accessed through an object outside the class. private scope is used when you want your variable or function to be visible in its own class only. Private Visibility in PHP Classes Private methods or properties are only accessible within the class. You cannot access a private variable or function of the class through an object outside the class, but you can use a private function or property inside the class via $this. Private visibility in PHP classes is used when you do not want your property or function to be exposed outside the class. The following is an example of private visibility in PHP classes. class test { public $abc; private $xyz; public function pubDo($a) { echo $a; } private function privDo($b) { echo $b; } public function pubPrivDo() { $this->xyz = 1; $this->privDo(1); } } $objT = new test(); $objT->abc = 3; //Works fine $objT->xyz = 1; //Throws a fatal visibility error $objT->pubDo("test"); //Prints "test" $objT->privDo(1); //Fatal visibility error $objT->pubPrivDo(); //Within this method the private function privDo and variable xyz are called via $this. protected scope is used when you want your variable or function to be visible in all classes that extend the current class, including the parent class. Protected Visibility in PHP Classes Protected visibility in PHP classes is only useful in the case of inheritance and interfaces. A protected method or variable is accessible either within the class or in a child class. Here is a very basic example (note that parent is a reserved word in PHP, so the classes are named ParentClass and ChildClass): class ParentClass { protected $pr; public $a; protected function testParent() { echo "this is test"; } } class ChildClass extends ParentClass { public function testChild() { $this->testParent(); //works because ChildClass extends ParentClass } } $objParent = new ParentClass(); $objParent->testParent(); //Throws a visibility error $objChild = new ChildClass(); $objChild->testChild(); //works because testChild calls testParent internally.
If you analyze the section above, you can see that the method testParent() is not accessible through an object of the class, but it is accessible in the child class. Always use the correct visibility in PHP classes to keep your structure healthy. Do not use code like the following; it breaks all the visibility rules of your PHP class. class test { public function method($method) { $this->$method(); } private function abc() { //Do Something } protected function xyz() { //do something } } $objT = new test(); $objT->method('abc'); $objT->method('xyz'); Reference PHP: Classes and Objects. Algorithm Basic knowledge Data structures: List, Binary Tree, Forest, Stack, Queue, Vector / Array List, Hash table. Algorithms: Breadth-first search, Depth-first search, Binary search, Merge Sort, Quick Sort, insertion into and search of a tree. Concepts: Bit manipulation, Singleton Design Pattern, Factory Method Design Pattern, Memory (stack and heap), Recursion, Big-O notation. Difference between POST and GET GET requests a representation of the specified resource. Note that GET should not be used for operations that cause side-effects, such as using it for taking actions in web applications. One reason for this is that GET may be used arbitrarily by robots or crawlers, which should not need to consider the side effects that a request should cause. The server reads the values with the Request.QueryString method; GET is less secure, less efficient, and has less transmission capacity (less than 2 KB). POST submits data to be processed (e.g., from an HTML form) to the identified resource. The data is included in the body of the request. This may result in the creation of a new resource, updates to existing resources, or both. The server reads the values with the Request.Form method; POST is more secure, more efficient, and has more transmission capacity. Cookies and Sessions Sessions are server-side files that contain user information, while Cookies are client-side files that contain user information. Sessions have a unique identifier that maps them to specific users.
This identifier can be passed in the URL or saved into a session cookie. Related parameters can be tuned in the php.ini file. At the end of the interview: do you have any questions for me? How could I have done better in this interview? ### Benchmark HHVM in FastCGI Mode Wiki HipHop Virtual Machine (HHVM) is a process virtual machine based on just-in-time (JIT) compilation, serving as an execution engine for the PHP and Hack programming languages. By using the principle of JIT compilation, executed PHP or Hack code is first transformed into intermediate HipHop bytecode (HHBC), which is then dynamically translated into x86-64 machine code, optimized, and natively executed. This contrasts with PHP's usual interpreted execution, in which the Zend Engine transforms the PHP source code into opcodes as a form of intermediate code, and executes the opcodes directly on the Zend Engine's virtual CPU. Official Website hhvm.com According to their website, HHVM has realized over a 9x increase in web request throughput and over a 5x reduction in memory consumption for Facebook compared with the Zend PHP engine + APC (which is the current way of hosting a large majority of PHP applications). This article will walk through the steps required to install HipHop VM 3.23.2 (rel) on Ubuntu 17.10 x64 (Intel® Core™ i5-5350U CPU @ 1.80GHz, 1GB RAM). Note that HHVM doesn't support any 32-bit operating system, and there are no plans to add 32-bit support. Installing HHVM Installing HHVM is quite straightforward and shouldn't take more than a few minutes. Reference Prebuilt Packages for HHVM.
$ sudo apt-get install software-properties-common apt-transport-https $ sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xB4112585D386EB94 $ sudo add-apt-repository https://dl.hhvm.com/ubuntu $ sudo apt-get update $ sudo apt-get install hhvm To configure a proxy for APT, add these lines to your /etc/apt/apt.conf file: Acquire::http::Proxy "http://yourproxyaddress:proxyport"; Acquire::https::Proxy "http://yourproxyaddress:proxyport"; To confirm that HHVM has been installed, type the following command: $ hhvm --help Using HHVM in FastCGI Mode Starting with version 3.0, HHVM can no longer be used in the server mode. This section will help you configure HHVM in FastCGI mode with the Apache and Nginx servers. HHVM-FastCGI works much the same way as PHP-FPM. HHVM, running in FastCGI mode, is started independently of the web server (Apache, Nginx, etc). It listens on either a TCP socket (conventionally localhost:9000) or a UNIX socket. The web server listens on port 80 or port 443 like it normally would. When a new request comes in, the web server either makes a connection to the application server or reuses one of the previously open connections, and communicates using the FastCGI protocol. Therefore, the web server continues to decode the HTTP protocol and supplies HHVM with information like the path of the file to be executed, request headers, and body. HHVM computes the response and sends it back to the web server using FastCGI again. Finally, the web server is in charge of sending the HTTP response to the client. Running the Server To run the server in FastCGI mode, pass the following parameters to the hhvm runtime: $ hhvm --mode server -d hhvm.server.type=fastcgi -d hhvm.server.port=9000 -c /etc/hhvm/server.ini The server will now accept connections on localhost:9000.
To use a UNIX socket, use the Server.FileSocket option instead: $ hhvm --mode server -d hhvm.server.type=fastcgi -d hhvm.server.file_socket=/var/run/hhvm/sock To turn the server into a daemon, change the value of mode: $ hhvm --mode daemon -d hhvm.server.type=fastcgi -d hhvm.server.file_socket=/var/run/hhvm/sock Note, all the usual options that are accepted by the hhvm runtime can be used in FastCGI mode as well. In particular, -d hhvm.admin_server.port=9001 will create an additional "admin" server listening on port 9001. Making it work with Apache 2.4 The recommended way of integrating with Apache is using mod_proxy and mod_proxy_fcgi. Enable the modules, then in your Apache configuration, add a line like so: ProxyPass / fcgi://127.0.0.1:9000/var/www/html/ # Or if you used a unix socket # ProxyPass / unix://var/run/hhvm/sock|fcgi://127.0.0.1:9000/var/www/html/ This will route all the traffic to the FastCGI server. If you want to route only certain requests (e.g. only those from a subdirectory or ending in .php), you can use ProxyPassMatch, e.g. ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://127.0.0.1:9000/var/www/html/$1 Consult the mod_proxy_fcgi docs for more details on how to use ProxyPass and ProxyPassMatch. Also make sure to set up a DirectoryIndex in your Apache configuration like this: <Directory /var/www/html/> DirectoryIndex index.php </Directory> This will try to access index.php when you send a request to a directory. Get more information from Facebook's HHVM FastCGI documentation. Making it work with Nginx Now nginx needs to be configured to know where your PHP files are and how to forward them to HHVM to execute. The relevant bit of nginx config lives at /etc/nginx/sites-available/default -- by default, it's looking in /usr/share/nginx/html for files to serve, but it doesn't know what to do with PHP. HHVM's included script sudo /usr/share/hhvm/install_fastcgi.sh will configure nginx correctly for stock installs.
The important part is that it adds include hhvm.conf near the top of the nginx config mentioned above -- this will direct nginx to take any file that ends in .hh or .php and send it to HHVM via fastcgi. The default FastCGI configuration from Nginx should work just fine with HHVM-FastCGI. For instance you might want to add the following directives inside one of your location directives: include snippets/fastcgi-php.conf; # nginx version: nginx/1.12.1 (Ubuntu) fastcgi_pass 127.0.0.1:9000; # or if you used a unix socket # fastcgi_pass unix:/var/run/hhvm/sock; The result in your browser should look like this: Benchmark I used the bench.php script by Zend; you can find it here. Benchmark Results: Ubuntu 17.10 x64 (Intel® Core™ i5-5350U CPU @ 1.80GHz, 1GB RAM) PHP 7.2.0-2+ubuntu17.10.1+deb.sury.org+2 (cli) (built: Dec 7 2017 20:15:31) ( NTS ) $ php bench.php simple 0.056 simplecall 0.017 simpleucall 0.039 simpleudcall 0.038 mandel 0.157 mandel2 0.149 ackermann(7) 0.045 ary(50000) 0.004 ary2(50000) 0.003 ary3(2000) 0.073 fibo(30) 0.118 hash1(50000) 0.018 hash2(500) 0.026 heapsort(20000) 0.042 matrix(20) 0.039 nestedloop(12) 0.073 sieve(30) 0.031 strcat(200000) 0.005 ------------------------ Total 0.934 HipHop VM 3.23.2 (rel) $ hhvm bench.php simple 0.047 simplecall 0.059 simpleucall 0.043 simpleudcall 0.040 mandel 0.178 mandel2 0.188 ackermann(7) 0.025 ary(50000) 0.048 ary2(50000) 0.077 ary3(2000) 0.210 fibo(30) 0.046 hash1(50000) 0.045 hash2(500) 0.044 heapsort(20000) 0.159 matrix(20) 0.111 nestedloop(12) 0.069 sieve(30) 0.060 strcat(200000) 0.014 ------------------------ Total 1.462 In this run, PHP 7.2 finished the whole suite about 1.6x faster than HHVM 3.23 (0.934s vs 1.462s total). ### Custom Emoji to WordPress To replace the default smilies of WordPress, insert the following code in the functions.php file of the current theme. // Replace default smilies of WordPress add_filter('smilies_src','fa_smilies_src',1,10); function fa_smilies_src ($img_src, $img, $siteurl){ $img = rtrim($img, "gif"); return get_bloginfo('template_directory') . '/smilies/' . $img .
'png'; } Download the Twitter Emoji Package (download from Google Drive, 21.6 KB), unzip it and place it in the theme path. To deduplicate smilies, add this code: // Deduplicate smilies function fa_get_wpsmiliestrans(){ global $wpsmiliestrans; $output = ''; $wpsmilies = array_unique($wpsmiliestrans); foreach($wpsmilies as $alt => $src_path){ $output .= '<a class="add-smily" data-smilies="' . $alt . '"><img class="wp-smiley" src="' . get_bloginfo('template_directory') . '/smilies/' . rtrim($src_path, "gif") . 'png" /></a>'; } return $output; } Add shortcuts to the article edit page: // Add shortcuts to article edit page add_action('media_buttons_context', 'fa_smilies_custom_button'); function fa_smilies_custom_button($context) { $context .= '<style>.smilies-wrap{background:#fff;border: 1px solid #ccc;box-shadow: 2px 2px 3px rgba(0, 0, 0, 0.24);padding: 10px;position: absolute;top: 4.2em;width: 400px;display:none}.smilies-wrap img{height:24px;width:24px;cursor:pointer;margin-bottom:5px} .is-active.smilies-wrap{display:block}</style><a id="insert-media-button" style="position:relative" class="button insert-smilies add_smilies" title="Add Emoji" data-editor="content" href="javascript:;"> <span class="dashicons dashicons-smiley"></span> Add Emoji </a><div class="smilies-wrap">' . fa_get_wpsmiliestrans() . '</div><script>jQuery(document).ready(function(){jQuery(document).on("click", ".insert-smilies",function() { if(jQuery(".smilies-wrap").hasClass("is-active")){jQuery(".smilies-wrap").removeClass("is-active");}else{jQuery(".smilies-wrap").addClass("is-active");}});jQuery(document).on("click", ".add-smily",function() { send_to_editor(" " + jQuery(this).data("smilies") + " ");jQuery(".smilies-wrap").removeClass("is-active");return false;});});</script>'; return $context; } Don't forget to select Convert emoticons like :-) and :-P to graphics on display in Writing Settings. After that, it should look like this.
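A side note on the smiley source rewrite above: PHP's rtrim() strips a *set* of characters rather than a literal suffix; it behaves here only because the dot in the file name stops it. A sketch of the same gif-to-png mapping with an explicit suffix swap in shell (the file names are just examples):

```shell
# Map each smiley file name from .gif to .png, as the filter above does,
# but with an explicit suffix replacement instead of rtrim()'s character set
for img in icon_smile.gif icon_razz.gif icon_wink.gif; do
  echo "smilies/${img%gif}png"
done
```

The `${img%gif}` expansion removes a trailing `gif` only, so a base name that happens to end in g, i, or f is never touched.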
### Config PostgreSQL and Laravel Wiki PostgreSQL is a powerful, open source object-relational database system. It has more than 15 years of active development and a proven architecture that has earned it a strong reputation for reliability, data integrity, and correctness. It runs on all major operating systems, including Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS X, Solaris, Tru64), and Windows. It is fully ACID compliant, has full support for foreign keys, joins, views, triggers, and stored procedures (in multiple languages). It includes most SQL:2008 data types, including INTEGER, NUMERIC, BOOLEAN, CHAR, VARCHAR, DATE, INTERVAL, and TIMESTAMP. It also supports storage of binary large objects, including pictures, sounds, or video. It has native programming interfaces for C/C++, Java, .Net, Perl, Python, Ruby, Tcl, ODBC, among others, and exceptional documentation. Install PostgreSQL on OS X $ brew install postgresql Now run this command to finish creating the database: $ initdb /usr/local/var/postgres Now run the following commands to start Postgres at login, so that Postgres is already running in the background: $ mkdir -p ~/Library/LaunchAgents $ ln -sfv /usr/local/opt/postgresql/*.plist ~/Library/LaunchAgents $ launchctl load ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist If this is an upgrade and you already have the org.postgresql.postgres.plist loaded: $ launchctl unload -w ~/Library/LaunchAgents/org.postgresql.postgres.plist $ cp /usr/local/Cellar/postgresql/9.0.1/org.postgresql.postgres.plist ~/Library/LaunchAgents $ launchctl load -w ~/Library/LaunchAgents/org.postgresql.postgres.plist Or start manually with: $ pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start And stop with: $ pg_ctl -D /usr/local/var/postgres stop -s -m fast Remove all rows from a table: TRUNCATE TABLE <tablename>; Import a dump/sql file into a PostgreSQL database: $ psql -h <ip_address_of_server> -U <database_username> -d
<name_of_the_database> -f local/path/to/your/file.sql HomeBrew Services First, install brew services by tapping gapple/services (one time). Here's an example usage: $ brew services restart postgresql Stopping `postgresql`... (might take a while) ==> Successfully stopped `postgresql` (label: homebrew.mxcl.postgresql) ==> Successfully started `postgresql` (label: homebrew.mxcl.postgresql) Behind the scenes, brew services start is doing everything in the post-install message above. It runs launchctl load ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist. It Just Works. Now let's see everything we've loaded: $ brew services list Install PDO PostgreSQL $ brew install php55-pdo-pgsql Config Laravel Open the file app/config/database.php, and in the array replace 'default' => 'mysql', with 'default' => 'pgsql', now with the information from before edit the 'connections' array like so: 'connections' => array( 'pgsql' => array( 'driver' => 'pgsql', 'host' => 'localhost', 'database' => 'your_database_name', 'username' => '', 'password' => '', 'charset' => 'utf8', 'prefix' => '', 'schema' => 'public' ), ) After saving the file, you should now have a functioning database that Laravel can talk to. We can use PG Commander or another GUI client to work with PostgreSQL databases. Backup & Restore Database Backup a database $ pg_dump -h 127.0.0.1 -p 5432 -U postgres -Fc -b -v -f db.backup db_name Backup a table $ pg_dump -h 127.0.0.1 -p 5432 -U postgres -Fp -d db_name -t table_name > table.backup Restore a database $ pg_restore -h 127.0.0.1 -p 5432 -U postgres -d db_name -v db.backup Restore a table $ psql -h 127.0.0.1 -p 5432 -U postgres -d db_name < table.backup Copy in Action Extracting a table to a CSV delimited file: \copy (SELECT * FROM table_name) TO '~/file_name.csv' WITH (FORMAT CSV); And for loading data into a table, the equivalent for each of the above: \copy table_name FROM '~/file_name.csv' WITH CSV; Run pg_dump --help to get more helpful information.
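To make backups routine, the pg_dump invocation above can be dropped into cron. A sketch of a crontab entry (the schedule, backup path, and database name are placeholders):

```
# Nightly 03:00 compressed dump, using the pg_dump flags shown above
0 3 * * * pg_dump -h 127.0.0.1 -p 5432 -U postgres -Fc -b -f /var/backups/db_name.backup db_name
```

The custom format (-Fc) keeps the file compressed and restorable with pg_restore, exactly as in the restore commands above.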
Log Analyzer pgBadger is a PostgreSQL log analyzer with fully detailed reports and graphs. Installation $ brew install pgbadger If the log file is in CSV format, we need to install the Text::CSV_XS module: $ perl -MCPAN -e shell install Text::CSV_XS Analyze a single file: $ pgbadger -f csv postgresql-2015-04-06_121457.csv Analyze logs for a given time period: $ pgbadger -b "2015-04-05 12:00:00" -e "2015-04-06 12:00:00" /var/pg_log/* -o /tmp/tmp_pgbadger.html ### Sublime Text Plugin - HTML-CSS-JS Prettify HTML-CSS-JS Prettify is a Sublime Text 2 and 3 plugin allowing you to format your HTML, CSS, JavaScript and JSON code. It uses a set of nice beautifier scripts made by Einar Lielmanis. The formatters are written in JavaScript, so you'll need something (node.js) to interpret JavaScript code outside the browser. This works with HTML, CSS, JavaScript and JSON files. Install Command Shift P → Install Package → HTML-CSS-JS Prettify Solve the error: Node.js was not found in the default path. Please specify the location. Install Node $ brew install node Modify the HTMLPrettify.sublime-settings file: "node_path": { "windows": "C:/Program Files/nodejs/node.exe", "linux": "/usr/bin/nodejs", "osx": "/usr/local/bin/node" }, Edit it like this: "node_path": { "osx": "/usr/local/Cellar/node/0.12.1/bin/node" }, ### Setup Docker on OS X Wiki Docker is an open-source project that automates the deployment of applications inside software containers, by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux. Docker uses the resource isolation features of the Linux kernel such as cgroups and kernel namespaces, and a union-capable filesystem such as aufs and others to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines. Official Website docker.com Difference Between Docker and Vagrant Vagrant and Docker are different beasts.
Docker is a two-part shell/management layer for building and running virtual Linux containers, based on lxc. The great thing about Docker is that it is light-weight (because it relies on shared-kernel Linux containers) and it is distribution agnostic. While the kernel between all instances is shared (but isolated from the host and each other), the user space for different instances can be based on different Linux distributions. Vagrant on the other hand is a wonderful tool for automatically provisioning multiple virtual machines, each with their own configurations managed with puppet and/or chef. For its virtualisation it can use different providers. Originally the default provider was virtualbox, but it now supports many more, including vmware fusion and even amazon-ec2. (From Quora) This topic discussed whether to use Vagrant or Docker.io for creating an isolated environment. You can also reference the official documents. In this post, I record my installation process. Environment Versions OS X El Capitan Version 10.11.4 (15E65) Docker Version 1.10.3 OS X users use Docker Toolbox to install Docker software. Docker Toolbox includes the following Docker tools: the Docker CLI client for running Docker Engine to create images and containers; Docker Machine, so you can run Docker Engine commands from Mac OS X terminals; Docker Compose, for running the docker-compose command; Kitematic, the Docker GUI; the Docker QuickStart shell, preconfigured for a Docker command-line environment; and Oracle VM VirtualBox. Download and install Docker Toolbox.
Launch the Docker Quickstart Terminal after Docker Toolbox has been installed; it will: create a new (or start an existing) Docker Engine host; switch your environment to your new VM; and use the docker client to create, load, and manage containers. Process output bash --login '/Applications/Docker/Docker Quickstart Terminal.app/Contents/Resources/Scripts/start.sh' Last login: Wed Mar 23 15:51:39 on ttys002 MacBook Pro:~ xuri$ bash --login '/Applications/Docker/Docker Quickstart Terminal.app/Contents/Resources/Scripts/start.sh' Running pre-create checks... Creating machine... (default) Copying /Users/xuri/.docker/machine/cache/boot2docker.iso to /Users/xuri/.docker/machine/machines/default/boot2docker.iso... (default) Creating VirtualBox VM... (default) Creating SSH key... (default) Starting the VM... (default) Check network to re-create if needed... (default) Waiting for an IP... Waiting for machine to be running, this may take a few minutes... Detecting operating system of created instance... Waiting for SSH to be available... Detecting the provisioner... Provisioning with boot2docker... Copying certs to the local machine directory... Copying certs to the remote machine... Setting Docker configuration on the remote daemon... Checking connection to Docker... Docker is up and running! To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: /usr/local/bin/docker-machine env default
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o          __/
             \    \        __/
              \____\_______/
docker is configured to use the default machine with IP 192.168.99.100 For help getting started, check out the docs at https://docs.docker.com MacBook Pro:~ xuri$ OS X, Boot2Docker and Container architecture diagram: the docker client on OS X connects (tcp:4243) to the docker daemon running inside a VirtualBox VM on Tiny Core Linux; boot2docker (~/.boot2docker) boots that VM from boot2docker.iso with a boot2docker-vm.vmdk disk, and each container runs as an lxc process alongside the daemon. Some Docker commands Check Docker version and info $ docker version Docker Information $ docker info Images operation Show all top level images, their repository and tags, and their size: $ docker images Search Docker Hub for images: $ docker search <image_name> Pull an image or a repository from the registry: $ docker pull <image_name> Remove one or more images: $ docker rmi <image_name> Save an image(s) to a tar archive (streamed to STDOUT by default): $ docker save <image_name> > <filename.tar> Write to a file, instead of STDOUT: $ docker save -o <filename.tar> <filename> Create an empty filesystem image and import the contents of the tarball (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) into it: $ docker import <file|URL> Loads a tarred repository from a file or the standard input stream.
Restores both images and tags: $ docker load < <filename.tar> Container operation Run a command in a new container: $ docker run <image_name> [command] [arg] Enter the container: $ docker run -i -t <image_name> /bin/bash Enter the container you entered last time: $ docker exec -i -t <container ID> /bin/bash Show just running containers: $ docker ps Show all containers: $ docker ps -a Show the n last created containers (includes all states): $ docker ps -n <n> Commit a container: $ docker commit <container ID> <new_image_name> Related Article Run Docker from Behind Proxy ### Secure Apache Server from DDoS, Slowloris, and DNS Injection Attacks DDoS There is an Apache module that was created to prevent a DDoS attack, although it's probably not installed by default. Follow these steps to install the module. Open the terminal window. Issue the command: $ sudo apt-get -y install libapache2-mod-evasive Run the following command: $ sudo mkdir -p /var/log/apache2/evasive Run the following command: $ sudo chown -R www-data:root /var/log/apache2/evasive Open the /etc/apache2/mods-available/evasive.load file (using sudo and your favorite text editor) and append the following to the bottom of that file (this is one configuration per line): DOSHashTableSize 2048 # Maximum number of requests for the same page DOSPageCount 20 # Total number of requests for any object by the same client IP on the same listener DOSSiteCount 300 # Interval for the page count threshold DOSPageInterval 1.0 # Interval for the site count threshold DOSSiteInterval 1.0 # Time that a client IP will be blocked for DOSBlockingPeriod 10.0 DOSLogDir "/var/log/apache2/evasive" DOSEmailNotify admin@example.com Save the file and restart Apache. You are now better protected from DDoS attacks. Slowloris Slowloris is software written by Robert Hansen that allows one machine to take down another machine's web server using minimal bandwidth. Apache has a module to help prevent such attacks. Here's how to get it working for you.
Run the following command: $ sudo apt-get -y install libapache2-mod-qos After the installation is complete, check the configuration in /etc/apache2/mods-available/qos.conf to make sure it fits your needs. After you tweak the module (if necessary), restart Apache and enjoy a Slowloris-free web server. DNS Injection Spam from web forms is not only prevalent, it's a fast-track method of getting your domain blacklisted by the likes of Spamhaus. To prevent DNS Injection attacks, which are attacks that can inject fake DNS names into your server's cache, you need to add another module to Apache. Follow these steps. Run the following command: $ sudo apt-get -y install libapache2-mod-spamhaus After the installation completes, issue the command: $ sudo touch /etc/spamhaus.wl With the module installed, open the /etc/apache2/conf-available/security.conf file (using sudo and your favorite text editor) and append the following to the bottom of your configuration file: <IfModule mod_spamhaus.c> MS_METHODS POST,PUT,OPTIONS,CONNECT MS_WhiteList /etc/spamhaus.wl MS_CacheSize 256 </IfModule> Save the security.conf file and restart Apache so the new module will take effect. Installing mod_security ModSecurity is a free Web Application Firewall (WAF) that works with Apache, Nginx and IIS. It supports a flexible rule engine to perform simple and complex operations and comes with a Core Rule Set (CRS) which has rules for SQL injection, cross-site scripting, Trojans, bad user agents, session hijacking and a lot of other exploits. For Apache, it is an additional module which makes it easy to install and configure. ModSecurity is available in the Debian/Ubuntu repository: $ sudo apt-get install libapache2-modsecurity Verify that the mod_security module was loaded: $ sudo apachectl -M | grep --color security You should see a module named security2_module (shared), which indicates that the module was loaded.
ModSecurity's installation includes a recommended configuration file which has to be renamed: $ sudo mv /etc/modsecurity/modsecurity.conf{-recommended,} Reload Apache: $ sudo service apache2 reload You'll find a new log file for mod_security in the Apache log directory: $ ls -l /var/log/apache2/modsec_audit.log -rw-r----- 1 root root 0 Oct 19 08:08 /var/log/apache2/modsec_audit.log Configuring mod_security Out of the box, ModSecurity doesn't do anything, as it needs rules to work. The default configuration file is set to DetectionOnly, which logs requests according to rule matches and doesn't block anything. This can be changed by editing the modsecurity.conf file: $ sudo vim /etc/modsecurity/modsecurity.conf Find this line SecRuleEngine DetectionOnly and change it to: SecRuleEngine On If you're trying this out on a production server, change this directive only after testing all your rules. Another directive to modify is SecResponseBodyAccess. This configures whether response bodies are buffered (i.e. read by ModSecurity). This is only necessary if data leakage detection and protection is required. Therefore, leaving it On will use up droplet resources and also increase the logfile size. Find this SecResponseBodyAccess On and change it to: SecResponseBodyAccess Off Now we'll limit the maximum data that can be posted to your web application. Two directives configure this: SecRequestBodyLimit SecRequestBodyNoFilesLimit The SecRequestBodyLimit directive specifies the maximum POST data size. If anything larger is sent by a client, the server will respond with a 413 Request Entity Too Large error. If your web application doesn't have any file uploads, this value can be greatly reduced. The value mentioned in the configuration file is SecRequestBodyLimit 13107200 which is 12.5MB. Similar to this is the SecRequestBodyNoFilesLimit directive. The only difference is that this directive limits the size of POST data minus file uploads; this value should be "as low as practical."
The value in the configuration file is SecRequestBodyNoFilesLimit 131072 which is 128KB. Along the lines of these directives is another one which affects server performance: SecRequestBodyInMemoryLimit. This directive is pretty much self-explanatory; it specifies how much of the "request body" data (POSTed data) should be kept in memory (RAM); anything more will be placed on the hard disk (just like swapping). Since droplets use SSDs, this is not much of an issue; however, this can be set to a decent value if you have RAM to spare. SecRequestBodyInMemoryLimit 131072 This is the value (128KB) specified in the configuration file. Excluding Hosts and Directories Sometimes it makes sense to exclude a particular directory or a domain name if it is running an application like phpMyAdmin, as ModSecurity will block SQL queries. It is also better to exclude the admin backends of CMS applications like WordPress. To disable ModSecurity for a complete VirtualHost, place the following <IfModule security2_module> SecRuleEngine Off </IfModule> inside the <VirtualHost> section. For a particular directory: <Directory "/var/www/wp-admin"> <IfModule security2_module> SecRuleEngine Off </IfModule> </Directory> Further Reading Official ModSecurity documentation: https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual. ### Anti Bad Bots and Crawlers by User Agent As we know, there are many crawlers and spiders, such as Googlebot and Baiduspider. But there are also some crawlers that do not comply with the rules of robots.txt, and these put extra pressure on the server. So we should do something to block these bad spiders. In this post I will show you how to block bad bots and crawlers by User-Agent with Apache or Nginx and PHP. Apache Edit the site conf file as in the following code; after saving the configuration file, reload the Apache service.
<Directory "/var/www"> # Anti crawlers SetEnvIfNoCase User-Agent ".*(^$|FeedDemon|JikeSpider|Indy Library|Alexa Toolbar|AskTbFXTV|AhrefsBot|CrawlDaddy|CoolpadWebkit|Java|Feedly|UniversalFeedParser|ApacheBench|Microsoft URL Control|Swiftbot|ZmEu|oBot|jaunty|Python-urllib|lightDeckReports Bot|YYSpider|DigExt|YisouSpider|HttpClient|MJ12bot|heritrix|EasouSpider|Ezooms)" BADBOT Deny from env=BADBOT Order Allow,Deny Allow from all </Directory> Nginx Create an agent_deny.conf file in the conf directory of Nginx like this: # Forbid crawlers such as Scrapy if ($http_user_agent ~* (Scrapy|Curl|HttpClient)) { return 403; } # Deny specific UAs and empty UA access if ($http_user_agent ~ "FeedDemon|JikeSpider|Indy Library|Alexa Toolbar|AskTbFXTV|AhrefsBot|CrawlDaddy|CoolpadWebkit|Java|Feedly|UniversalFeedParser|ApacheBench|Microsoft URL Control|Swiftbot|ZmEu|oBot|jaunty|Python-urllib|lightDeckReports Bot|YYSpider|DigExt|YisouSpider|HttpClient|MJ12bot|heritrix|EasouSpider|Ezooms|^$" ) { return 403; } # Forbid request methods other than GET|HEAD|POST if ($request_method !~ ^(GET|HEAD|POST)$) { return 403; } Add the following in the location / section of the site; after saving the configuration file, reload the Nginx service. include agent_deny.conf; PHP Put the following code in your index file. <?php // ...
/* |-------------------------------------------------------------------------- | Anti Crawlers |-------------------------------------------------------------------------- | */ if (isset($_SERVER['HTTP_USER_AGENT'])) { $badAgents = array('FeedDemon','BOT/0.1 (BOT for JCE)','CrawlDaddy','Java','Feedly','UniversalFeedParser','ApacheBench','Swiftbot','ZmEu','Indy Library','oBot','jaunty','YandexBot','AhrefsBot','YisouSpider','jikeSpider','MJ12bot','WinHttp','EasouSpider','HttpClient','Microsoft URL Control','YYSpider','Python-urllib','lightDeckReports Bot'); foreach ($badAgents as $badAgent) { // an exact in_array() match would never hit a real UA string, so test for a substring instead if (stripos($_SERVER['HTTP_USER_AGENT'], $badAgent) !== false) { exit('Go away'); } } } else { exit('Go away'); } Test Use the curl command to simulate crawlers. Simulate ApacheBench $ curl -I -A 'ApacheBench' https://xuri.me Simulate an empty USER_AGENT $ curl -I -A '' https://xuri.me Simulate Googlebot $ curl -I -A 'Googlebot' https://xuri.me We can get a list of User-Agents from user-agents.org. ### Setup Hadoop on Ubuntu (Single-Node Cluster) Wiki Apache Hadoop is a set of algorithms (an open-source software framework written in Java) for distributed storage and distributed processing of very large data sets (Big Data) on computer clusters built from commodity hardware. All the modules in Hadoop are designed with a fundamental assumption that hardware failures (of individual machines, or racks of machines) are commonplace and thus should be automatically handled in software by the framework. In this tutorial I will describe the required steps for setting up a pseudo-distributed, single-node Hadoop cluster backed by the Hadoop Distributed File System. Benefits of using Hadoop The architecture of Hadoop allows you to scale your hardware as and when you need to. New nodes can be added incrementally without having to worry about the change in data formats or the handling of applications that sit on the file system.
One of the most important features of Hadoop is that it allows you to save enormous amounts of money by substituting cheap commodity servers for expensive ones. This is possible because Hadoop transfers the responsibility of fault tolerance from the hardware layer to the application layer. Environment Versions Ubuntu 16.04 LTS Xenial Xerus 64-bit Server Edition Hadoop 2.7.2 Single-node Installation The report here will describe the required steps for setting up a single-node Hadoop cluster backed by the Hadoop Distributed File System, running on Ubuntu Linux. Hadoop is a framework written in Java for running applications on large clusters of commodity hardware and incorporates features similar to those of the Google File System (GFS) and of the MapReduce computing paradigm. Hadoop's HDFS is a highly fault-tolerant distributed file system and, like Hadoop in general, designed to be deployed on low-cost hardware. It provides high throughput access to application data and is suitable for applications that have large data sets. Before we start, we will understand the meaning of the following: DataNode A DataNode stores data in the Hadoop File System. A functional file system has more than one DataNode, with the data replicated across them. NameNode The NameNode is the centrepiece of an HDFS file system. It keeps the directory of all files in the file system, and tracks where across the cluster the file data is kept. It does not store the data of these files itself. NodeManager The NodeManager (NM) is YARN's per-node agent, and takes care of the individual compute nodes in a Hadoop cluster. This includes keeping up to date with the ResourceManager (RM), overseeing containers' life-cycle management, monitoring resource usage (memory, CPU) of individual containers, tracking node health, log management, and auxiliary services which may be exploited by different YARN applications.
ResourceManager The ResourceManager (RM) is the master that arbitrates all the available cluster resources and thus helps manage the distributed applications running on the YARN system. It works together with the per-node NodeManagers (NMs) and the per-application ApplicationMasters (AMs). Secondary NameNode The Secondary NameNode's whole purpose is to have a checkpoint in HDFS. It is just a helper node for the NameNode. Install Java Hadoop requires Java to be installed, so let's begin by installing Java and updating the source list: user@ubuntu:~$ sudo apt-get update The OpenJDK project is the default version of Java that is provided from a supported Ubuntu repository. user@ubuntu:~$ sudo apt-get install openjdk-8-jdk These commands will update the package information on your server and then install Java. After executing these commands, execute the following command to verify that Java has been installed: user@ubuntu:~$ java -version If Java has been installed, this should display the version details as illustrated in the following output: openjdk version "1.8.0_01-internal" OpenJDK Runtime Environment (build 1.8.0_01-internal-b15) OpenJDK 64-Bit Server VM (build 25.01-b15, mixed mode) If you already have the Java JDK installed on your system, then you need not run the above command. Adding a dedicated Hadoop system user user@ubuntu:~$ sudo addgroup hadoop_group user@ubuntu:~$ sudo adduser --ingroup hadoop_group hduser1 This will add the user hduser1 and the group hadoop_group to the local machine. Add hduser1 to the sudo group: user@ubuntu:~$ sudo adduser hduser1 sudo Configuring SSH The Hadoop control scripts rely on SSH to perform cluster-wide operations. For example, there is a script for stopping and starting all the daemons in the cluster. To work seamlessly, SSH needs to be set up to allow password-less login for the hadoop user from machines in the cluster. The simplest way to achieve this is to generate a public/private key pair, and it will be shared across the cluster.
Hadoop requires SSH access to manage its nodes, i.e. remote machines plus your local machine. For our single-node setup of Hadoop, we therefore need to configure SSH access to localhost for the hduser1 user we created earlier. We have to generate an SSH key for the hduser1 user. user@ubuntu:~$ su - hduser1 hduser1@ubuntu:~$ ssh-keygen -t rsa -P '' (-P '' here indicates an empty password) You have to enable SSH access to your local machine with this newly created key, which is done by the following command: hduser1@ubuntu:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys The final step is to test the SSH setup by connecting to the local machine with the hduser1 user. This step is also needed to save your local machine's host key fingerprint to the hduser1 user's known_hosts file. hduser1@ubuntu:~$ ssh localhost If the SSH connection fails, we can try the following (optional): Enable debugging with ssh -vvv localhost and investigate the error in detail. Check the SSH server configuration in /etc/ssh/sshd_config. If you made any changes to the SSH server configuration file, you can force a configuration reload with sudo /etc/init.d/ssh reload. Main Installation Start by switching to hduser1 user@ubuntu:~$ su - hduser1 Download and extract the latest version of the Hadoop binary from hadoop.apache.org. I use the current stable version, Hadoop 2.7.2.
$ wget http://ftp.tc.edu.tw/pub/Apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz $ sudo tar -xvzf hadoop-*.tar.gz -C /usr/local/ && sudo mv /usr/local/hadoop-* /usr/local/hadoop Setup Environment Variables for Hadoop Add the following entries to the .bashrc file to set Hadoop-related environment variables: # Set Hadoop-related environment variables export HADOOP_HOME=/usr/local/hadoop # Add Hadoop bin/ directory to PATH export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin Make the change take effect: $ source ~/.bashrc Configuration hadoop-env.sh In the file $HADOOP_HOME/etc/hadoop/hadoop-env.sh, change export JAVA_HOME=${JAVA_HOME} to export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::") $HADOOP_HOME/etc/hadoop/*-site.xml Now we create the directory and set the required ownerships and permissions: hduser1@ubuntu:~$ sudo mkdir -p /app/hadoop/tmp hduser1@ubuntu:~$ sudo chown hduser1:hadoop_group /app/hadoop/tmp hduser1@ubuntu:~$ sudo chmod 750 /app/hadoop/tmp The last line gives reading and writing permissions to the /app/hadoop/tmp directory. Error: If you forget to set the required ownerships and permissions, you will see a java.io.IOException when you try to format the name node. Paste the following between <configuration></configuration> In file $HADOOP_HOME/etc/hadoop/core-site.xml <property> <name>hadoop.tmp.dir</name> <value>/app/hadoop/tmp</value> <description>A base for other temporary directories.</description> </property> <property> <name>fs.default.name</name> <value>hdfs://localhost:54310</value> <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc.
for a filesystem.</description> </property> Create mapred-site.xml from its template: hduser1@ubuntu:~$ sudo cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template $HADOOP_HOME/etc/hadoop/mapred-site.xml && sudo vim $HADOOP_HOME/etc/hadoop/mapred-site.xml In file $HADOOP_HOME/etc/hadoop/mapred-site.xml <property> <name>mapred.job.tracker</name> <value>localhost:54311</value> <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. </description> </property> In file $HADOOP_HOME/etc/hadoop/hdfs-site.xml <property> <name>dfs.replication</name> <value>1</value> <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time. </description> </property> Formatting the HDFS filesystem via the NameNode To format the filesystem (which simply initializes the directory specified by the dfs.name.dir variable), run the command hduser1@ubuntu:~$ hdfs namenode -format Starting the single-node cluster Run the command hduser1@ubuntu:~$ start-dfs.sh && start-yarn.sh This will start up a NameNode, DataNode, ResourceManager and a NodeManager on the machine. Verify this by typing in the following command: hduser1@ubuntu:~$ jps 4704 NameNode 5573 Jps 5015 SecondaryNameNode 5160 ResourceManager 5273 NodeManager 4842 DataNode If you see output similar to the above, you now have a functional instance of Hadoop running on your server.
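The jps check above can be scripted as a quick sanity test that greps the output for each expected daemon. A sketch using the captured sample from this post, so it runs without a live cluster; on a real node you would use the output of jps instead:

```shell
#!/bin/sh
# Check jps-style output for the daemons a single-node setup should run.
# The sample below is captured output; on a real node use: sample=$(jps)
sample='4704 NameNode
4842 DataNode
5015 SecondaryNameNode
5160 ResourceManager
5273 NodeManager'
missing=0
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  echo "$sample" | grep -qw "$d" || { echo "missing: $d"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all daemons running"
```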
Stop Hadoop We run stop-dfs.sh && stop-yarn.sh (or stop-all.sh) to stop all the daemons running on our machine: hduser1@ubuntu:~$ stop-dfs.sh && stop-yarn.sh Hadoop Web Interfaces http://localhost:50070 - web UI of the NameNode daemon http://localhost:50090 - web UI of the SecondaryNameNode http://localhost:50075 - web UI of the DataNode daemon Note The masters and slaves files should contain localhost. In /etc/hosts, the IP of the system should be given the alias localhost. Set the Java home path in hadoop-env.sh as well as in .bashrc. Related Article Setup Hadoop on Ubuntu (Multi-Node Cluster) ### Enabling SPDY and HSTS on Apache Wiki SPDY SPDY is an open networking protocol developed primarily at Google for transporting web content. SPDY manipulates HTTP traffic, with particular goals of reducing web page load latency and improving web security. SPDY achieves reduced latency through compression, multiplexing, and prioritization, although this depends on a combination of network and website deployment conditions. The name "SPDY" is a trademark of Google and is not an acronym. mod_spdy is a SPDY module for Apache 2.x that allows your web server to take advantage of SPDY features like stream multiplexing and header compression. mod_spdy is open source; you can download Debian and RPM packages or compile it from source. HSTS HTTP Strict Transport Security (HSTS) is a web security policy mechanism which is necessary to protect secure HTTPS websites against downgrade attacks, and which greatly simplifies protection against cookie hijacking. It allows web servers to declare that web browsers (or other complying user agents) should only interact with them using secure HTTPS connections, and never via the insecure HTTP protocol. HSTS is an IETF standards track protocol and is specified in RFC 6797.
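The Strict-Transport-Security header used later in this post carries max-age=15552000; that value is simply 180 days expressed in seconds, which a one-liner confirms:

```shell
#!/bin/sh
# 180 days in seconds -- the max-age value used in the HSTS examples below.
max_age=$((180 * 24 * 60 * 60))
echo "$max_age"    # 15552000
printf 'Header add Strict-Transport-Security: "max-age=%s; includeSubDomains; preload"\n' "$max_age"
```

Computing the value rather than hard-coding it makes it easy to adjust the policy window in a deploy script.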
External Links Chromium SPDY www.chromium.org/spdy/spdy-whitepaper Apache SPDY module code.google.com/p/mod-spdy SPDY Google Developers developers.google.com/speed/spdy SPDY runs over HTTPS, so we need an HTTPS-enabled web site to test SPDY. Please note that SPDY will fall back to HTTPS if the user's browser doesn't support SPDY or if things go wrong, so installing mod_spdy doesn't hurt your existing setup. I'm assuming that you have a working LAMP setup, as described in Installation LAMP (Apache+MySQL+PHP) on Ubuntu and Set Up Apache with a Free Signed SSL Certificate. This tutorial explains how to use mod_spdy with Apache 2.4.12 on Ubuntu 14.04 LTS. Installing mod_spdy Google provides Debian/Ubuntu packages for mod_spdy at https://developers.google.com/speed/spdy/mod_spdy, but they do not yet support Ubuntu 14.04 or Apache 2.4, so I use the OpenSSL 1.0.1h and Apache 2.4.10 port of mod-spdy. $ cd /tmp $ sudo apt-get -y install git g++ libapr1-dev libaprutil1-dev curl patch binutils make devscripts $ git clone https://github.com/eousphoros/mod-spdy.git $ cd mod-spdy/src $ ./build_modssl_with_npn.sh $ chmod +x ./build/gyp_chromium $ make BUILDTYPE=Release $ sudo service apache2 stop $ cd /usr/lib/apache2/modules $ mv mod_ssl.so mod_ssl.so.bak $ cd /tmp/mod-spdy/src $ sudo cp mod_ssl.so /usr/lib/apache2/modules $ sudo service apache2 start $ sudo a2enmod ssl $ sudo service apache2 restart $ sudo cp out/Release/libmod_spdy.so /usr/lib/apache2/modules/mod_spdy.so $ echo "LoadModule spdy_module /usr/lib/apache2/modules/mod_spdy.so" | sudo tee /etc/apache2/mods-available/spdy.load $ echo "SpdyEnabled on" | sudo tee /etc/apache2/mods-available/spdy.conf $ sudo a2enmod spdy $ sudo service apache2 restart Enforce HTTPS with Strict Transport Security (HSTS) For Apache, you only need to load the headers module: # load the mod_headers.so module if it is not loaded already LoadModule headers_module modules/mod_headers.so or use this command $ sudo a2enmod headers then add the following line to
your HTTPS virtual host config <VirtualHost *:443> # ... # Strict-Transport-Security: enforce HTTPS connections for 180 days Header add Strict-Transport-Security: "max-age=15552000; includeSubDomains; preload" # ... </VirtualHost> For nginx, do # Strict-Transport-Security: enforce HTTPS connections for 180 days add_header Strict-Transport-Security "max-age=15552000; includeSubDomains; preload"; Redirect HTTP to HTTPS For Apache, add the following lines to the HTTP virtual host config <VirtualHost *:80> # ... # Force HTTPS <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{SERVER_PORT} 80 RewriteCond %{HTTPS} !=on RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L] </IfModule> # ... </VirtualHost> or configure it in .htaccess <IfModule mod_rewrite.c> RewriteEngine On RewriteCond %{SERVER_PORT} !^443$ RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R=301,L] </IfModule> Now let's test if SPDY is working. We need a browser with SPDY support. SPDYCheck.org can check whether a website properly supports the SPDY protocol, the super fast HTTP replacement, and troubleshoot any problems with the configuration. Alternatively, use Google Chrome and reload the SSL web site; it is important that you reload it so that it can use SPDY (the first time you loaded it, it used normal HTTPS). Afterwards, open a new tab and type in the URL chrome://net-internals/#spdy; if everything went well, your SSL vhost should now be listed in the table, which means SPDY support is working. See more Enabling SPDY and HSTS on Nginx ### Run Definition of Arbitrary Scripts with Vagrant Commands What and why Vagrant Vagrant is computer software for creating and configuring virtual development environments. It can be seen as a wrapper around virtualization software such as VirtualBox, KVM and VMware, and around configuration management software such as Ansible, Chef, Salt or Puppet.
Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team. Vagrant Official Website www.vagrantup.com After installing Vagrant and VirtualBox, we can create a Vagrant development environment such as Ubuntu 14.04 LTS 64-bit with the command vagrant init ubuntu/trusty64, which will generate a Vagrantfile in the current directory. If you want to move VirtualBox and Vagrant to another directory, first move ~/.vagrant.d to the other directory: $ sudo mv ~/.vagrant.d /Volumes/SAMSUNG/.vagrant.d Set VAGRANT_HOME to the new path in ~/.bash_profile: $ export VAGRANT_HOME="/Volumes/SAMSUNG/.vagrant.d" Open VirtualBox, set its Default Machine Folder in Preferences to the new path, then quit VirtualBox and move the VirtualBox VMs folder to the new directory. Reopen VirtualBox; you will see the VMs listed as inaccessible. Remove them from the list. For each VM in your VirtualBox VMs folder on the external drive, browse to its folder in Finder and double-click the .vbox file to restore it to the VirtualBox Manager. Finally, move any existing Vagrant directories you've made with vagrant init (these are the directories with a Vagrantfile in each) to the external drive. Since these directories only store metadata you could leave them on your main drive, but it's nice to keep everything together so you could fairly easily plug the whole drive into another machine and start your VMs from there. The Vagrantfile The primary configuration location for any Vagrant development environment is a file called Vagrantfile which you need to place in your project's folder. The configuration syntax of this Vagrantfile is Ruby, but you do not need to be a Ruby programmer or have any knowledge of the programming language to write this configuration file. You'll mostly do basic variable assignment in the configuration.
Every configuration option you will need you can place inside this file. About provisioning The primary purpose of Vagrant is to have a base virtual machine and to give you the framework for creating automatic software installations and configurations in the virtual machine. By letting Vagrant handle the provisioning of software, it also gives you the flexibility in configuration and, more importantly, makes this process repeatable and automatic. Vagrant doesn't care how you provision the virtual machine; it offers multiple options ranging from basic shell scripts to software automation managers such as Puppet, Chef or Ansible. You can even configure it to use multiple provisioners at the same time. Of course there's always the possibility to vagrant ssh into the base virtual machine and install your required software manually, but that defeats the purpose of Vagrant and all the flexibility it offers when provisioning a box. Shell Script Provisioning The easiest way to provision a base box is to use basic shell script commands which then run inside the virtual machine. This also removes the need for learning or installing Puppet, Ansible and similar tools – even though they can sometimes be more effective provisioners. We'll deal with them in future articles. We need to define the provisioning type, which in our case is called shell. Let's write that inside this block in the configuration file config.vm.provision "shell" do |s| s.path = "provision/setup.sh" end Vagrant has two types of shell provisioning, inline and external. With inline you can write shell commands in the Vagrantfile itself, but let's focus on external provisioning, which simply means to load and run a shell script from a file (relative to Vagrantfile) or even from a URL. In our case we want to load the provision/setup.sh file; let's create it and write the following in this file #!/bin/bash echo "Provisioning virtual machine..." Now, run vagrant up and it will output Provisioning virtual machine...
on the screen. Note that Vagrant will provision the virtual machine only once, on the first run; any subsequent provisioning must be executed with the --provision flag, either vagrant up --provision or vagrant reload --provision. The provisioning will also re-run if you destroy the VM and rebuild it with vagrant destroy and vagrant up. Update and Upgrade echo "Update and Upgrade..." sudo apt-get update && sudo apt-get upgrade Vagrantfile Vagrant.configure(2) do |config| # Specifying the base box config.vm.box = "ubuntu/trusty64" # Shell provisioning config.vm.provision "shell" do |s| s.path = "provision/setup.sh" end end provision/setup.sh #!/bin/bash echo "Provisioning virtual machine..." echo "Update and Upgrade..." sudo apt-get update && sudo apt-get upgrade Run vagrant up to set up the virtual machine. This article references Moving VirtualBox and Vagrant to an external drive and Vagrantfile Explained: Setting Up and Provisioning with Shell. ### Ajax Refresh an Image in Mozilla Firefox I use Ajax to reload a captcha image. $('.load_captcha').click(function(){ var formData = { // CSRF token _token : csrf_token }; $.ajax({ // The URL where we want to POST url : captcha_url, // Define the type of HTTP verb we want to use (POST for our form) type : "POST", data : formData }).done(function(data) { // Here we will handle errors and validation messages if (data.success) { // Handle errors $('.captcha_img').replaceWith(data.captcha); } else { // ... } }); }); /** * View: postCaptcha * @return Response */ public function postCaptcha() { return Response::json( array( 'success' => true, 'captcha' => HTML::image(URL::to('simplecaptcha'), 'Captcha', array('class' => 'captcha_img')) ) ); } It works well in Safari, Chrome and IE, but in Firefox it doesn't. The fix is to add a dynamic parameter to the image file name, such as a timestamp.
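The idea in isolation: append an ever-changing query string so Firefox cannot serve the image from its cache. A shell sketch of the URL construction, with an illustrative base URL standing in for the app's captcha route:

```shell
#!/bin/sh
# Build a cache-busted captcha URL by appending a timestamp query string.
# The base URL is a placeholder, not the real application route.
base="https://example.com/simplecaptcha"
busted="${base}?$(date +%s)"
echo "$busted"
```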
/** * View: postCaptcha * @return Response */ public function postCaptcha() { return Response::json( array( 'success' => true, 'captcha' => HTML::image(URL::to('simplecaptcha' . '?' . time()), 'Captcha', array('class' => 'captcha_img')) ) ); } ### Adding a Queue Worker in Laravel Queues in Laravel are no different from other queue services; they follow a simple, common pattern: there is a place to store queue information, one PHP process writes tasks at runtime, and another daemon PHP process polls the queue, executes the tasks that are due, and removes them. Because PHP is a URL-driven, synchronous (blocking) language, Laravel provides a daemon tool to poll and execute queued jobs. The queue configuration file is stored in app/config/queue.php. In this file you will find connection configurations for each of the queue drivers that are included with the framework, which include Beanstalkd, IronMQ, Amazon SQS, Redis, and a synchronous (for local use) driver. We can find more information about this on the Laravel official website. A queue job uses a dedicated class; as an independent class, it needn't inherit from any other, because the PHP daemon invokes the queued task on its own, so reusing a class you have used elsewhere also works. Using the queue is easy; here is an example use Queue; Queue::push('CurlJsonQueue', [ 'url' => $url, 'json' => $json ]); Here I put my queue class CurlJsonQueue.php in the app/services/ folder; this folder is already registered in the classmap autoload section of composer.json at the top-level namespace, so I can call it directly.
<?php class CurlJsonQueue extends BaseController { public function fire($job, $data) { $url = $data['url']; $json = $data['json']; parent::base_post_curl($url, $json); $job->delete(); } } The default method of this class is fire, with parameters $job and $data. Now that we have created a custom queued task, we can use the following command to start the daemon: $ php artisan queue:listen For more about queue configuration, see: Production-Ready Beanstalkd with Laravel Queues ### Upgrade PHP on Ubuntu via PPA If you want to take advantage of the new awesomeness, you can do so very easily thanks to a PPA. If you've never added a PPA before, you will want to make sure that you have python-software-properties installed so you have the command available. To install it, you just need to run: $ sudo apt-get update && sudo apt-get install python-software-properties Note: as tested on 13.10 (and as the comments indicated), this may also be necessary: $ sudo apt-get install software-properties-common Once that's installed, we can add the new PPA: $ sudo add-apt-repository ppa:ondrej/php5-5.6 After the PPA is added you will need to update and upgrade. This will upgrade your current version of PHP to 5.6 along with any modules you have installed: $ sudo apt-get update && sudo apt-get upgrade If you don't happen to have PHP installed already, you can do so now and you will be installing the latest and greatest: $ sudo apt-get install php5 You can check the currently installed PHP version with this command: $ php -v Once this is done, you will want to restart your web server software and/or fast process manager. ### Website Optimization Related Books Even Faster Web Sites: Performance Best Practices for Web Developers Steve Souders | O'Reilly Media | June 2009 | 256 pages Performance is critical to the success of any web site, and yet today's web applications push browsers to their limits with increasing amounts of rich content and heavy use of Ajax.
In this book, Steve Souders, web performance evangelist at Google and former Chief Performance Yahoo!, provides valuable techniques to help you optimize your site's performance. Souders' previous book, the bestselling High Performance Web Sites, shocked the web development world by revealing that 80% of the time it takes for a web page to load is spent on the client side. In Even Faster Web Sites, Souders and eight expert contributors provide best practices and pragmatic advice for improving your site's performance in three critical categories: JavaScript: Get advice for understanding Ajax performance, writing efficient JavaScript, creating responsive applications, loading scripts without blocking other components, and more. Network: Learn to share resources across multiple domains, reduce image size without loss of quality, and use chunked encoding to render pages faster. Browser: Discover alternatives to iframes, how to simplify CSS selectors, and other techniques. Speed is essential for today's rich media web sites and Web 2.0 applications. With this book, you'll learn how to shave precious seconds off your sites' load times and make them respond even faster. This book contains guest chapters contributed by Dion Almaer, Doug Crockford, Ben Galbraith, Tony Gentilcore, Dylan Schiemann, Stoyan Stefanov, Nicole Sullivan, and Nicholas C. Zakas. ### Laravel Ajax Multi Pagination in a Single Page If you're an existing Laravel user, you know that Laravel makes pagination easy. If you're new, I will show you how. First of all, in your app/routes.php, add in the extra routes. <?php // ... Route::get('items', array( 'as' => 'items', 'uses' => 'ItemController@getIndex' )); // Get items type Route::get('items/ajax/{type}', array( 'as' => 'items.type', 'uses' => 'ItemController@getItemType' ))->where('type', 'first|second|third'); // ... The where constraint matches only first, second or third.
Implement the Ajax method in app/controllers/ItemController.php <?php // ... public function getIndex() { return View::make('items'); } public function getItemType($type) { $items_per_page = Input::get('per_pg', 10); if ($type == 'first') { $items = First::paginate($items_per_page); } else if ($type == 'second') { $items = Second::paginate($items_per_page); } else { $items = Third::paginate($items_per_page); } $view = View::make('item_type')->with('items', $items); return $view; } // ... I don't return JSON on the Ajax call; instead, I return the rendered HTML. But before that, make sure you create model classes First, Second and Third, just as you created ItemController.php just now. Create a new view file that keeps only the particular section of code // in app/views/item_type.php @foreach ($items as $item) <article> <h2>{{ $item->title }}</h2> {{ $item->summary }} </article> @endforeach {{ pagination($items->appends(Input::except('page')), 'layout.paginator') }} // ... The pagination links are also included in the Ajax response, which will make your life easier. Then add the Ajax call at the bottom of the main view file, app/views/items.php. $(function() { // 1. function getPaginationSelectedPage(url) { var chunks = url.split('?'); var baseUrl = chunks[0]; var querystr = chunks[1].split('&'); var pg = 1; for (i in querystr) { var qs = querystr[i].split('='); if (qs[0] == 'page') { pg = qs[1]; break; } } return pg; } // 2.
$('#first').on('click', '.pagination a', function(e) { e.preventDefault(); var pg = getPaginationSelectedPage($(this).attr('href')); $.ajax({ url: '/items/ajax/first', data: { page: pg }, success: function(data) { $('#first').html(data); } }); }); $('#second').on('click', '.pagination a', function(e) { e.preventDefault(); var pg = getPaginationSelectedPage($(this).attr('href')); $.ajax({ url: '/items/ajax/second', data: { page: pg }, success: function(data) { $('#second').html(data); } }); }); $('#third').on('click', '.pagination a', function(e) { e.preventDefault(); var pg = getPaginationSelectedPage($(this).attr('href')); $.ajax({ url: '/items/ajax/third', data: { page: pg }, success: function(data) { $('#third').html(data); } }); }); // 3. $('#first').load('/items/ajax/first?page=1'); $('#second').load('/items/ajax/second?page=1'); $('#third').load('/items/ajax/third?page=1'); }); 1. Look up the selected page number. 2. Create an event listener for the click event on the pagination links. 3. When the page first loads (before any clicks), load the first page of each list. You're done. Related Article Laravel Ajax Pagination with jQuery Custom Pagination in Laravel ### Laravel Ajax Pagination with jQuery Laravel makes pagination very easy. In this post I will show you how to make a single Ajax pagination in Laravel with jQuery. Controller: app/controllers/BlogController.php <?php class BlogController extends Controller { /** * Posts * * @return void */ public function showPosts() { $posts = Post::paginate(5); if (Request::ajax()) { return Response::json(View::make('ajax-posts')->with(compact('posts'))->render()); } return View::make('blog')->with(compact('posts')); } // ...
} View: blog.blade.php <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Laravel Ajax Pagination with jQuery</title> </head> <body> <div id="post-ajax"> @foreach ($posts as $post) <article> <h2>{{ $post->title }}</h2> {{ $post->summary }} </article> @endforeach {{ pagination($posts->appends(Input::except('page')), 'layout.paginator') }} </div> <script src="//code.jquery.com/jquery-2.1.3.min.js"></script> <script> // Ajax pagination $(function() { $('#post-ajax').on('click', '.pagination a', function (e) { getPosts($(this).attr('href').split('page=')[1]); e.preventDefault(); }); }); function getPosts(page) { $.ajax({ url : '?page=' + page, dataType: 'json', }).done(function (data) { $('#post-ajax').html(data); }).fail(function () { alert('Posts could not be loaded.'); }); } </script> </body> </html> View: ajax-posts.blade.php <div id="post-ajax"> @foreach ($posts as $post) <article> <h2>{{ $post->title }}</h2> {{ $post->summary }} </article> @endforeach {{ pagination($posts->appends(Input::except('page')), 'layout.paginator') }} </div> Related Article Custom Pagination in Laravel Laravel Ajax Multi Pagination in a Single Page ### Wiener Philharmoniker Vienna New Year's Concert 2015 The Vienna Philharmonic New Year's Concert takes place under the baton of Zubin Mehta on January 1, 2015, in the Golden Hall of the Musikverein in Vienna. The Philharmonic's Honorary Conductor Zubin Mehta, with whom the orchestra has enjoyed an artistic partnership for over 50 years, conducts the New Year's Concert for the fifth time. Conductor Zubin Mehta Zubin Mehta was born in 1936 in Bombay and received his first musical education under the guidance of his father, Mehli Mehta, a noted concert violinist and the founder of the Bombay Symphony Orchestra. In 1954 he left for Vienna, where he eventually entered the conducting program under Hans Swarowsky at the Akademie für Musik.
Zubin Mehta won the Liverpool International Conducting Competition in 1958 and was also a prize-winner of the summer academy at Tanglewood. By 1961 he had already conducted the Vienna, Berlin and Israel Philharmonic Orchestras and has recently celebrated 50 years of musical collaboration with all three ensembles. Program Franz von Suppé Ouv. Ein Morgen, ein Mittag, ein Abend in Wien Johann Strauss, Jr. Märchen aus dem Orient. Walzer, op. 444 Josef Strauss Wiener Leben. Polka francaise, op. 218 Eduard Strauss Wo man lacht und lebt. Polka schnell, op. 108 Josef Strauss Dorfschwalben aus Österreich. Walzer, op. 164 Johann Strauss, Jr. Vom Donaustrande. Polka schnell, op. 356 -- Pause -- Johann Strauss, Jr. Perpetuum mobile. Musikalischer Scherz, op. 257 Accelerationen. Walzer, op.234 Elektro-magnetische Polka, op. 110 Eduard Strauss Mit Dampf. Polka schnell, op. 70 Johann Strauss, Jr. An der Elbe. Walzer, op. 477 Hans Christian Lumbye Champagner-Galopp, op. 14 Johann Strauss, Jr. Studenten-Polka. Polka francaise, op. 263 Johann Strauss, sen. Freiheits-Marsch, op. 226 Johann Strauss, Jr. Annen-Polka, op. 117 Wein, Weib und Gesang. Walzer, op. 333 Eduard Strauss Mit Chic. Polka schnell, op. 221 Download Magnetic Link magnet:?xt=urn:btih:2ea2debbac03e30d93a6ce3fa8b54c168ef633ea Vienna Philharmonic Orchestra - New Year's Concert 2015 with Zubin Mehta - MPEG2 422HD +DolbyE - UPLINK [SRSL][50.90GB] ******************************************************************************* Vienna Philharmonic Orchestra New Year's Concert 2015 with Zubin Metha LIVE ! 
Satellite UPLINK FULL HD 1.1.2015 1080i 50 MPEG-2 HD 4:2:2 41Mbit/s ******************************************************************************* ------------------------------------------------------------------------------- General Information ------------------------------------------------------------------------------- Type.................: Live Event (Live Concert) ------------------------------------------------------------------------------- Release Notes ------------------------------------------------------------------------------- General Satellite Information: Satellite Sign : Eutelsat 7B (E7B, 3D, Eutelsat 3D) Satellite Position : 7°E Satellite Position 1.1 .2015 : 7.11°E Satellite Name : Eutelsat 7B Norad : 39163 Cospar number : 2012-022A Operator : Eutelsat Communications S.A. Launch date : 14-May-2013 Launch site : Baikonur Cosmodrome Launch vehicle : Proton M Launch mass (kg) : 5470 Dry mass (kg) : N/A Manufacturer : Thales Alenia Space Model (bus) : Spacebus-4000C3 Orbit : Geostationary orbit Expected lifetime : 15 Years Frequency : 11051 SymbolRate : 19200 Polarity : Horizontal Video PID : 512 Audio 1 PID : 4112 Audio 2 PID : 4128 DOLBY-E DVB-S2 : YES FEC : 4/3 Modulation : AUTO System Hardware : Spc TriChea™ Modular System Dish : 2 meters Low Noise Block : Inverto BLACK ULTRA Quad High Gain Capture Device : TeVii s471 DVB-S2 PCIe, x16 Capture OS : Windows Server 2003 R2 SP2, Datacenter x64 Edition Capture Program : Spc TriChea™ Packet Capture 2.0 Format : MPEG-2 4:2:2 HD (1080i50) Format/Info : Moving Picture Experts Group Stream Type : Program Sector Size : 2048 Packets Per Block : 1 Video Buffer Size : 232 KB bit rate total : 41.40 Mbps Video Format : MPEG-2 4:2:2 Format profile : High Level 4.0 Codec ID : MPEG-2 Codec ID/Hint : MPEG-2 Bit rate : 38,900 Mbps Width : 1920 pixels Height : 1080 pixels Display aspect ratio : 16/9 Mode : Interlaced Frame rate : 25 fps Resolution : 32 bits DC Coefficient : 12 bit I-Frames : 15 B-Frames : 2 Color 
Primaries : ITU-R Rec. 624-4 System B,G Transfer : ITU-R Rec. 624-4 System B,G Matrix Coefficients : ITU-R Rec. 624-4 System B,G Colorimetry : 4:2:2 Audio 1: Format : MPEG1 Layer2 Format version : MP2 Format profile : MPG Bit rate mode : Constant Bit rate : 384 Kbps Minimum bit rate : 384 Kbps bit rate per channel : 192 kbps Channel(s) : 2 (DUAL CHANNEL) Sampling rate : 48.0 KHz Resolution : 16 bits Audio 2: Format : Dolby E Format version : AES-3 Format profile : AES-3 PCM Audio 7.1 Bit rate mode : Constant Bit rate : 2448 Kbps Minimum bit rate : 2448 Kbps bit rate per channel (Uncompressed) : 1125 kbps Total bitrate (Uncompressed) : 9000 kbps Channel(s) : 8 Sampling rate : 48.0 KHz Resolution : 24 bits Note : Dolby-E and Linear Audio always use 48kHz Rate ! Note : Dolby-E must be synchronized to the video signal ! Mix Input : 5.0 DolbyE + 2.0 PCM Stereo ### Laravel Schedule Tasks Artisan Commands with Dispatcher Wiki Dispatcher Dispatcher is a Laravel artisan command scheduling tool used to schedule artisan commands within your project so you don't need to touch your crontab when deploying. Cron The software utility Cron is a time-based job scheduler in Unix-like computer operating systems. People who set up and maintain software environments use cron to schedule jobs (commands or shell scripts) to run periodically at fixed times, dates, or intervals. It typically automates system maintenance or administration—though its general-purpose nature makes it useful for things like connecting to the Internet and downloading email at regular intervals. The name cron comes from the Greek word for time, χρόνος chronos. In this post, I will show you how to schedule analytics user information artisan commands with Dispatcher in Laravel. Install Dispatcher Get more information from GitHub Indatus/dispatcher Search Indatus/dispatcher in packagist.org. 
Add "indatus/dispatcher": "2.0.*@dev" to composer.json, add the 'Indatus\Dispatcher\ServiceProvider', line to the providers array in the Laravel app/config/app.php file, then run composer update in a terminal. Add the following to your root crontab (via sudo crontab -e): * * * * * php /path/to/artisan scheduled:run 1>> /dev/null 2>&1 If you want to run the crontab as user www-data, you can edit that user's crontab with su: $ sudo su -c "crontab -e" www-data Validate Installation Run php artisan in a terminal; if you see the following entries, the installation succeeded. scheduled scheduled:make Create a new scheduled artisan command scheduled:run Run scheduled commands scheduled:summary View a summary of all scheduled artisan commands Validate Cron Config $ crontab -l Generating New Scheduled Commands Use php artisan scheduled:make to generate a new scheduled command. $ php artisan scheduled:make analytics This generates analytics.php in the app/commands folder. Register your command in app/start/artisan.php using the following method Artisan::add(new analytics); Use your favorite text editor to edit the generated file like this. <?php use Indatus\Dispatcher\Scheduling\ScheduledCommand; use Indatus\Dispatcher\Scheduling\Schedulable; use Indatus\Dispatcher\Drivers\Cron\Scheduler; use Symfony\Component\Console\Input\InputOption; use Symfony\Component\Console\Input\InputArgument; class analytics extends ScheduledCommand { /** * The console command name. * * @var string */ protected $name = 'command:analytics'; /** * The console command description. * * @var string */ protected $description = 'Automatically analytics.'; /** * Create a new command instance.
* * @return void */ public function __construct() { parent::__construct(); } /** * When a command should run * * @param Scheduler $scheduler * @return \Indatus\Dispatcher\Scheduling\Schedulable */ public function schedule(Schedulable $scheduler) { // every day at 4:10am return $scheduler->daily()->hours(4)->minutes(10); } /** * Execute the console command. * * @return mixed */ public function fire() { // Analytics user information ... } } Usage Run php artisan command:analytics in a terminal to run the analytics command. Since we installed Dispatcher, we can use php artisan scheduled:run to manage scheduled commands. We can also use php artisan scheduled:summary to show the scheduled command list. $ php artisan scheduled:summary +----------------+-------------------+-----------+--------+------+--------------+-------+-------------+--------+ | Environment(s) | Name | Args/Opts | Minute | Hour | Day of Month | Month | Day of Week | Run as | +----------------+-------------------+-----------+--------+------+--------------+-------+-------------+--------+ | * | command:analytics | | 10 | 4 | * | * | * | | +----------------+-------------------+-----------+--------+------+--------------+-------+-------------+--------+ ### Using Dnsmasq Configure Wildcard DNS Record on Mac Wiki Dnsmasq Dnsmasq provides network infrastructure for small networks: DNS, DHCP, router advertisement and network boot. It is designed to be lightweight and have a small footprint, suitable for resource constrained routers and firewalls. It has also been widely used for tethering on smartphones and portable hotspots, and to support virtual networking in virtualisation frameworks. Supported platforms include Linux (with glibc and uclibc), Android, *BSD, and Mac OS X. Dnsmasq is included in most Linux distributions and the ports systems of FreeBSD, OpenBSD and NetBSD. Dnsmasq provides full IPv6 support.
Wildcard DNS record A wildcard DNS record is a record in a DNS zone that will match requests for non-existent domain names. A wildcard DNS record is specified by using a "*" as the leftmost label (part) of a domain name, e.g. *.example.com. The exact rules for when a wild card will match are specified in RFC 1034, but the rules are neither intuitive nor clearly specified. This has resulted in incompatible implementations and unexpected results when they are used. Install brew Run this command in a terminal: $ ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)" Install Dnsmasq via brew $ brew install dnsmasq After installing, create an etc folder under /usr/local/, then copy and rename /usr/local/opt/dnsmasq/dnsmasq.conf.example to /usr/local/etc/dnsmasq.conf: $ sudo cp /usr/local/opt/dnsmasq/dnsmasq.conf.example /usr/local/etc/dnsmasq.conf Create resolv.dnsmasq.conf in the etc folder: $ sudo touch /usr/local/etc/resolv.dnsmasq.conf Use your favorite text editor to edit the resolv.dnsmasq.conf file: $ sudo subl /usr/local/etc/resolv.dnsmasq.conf Add your favorite DNS addresses; you can add more, such as OpenDNS, like this: nameserver 8.8.8.8 nameserver 8.8.4.4 nameserver 42.120.21.30 nameserver 168.95.1.1 Then edit the dnsmasq.conf file and add the following lines: resolv-file=/usr/local/etc/resolv.dnsmasq.conf strict-order no-hosts cache-size=32768 listen-address=127.0.0.1 Note: it is possible to list more than one IP in listen-address, separated by commas, for example listen-address=127.0.0.1,192.168.1.102, where 192.168.1.102 is your computer's IP address on the current network. Other devices on the same LAN can then set their DNS to that IP, so that their queries are resolved through your dnsmasq, i.e. a LAN wildcard DNS record.
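The wildcard behaviour described above (a *.example.com-style record answering the domain itself and any name under it) boils down to suffix matching. Here is a minimal Python sketch of that rule; the function name, records and addresses are made up for illustration, and this is not dnsmasq's actual code:

```python
def wildcard_lookup(hostname, records, upstream=None):
    """Answer hostname from wildcard records, else defer to upstream.

    records maps a domain to an IP; an entry answers the domain itself
    and any name below it, mimicking a '*.example.com'-style record.
    """
    for domain, ip in records.items():
        if hostname == domain or hostname.endswith("." + domain):
            return ip
    return upstream  # not covered by any wildcard record

records = {"example.com": "1.2.3.4"}
print(wildcard_lookup("www.example.com", records))  # 1.2.3.4
print(wildcard_lookup("example.org", records))      # None
```

Note that the match requires a label boundary: badexample.com does not end with ".example.com", so it falls through to the upstream resolver, just as a real wildcard record would not cover it.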
To start Dnsmasq now and have it run automatically at boot, run in a Terminal: $ sudo cp -fv /usr/local/opt/dnsmasq/*.plist /Library/LaunchDaemons $ sudo launchctl load /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist Now you can set your Mac's DNS to 127.0.0.1 to use Dnsmasq. LAN users can also point their DNS at this Mac's IP, provided that IP is included in listen-address. Firewall Configuration If you're using an Ubuntu server, the default firewall configuration tool for Ubuntu is ufw. For anyone wondering how to let a DHCP server work through ufw after enabling it on the same computer that runs the DHCP server ($ sudo ufw enable), all you need to do is: $ sudo ufw allow bootps Also, if you are using dnsmasq like me and want the same computer to work as a DNS forwarder, you will want to run: $ sudo ufw allow domain To check the setup, run the following command in a terminal and verify that the query is answered by your local Dnsmasq: $ dig g.cn Dnsmasq Wildcard DNS Record Now that dnsmasq is installed, let's configure a wildcard DNS record. To add rules, simply append them to dnsmasq.conf. A Dnsmasq wildcard DNS record rule looks like this: address=/google.com/1.1.1.1 This means google.com and every name under it (*.google.com) will be directed to the IP 1.1.1.1. Google Wildcard DNS Record Service Let's add wildcard DNS record rules covering most Google services. First we need to find an available IP, preferably a server in the United States, which will keep most of the services reachable. One Google IP known to be reachable in China is 74.125.224.18. Add the following rules to the /usr/local/etc/dnsmasq.conf file.
address=/google.com/74.125.224.18 address=/googleapis.com/74.125.224.18 address=/googlevideo.com/74.125.224.18 address=/google.com.hk/74.125.224.18 address=/youtube.com/74.125.224.18 address=/ytimg.com/74.125.224.18 address=/ggpht.com/74.125.224.18 address=/googleusercontent.com/74.125.224.18 Restart Dnsmasq Service $ sudo launchctl stop homebrew.mxcl.dnsmasq $ sudo launchctl start homebrew.mxcl.dnsmasq Flush Local DNS Cache OS X 10.10 $ sudo discoveryutil udnsflushcaches OS X 10.9 dscacheutil -flushcache; sudo killall -HUP mDNSResponder OS X 10.7 – 10.8 $ sudo killall -HUP mDNSResponder OS X 10.5 – 10.6 $ sudo dscacheutil -flushcache Microsoft Windows $ ipconfig /flushdns Linux (depending on what you're running) /etc/init.d/named restart /etc/init.d/nscd restart If you're using the Google Chrome browser, visit chrome://net-internals/#dns to clear its cache. Install Dnsmasq on Ubuntu $ sudo apt-get install dnsmasq Configure Dnsmasq $ sudo vim /etc/dnsmasq.conf Uncomment conf-dir=/etc/dnsmasq.d, create a dnsmasq.conf file in the /etc/dnsmasq.d folder, then edit it: $ sudo vim /etc/dnsmasq.d/dnsmasq.conf Start Dnsmasq Service $ sudo service dnsmasq start ### Avoid VMware ESXi Loading module ipmi_si_drv ... Today, when I restarted a failed server, it got stuck at Loading module ipmi_si_drv ..., and the boot took about 30 minutes to complete. First try setting IPMI to Shared in the BIOS, if the option is available. When booting your installation media, press Shift O to display the boot arguments and add noipmiEnabled to them. Remember to do the same once the installation/upgrade is complete.
Manually turn off or remove the module by turning the option VMkernel.Boot.ipmiEnabled off in vSphere or using the commands below: # Do a dry run first: esxcli software vib remove --dry-run --vibname ipmi-ipmi-si-drv # Remove the module: esxcli software vib remove --vibname ipmi-ipmi-si-drv or try the following command in an unsupported shell connection: esxcfg-module -d ipmi_si_drv This disables the module although it still gets loaded. Use the -l argument to see which modules are enabled/loaded and check that your desired one is disabled. This appears to be persistent across a reboot. ### Install Avira Antivirus on Windows Server Wiki Avira Operations GmbH & Co. KG is a German multinational and family-owned antivirus software company that provides IT-security for computers, smartphones, servers and networks – delivered as both software and cloud-based services. Avira's headquarters are located near Lake Constance, in Tettnang, Germany, and the company has additional European offices in Munich, Bucharest, and the Netherlands. Avira also has offices in Japan and China, as well as an R&D facility in the USA's Silicon Valley. With an estimated 9.6% of global market share according to OPSWAT, and over 100 million customers, Avira was considered the sixth largest antivirus vendor in 2012. The company supports the Auerbach Stiftung, a foundation created by the company's founder, Tjark Auerbach. It promotes charitable and social projects as well as the arts, culture and science. Official Website www.avira.com Here are the steps: Download Avira Antivirus from the official website. Download verset.zip (48KB) from Google Drive, and extract the zip file. Run disable-avira-toolbar.reg → OK. Extract the Avira Antivirus setup installer file to a folder and run presetup.exe in it; it will report that it can't be installed on a server, then quit (this step is necessary). Run Verset.exe, choose Microsoft Windows XP compatible, and use setup.exe in the Avira Antivirus setup folder to install. Update, and that's it!
MD5 : bbfe016fab214d7c33843cede8b76741 SHA1 : f10d648c436541fa1d97ed39f9ab91f3e3d673e4 Update : Dec 3rd, 2014 ### Enable Remote Access to MySQL Server on Windows On Linux, we can easily enable remote access to a MySQL server; here is an article talking about that. On Windows, the approach is different. Here are the steps: Open a DOS command prompt on the server. Run the following command from the mysql\bin directory: $ mysql -u root --password= A mysql prompt should be displayed. To create a remote user account with root privileges, run the following command: GRANT ALL PRIVILEGES ON *.* TO 'USERNAME'@'IP' IDENTIFIED BY 'PASSWORD'; USERNAME is the username to be created. IP is the public IP address of the remote connection. PASSWORD is the password to be assigned for this username. (IP can be replaced with % to allow this user to log on from any host or IP.) mysql> FLUSH PRIVILEGES; mysql> exit; ### Save a PNG image from a base64 data string with PHP We can extract the base64 image data from that string, decode it and save it to disk; GD is not needed since the data already is a PNG. <?php // requires php5 define('UPLOAD_DIR', 'images/'); $img = $_POST['img']; $img = str_replace('data:image/png;base64,', '', $img); $img = str_replace(' ', '+', $img); $data = base64_decode($img); $file = UPLOAD_DIR . uniqid() . '.png'; $success = file_put_contents($file, $data); print $success ? $file : 'Unable to save the file.'; ?> ### Test a website for Retina without an actual Retina Display There is a way to simulate a Retina display to test a website for HiDPI displays such as Retina: we can use Firefox. Go to about:config Find layout.css.devPixelsPerPx Change it to your desired ratio (1 for normal, 2 for Retina, etc.; -1 seems to be the default) Refresh your page - boom, your media query has now kicked in! Hats off to Firefox for being awesome for web developing! Heads up: not only will the website now be boosted to twice the size, the Firefox UI will also be doubled.
It also kicks in media queries. This doubling or zooming is necessary, as that's the only way you'll be able to examine all the pixels on a standard pixel ratio screen. This works fine on Windows and OS X with Firefox 27.0.1. If you're not using media queries and other more advanced logic (i.e. you're feeding everyone the HiDPI images), you can just zoom in with your browser to 200%. The Chrome emulation is a helpful tool as well, as it kicks in media queries, but because it prevents zooming, you can't examine image quality. I used this tutorial to make my website Retina ready: How to Create Retina Graphics for your Web Designs. ### Fix ERROR InnoDB: The InnoDB memory heap is disabled Wiki InnoDB is a storage engine for MySQL. MySQL 5.5 and later use it by default. It provides the standard ACID-compliant transaction features, along with foreign key support (Declarative Referential Integrity). It is included as standard in most binaries distributed by MySQL AB, the exception being some OEM versions. InnoDB became a product of Oracle Corporation after its acquisition of Innobase Oy in October 2005. The software is dual licensed; it is distributed under the GNU General Public License, but can also be licensed to parties wishing to combine InnoDB in proprietary software. MariaDB and Percona Server use a fork of InnoDB called XtraDB by default. XtraDB is maintained by Percona. Oracle InnoDB's changes are regularly imported into XtraDB, and some bug fixes and extra features are added. InnoDB Architecture Today I got the following error in the MySQL log: InnoDB: The InnoDB memory heap is disabled After adding the entry innodb_use_sys_malloc = 0 to my.cnf and restarting the MySQL server, the error was gone. We can get more info about The InnoDB Storage Engine. ### Solve MySQL Plugin 'FEDERATED' is disabled. Recently, a MySQL server I manage shut down abnormally. I found some information in the error log: Plugin 'FEDERATED' is disabled.
The FEDERATED storage engine is available beginning with MySQL 5.0.3. It is a storage engine that accesses data in tables of remote databases rather than in local tables, enabling data to be accessed from a remote MySQL database on a local server without using replication or cluster technology. When using a FEDERATED table, queries on the local server are automatically executed on the remote (federated) tables. No data is stored in the local tables. We can find more information about the FEDERATED storage engine in the official MySQL documentation. # cat /var/log/mysql/error.log 141104 4:53:28 [Note] Plugin 'FEDERATED' is disabled. 141104 4:53:28 InnoDB: The InnoDB memory heap is disabled 141104 4:53:28 InnoDB: Mutexes and rw_locks use GCC atomic builtins Some time later mysqld stopped; this was also due to the FEDERATED storage engine being disabled. 141105 23:26:16 [Note] /usr/sbin/mysqld: Normal shutdown 141105 23:26:16 [Note] Event Scheduler: Purging the queue. 0 events 141105 23:26:16 InnoDB: Starting shutdown... 141105 23:26:16 InnoDB: Shutdown completed; log sequence number 36647590017 141105 23:26:16 [Warning] Forcing shutdown of 1 plugins 141105 23:26:16 [Note] /usr/sbin/mysqld: Shutdown complete Using the SHOW PLUGINS command, we can confirm that the FEDERATED storage engine is disabled.
mysql> show plugins -> ; +--------------------------+----------+--------------------+---------+---------+ | Name | Status | Type | Library | License | +--------------------------+----------+--------------------+---------+---------+ | binlog | ACTIVE | STORAGE ENGINE | NULL | GPL | | mysql_native_password | ACTIVE | AUTHENTICATION | NULL | GPL | | mysql_old_password | ACTIVE | AUTHENTICATION | NULL | GPL | | MyISAM | ACTIVE | STORAGE ENGINE | NULL | GPL | | MEMORY | ACTIVE | STORAGE ENGINE | NULL | GPL | | CSV | ACTIVE | STORAGE ENGINE | NULL | GPL | | MRG_MYISAM | ACTIVE | STORAGE ENGINE | NULL | GPL | | BLACKHOLE | ACTIVE | STORAGE ENGINE | NULL | GPL | | FEDERATED | DISABLED | STORAGE ENGINE | NULL | GPL | | InnoDB | ACTIVE | STORAGE ENGINE | NULL | GPL | | INNODB_TRX | ACTIVE | INFORMATION SCHEMA | NULL | GPL | | INNODB_LOCKS | ACTIVE | INFORMATION SCHEMA | NULL | GPL | | INNODB_LOCK_WAITS | ACTIVE | INFORMATION SCHEMA | NULL | GPL | | INNODB_CMP | ACTIVE | INFORMATION SCHEMA | NULL | GPL | | INNODB_CMP_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL | | INNODB_CMPMEM | ACTIVE | INFORMATION SCHEMA | NULL | GPL | | INNODB_CMPMEM_RESET | ACTIVE | INFORMATION SCHEMA | NULL | GPL | | INNODB_BUFFER_PAGE | ACTIVE | INFORMATION SCHEMA | NULL | GPL | | INNODB_BUFFER_PAGE_LRU | ACTIVE | INFORMATION SCHEMA | NULL | GPL | | INNODB_BUFFER_POOL_STATS | ACTIVE | INFORMATION SCHEMA | NULL | GPL | | PERFORMANCE_SCHEMA | ACTIVE | STORAGE ENGINE | NULL | GPL | | ARCHIVE | ACTIVE | STORAGE ENGINE | NULL | GPL | | partition | ACTIVE | STORAGE ENGINE | NULL | GPL | +--------------------------+----------+--------------------+---------+---------+ 23 rows in set (0.00 sec) Adding the following setting to the [mysqld] section of /etc/mysql/my.cnf will re-enable the FEDERATED storage engine. federated = 1 After saving my.cnf, restart the MySQL service.
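Since my.cnf is an INI-style file, a quick way to sanity-check that the option landed in the right section is to parse it with Python's configparser. This is just a hypothetical verification sketch (MySQL itself does not read the file this way), using an inline fragment instead of the real file path:

```python
import configparser

# A minimal my.cnf fragment with the fix applied.
MY_CNF = """
[mysqld]
federated = 1
"""

# my.cnf allows bare option names, so enable allow_no_value.
cnf = configparser.ConfigParser(allow_no_value=True)
cnf.read_string(MY_CNF)

# The option must sit under [mysqld], not some other section.
federated_enabled = cnf.get("mysqld", "federated", fallback="0") == "1"
print(federated_enabled)  # True
```

To check an actual server, replace read_string with cnf.read("/etc/mysql/my.cnf").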
### Solve Sublime Text There are no packages available for installation Today, when I clicked on Package Control: Install Package from Command Shift P, I got a "There are no packages available for installation" error. This error is caused by a network problem. First get the IPv4 address of sublime.wbond.net by running ping sublime.wbond.net in a terminal. The IPv4 address is 50.116.34.243. Then edit your system's hosts file, /etc/hosts (OS X/Linux/Unix) or the hosts file under C:\Windows\system32\drivers\etc\ (Windows), and add this line: 50.116.34.243 sublime.wbond.net Now, let's play with Package Control. In addition, I'll share my Sublime Text 2 preferences file. # Preference → Settings - User { "color_scheme": "Packages/Color Scheme - Default/Solarized (Dark).tmTheme", "font_size": 16.0, "ignored_packages": [ "Vintage" ], // Indent 4 spaces per tab "tab_size": 4, "translate_tabs_to_spaces": true, "trim_trailing_white_space_on_save": true, "word_wrap": true } My Key Bindings settings; you can find more information about this in the Sublime Text Unofficial Documentation. # Preference → Key Bindings - User [ { // Custom shortcut key for PhpCodeGen package "keys": ["ctrl+s"], "command": "php_code_gen" } ] Recommended Packages Package Control Package Control is an amazing plugin developed by Will Bond that allows you to easily search for, install, manage, remove, and update other packages. Package Control adds a nice menu to easily manage your packages. This is by far the most important plugin, as it allows you to easily add and manage the other plugins. PhpCodeGen by bteryek PhpCodeGen is a Sublime Text 2 and 3 plugin that generates object-oriented code from a simple shorthand syntax. DocBlockr by spadgos DocBlockr is a Sublime Text 2 plugin that autocompletes your doc blocks; all you need to do is start the doc block with a /** and hit enter to complete it. DocBlockr also includes snippets for adding documentation to the doc block, speeding up and standardizing your doc blocks.
Alignment by Will Bond (wbond) Will Bond comes to the rescue again with a simple but very useful plugin that automates the alignment of special characters in a selection. This is very useful for aligning '=' characters in variable assignments or '=>' characters in array definitions. This eliminates all of the wasted time spent writing clean code and makes you much more productive. Emmet by sergeche Emmet is a plugin for many popular text editors which greatly improves HTML & CSS workflow. ConvertToUTF8 by seanliang With this plugin, you can edit and save files whose encodings are not currently supported by Sublime Text, especially those used by CJK users, such as GB2312, GBK, BIG5, EUC-KR, EUC-JP, etc. ConvertToUTF8 supports both Sublime Text 2 and 3. PHP Syntax Checker by naomichi-y This package is a plugin for Sublime Text 2 and 3 that performs a PHP syntax check when you save a PHP file. Pretty JSON by dzhibas Prettify/Minify/Query JSON plugin for Sublime Text 2 & 3. HTML-CSS-JS Prettify by victorporof HTML, CSS, JavaScript and JSON code formatter for Sublime Text 2 and 3 via node.js. SFTP by Will Bond (wbond) Commercial SFTP/FTP plugin - upload, sync, browse, remote edit, diff and vcs integration. Codecs33 by seanliang CJK library files missing in the embedded Python of Sublime Text 3. GoSublime by DisposaBoy A Go plugin collection for the text editor Sublime Text 2 providing code completion and other IDE-like features. SyncedSideBar by sobstel Sublime Text 2 plugin to sync the project sidebar (folder view) with the currently active file. LaTexTools by SublimeText LaTeX plugin for Sublime Text 2 and 3. ApacheConf.tmLanguage by GreyWyvern and radiosilence Apache Conf syntax highlighting. INI by clintberry INI syntax highlighting for Sublime Text 2. nginx by brandonwamboldt Improved syntax support for Nginx configuration files. Git Config by robballou Sublime Text 2 language file for .gitconfig and .gitignore files.
Terminal by Will Bond (wbond) Launch terminals from the current file or the root project folder. Open Terminal at File: press Ctrl Shift T on Windows and Linux, or Command Shift T on OS X. Open Terminal at Project Folder: press Ctrl Alt Shift T on Windows and Linux, or Command Alt Shift T on OS X. GitGutter by jisaacks A Sublime Text 2/3 plugin to show an icon in the gutter area indicating whether a line has been inserted, modified or deleted. Markdown Preview by revolunet Preview and build your markdown files quickly in your web browser from Sublime Text 2/3. DiffView by CJTozer Side-by-side Git, SVN and Bazaar diff viewer for Sublime Text 3. LogView by FlashSystems Logfile viewer and highlighter for Sublime Text 3. etc ... Related Articles Atom Essential Packages ### Remove and Rebuild Macintosh Recovery HD Partition Remove Recovery HD Partition Open a terminal and run this command: $ diskutil eraseVolume HFS+ Blank /dev/disk1s3 ... you will see ... Started erase on disk1s3 Recovery HD Unmounting disk Erasing Initialized /dev/rdisk1s3 as a 620 MB case-insensitive HFS Plus volume Mounting disk Finished erase on disk1s3 Blank Now open Disk Utility; there is a new Blank partition.
To merge the partitions, run the following command in a terminal: $ diskutil mergePartitions HFS+ SAMSUNG disk1s2 disk1s3 You will see something like this: The chosen disk supports resize; disregarding your new file system type and volume name Merging partitions into a new partition Start partition: disk1s2 SAMSUNG Finish partition: disk1s3 Recovery HD Started partitioning on disk1 Merging partitions Waiting for the disks to reappear Growing disk Finished partitioning on disk1 /dev/disk1 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *500.0 GB disk1 1: EFI EFI 209.7 MB disk1s1 2: Apple_HFS SAMSUNG 499.7 GB disk1s2 Rebuild Recovery HD Partition Download Lion Recovery Update v1.0 Run in Terminal: $ hdiutil attach ~/Downloads/RecoveryHDUpdate.dmg $ pkgutil --expand /Volumes/Mac\ OS\ X\ Lion\ Recovery\ HD\ Update/RecoveryHDUpdate.pkg ~/Desktop/RHD $ hdiutil attach ~/Desktop/RHD/RecoveryHDUpdate.pkg/RecoveryHDMeta.dmg $ ~/Desktop/RHD/RecoveryHDUpdate.pkg/Scripts/Tools/dmtest ensureRecoveryPartition / /Volumes/Recovery\ HD\ Update/BaseSystem.dmg 0 0 /Volumes/Recovery\ HD\ Update/BaseSystem.chunklist After that you will see: Creating recovery partition: finished Now we have rebuilt the Recovery HD partition successfully. /dev/disk0 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *120.0 GB disk0 1: EFI EFI 209.7 MB disk0s1 2: Apple_HFS Macintosh HD 119.2 GB disk0s2 3: Apple_Boot Recovery HD 650.0 MB disk0s3 /dev/disk1 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *500.1 GB disk1 1: EFI EFI 209.7 MB disk1s1 2: Apple_HFS SAMSUNG 499.8 GB disk1s2 ### Automatically Start MySQL Server on OS X Yosemite MySQL doesn't start automatically at startup on OS X Yosemite.
Create /Library/LaunchDaemons/com.mysql.mysql.plist and save it with the following plist <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>KeepAlive</key> <true /> <key>Label</key> <string>com.mysql.mysqld</string> <key>ProgramArguments</key> <array> <string>/usr/local/mysql/bin/mysqld_safe</string> <string>--user=mysql</string> </array> </dict> </plist> Then change the permissions and load the launchd job: $ sudo chown root:wheel /Library/LaunchDaemons/com.mysql.mysql.plist $ sudo chmod 644 /Library/LaunchDaemons/com.mysql.mysql.plist $ sudo launchctl load -w /Library/LaunchDaemons/com.mysql.mysql.plist That's it. ### Get MAMP Working OS X 10.10 Yosemite I upgraded to OS X 10.10 Yosemite today from the App Store, but my Apache, PHP and MySQL didn't work well. So I'm writing this post to note my fix; I hope it may be helpful for you. User Level Root The other web root directory, which is missing by default, is the ~/Sites folder in the user account. This takes a bit longer to set up, but some users are very accustomed to using it. Launch Terminal (Applications → Utilities) and follow the commands below; the first one gets you to the right spot, the second one cracks open the text editor on the command line (swap username with your account's shortname; if you don't know your account shortname, type whoami at the Terminal prompt): $ cd /etc/apache2/users $ sudo nano username.conf Then add the content below, swapping in your username: <Directory "/Users/username/Sites/"> Options FollowSymLinks Indexes MultiViews AllowOverride All Require all granted Allow from all </Directory> Permissions on the file should be: -rw-r--r-- 1 root wheel 274 Oct 17 12:47 username.conf If not, you need to change them...
$ sudo chmod 644 username.conf Open the main httpd.conf and allow some modules: $ sudo nano /etc/apache2/httpd.conf And make sure these 3 modules are uncommented (the first 2 should be on a clean install): LoadModule authz_core_module libexec/apache2/mod_authz_core.so LoadModule authz_host_module libexec/apache2/mod_authz_host.so LoadModule userdir_module libexec/apache2/mod_userdir.so Also uncomment this configuration include in httpd.conf: Include /private/etc/apache2/extra/httpd-userdir.conf Then open another Apache config file and uncomment another file: $ sudo nano /etc/apache2/extra/httpd-userdir.conf And uncomment: Include /private/etc/apache2/users/*.conf Restart Apache for the new file to be read: $ sudo apachectl restart Then this user-level document root will be viewable at: http://localhost/~username/ Upgrade PHP Version via Homebrew $ brew doctor If you want to use Homebrew on OS X, you need to install Xcode 6.1. Get Xcode 6.1 (a direct download link, but you have to verify a developer account to download it); Xcode 6.1 will be available soon in the App Store. After installing, do this in a terminal: $ xcode-select --install Now you can use Homebrew normally :) $ brew update $ brew install php55 After upgrading PHP, open /etc/apache2/httpd.conf and modify #LoadModule php5_module libexec/apache2/libphp5.so ... to ... LoadModule php5_module /usr/local/opt/php55/libexec/apache2/libphp5.so About MySQL MySQL doesn't start automatically at startup; we can use this command: $ sudo /usr/local/mysql/support-files/mysql.server start I hope this bug will be fixed in the next release. Another way to install MySQL Apple has enabled a new default security-oriented feature called System Integrity Protection, often called rootless, in macOS from versions 10.11 onward.
The rootless feature is aimed at preventing macOS compromise by malicious code, whether intentionally or accidentally, and essentially what SIP does is lock down specific system-level locations in the file system while simultaneously preventing certain processes from attaching to system-level processes. While the System Integrity Protection security feature is effective and the vast majority of Mac users should leave rootless enabled, some advanced Mac users may find rootless to be overly protective. Thus, if you're in the group of advanced Mac users who do not want SIP rootless enabled on their macOS installation, we'll show you how to turn this security feature off. Checking the Status of Rootless / System Integrity Protection in macOS $ csrutil status To enable or disable System Integrity Protection, you must boot to the Recovery OS and run the csrutil command from the Terminal. Boot to the Recovery OS by restarting your machine and holding down the Command R keys at startup Launch Terminal from the Utilities menu Enter the following command $ csrutil disable After enabling or disabling System Integrity Protection on a machine, a reboot is required. Install MySQL with Homebrew $ brew install mysql Start MySQL $ brew services start mysql Recommended MySQL Client Sequel Pro is a fast, easy-to-use Mac database management application for working with MySQL & MariaDB databases. ### Solve MySQL: Plugin 'InnoDB' registration as a STORAGE ENGINE failed. Problem Initially I started MySQL with an incorrect configuration and ran into errors in the error log. 141005 12:00:00 [ERROR] Plugin 'InnoDB' init function returned error. 141005 12:00:00 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
141005 12:00:00 [ERROR] Unknown/unsupported storage engine: INNODB 141005 12:00:00 [ERROR] Aborting 141005 12:00:00 [Note] /usr/sbin/mysqld: Shutdown complete 141005 12:00:00 mysqld_safe mysqld from pid file /local/mysql/data/mysqld.pid ended Solution Go to the data directory and remove the log files that look like ib_logfile0, ib_logfile1 and ib_logfile2, then start MySQL; this fixed the issue. $ sudo service mysql stop $ sudo rm /var/lib/mysql/ib_logfile* $ sudo service mysql start # Starting MySQL ... SUCCESS! ### Install Memcached for PHP on OS X Wiki Memcached is a general-purpose distributed memory caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read. Memcached is free and open-source software, subject to the terms of the Revised BSD license. Memcached runs on Unix-like (at least Linux and OS X) and on Microsoft Windows. To build it use Autoconf, Automake, GNU build system and libevent. Memcached's APIs provide a giant hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in least recently used (LRU) order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database. The size of this hash table is often giant. It is limited by available memory across a large number of servers in a data centre. Where high volume, wide audience web publishing requires it, this may stretch to many GB. Memcached can be equally valuable for situations where either the number of requests for content is high, or the cost of generating a particular piece of content is high. Memcached was originally developed by Danga Interactive for LiveJournal, but is now used by many other systems, including MocoSpace, YouTube, Reddit, Zynga, Facebook, Orange, Twitter, Tumblr and Wikipedia.
Engine Yard and Jelastic are using Memcached as part of their platform as a service technology stack and Heroku offers several Memcached services as part of their platform as a service. Google App Engine, AppScale, Microsoft Azure and Amazon Web Services also offer a Memcached service through an API. Official Website memcached.org Install via Homebrew $ brew install memcached $ brew install libmemcached $ brew install php55-memcached $ memcached -d # Run memcached as a daemon $ sudo apachectl restart # Restart Apache Service ### Database Sharding Basic Idea and Sharding Strategies This article focuses on the basic idea behind database sharding and on sharding strategies. The Basic Idea The basic idea of sharding is to cut a database into multiple parts placed on different databases (servers), thereby alleviating the performance problems of a single database. Loosely speaking, if a database holds massive data because the number of tables keeps growing, vertical sharding is appropriate: closely related tables (such as those of the same module) are split out onto their own server. If there are not many tables, but each table holds a very large amount of data, horizontal sharding is appropriate: the rows of a table are cut by certain rules (such as hashing by ID) across multiple databases (servers). Of course, in reality these two cases are usually mixed together, and the choice has to be made according to the actual situation; vertical and horizontal sharding may also be combined, cutting the existing database into a matrix-like database (server) array that can be expanded almost without limit. The following describes vertical and horizontal sharding in detail. The greatest feature of vertical sharding is that its rules are simple and its implementation is convenient, especially for systems where the coupling between business modules is very low, interaction is small, and the business logic is very clear.
In such a system, it is easy to spin the tables used by different business modules off to different databases. Split this way, the impact on the application is small, and the splitting rules are relatively simple and clear. (This is the so-called "share nothing".) +----------------------------------------------------------+ | Application Servers | +---+------------------------------------------------+-----+ | | | +------------------------+ | | | Single DB | | | +------------+-----------+ | | | | | | | | Vertical | Sharding | | | | v <-----------------------+--------------------> v +---------+ +----------------+ +--------------+ +----------+ | Users | | Group Messages | | Photo Albums | | Events | +---------+ +----------------+ +--------------+ +----------+ Horizontal sharding is, relatively speaking, a little more complex than vertical sharding. Because different rows of one table are assigned to different databases, the application has to split data according to the sharding rules rather than simply by table name, and subsequent data maintenance also becomes more complex. +----------------------------------------------------------+ | Application Servers | +-----+-------------------------------------------------+--+ | +----------------+ | | | user id & n=0 | | | +----------------+ | v | user id & n=1 | | +-----------+ +----------------+ | | | Horizontal | user id & n=2 | | | | Sharding +----------------+ | | Single DB +------------------>| user id & n=3 |<----+ | | +----------------+ | | | user id & n=4 | +-----------+ +----------------+ | ...
| +----------------+

Let us consider the general case of data sharding. On the one hand, the tables of a database usually cannot all be partitioned along a single key: almost every horizontal sharding scheme applies only to a small group of closely related tables (in practice, a vertically split block), never to all tables at once. On the other hand, some systems carry such a high load that vertical sharding alone cannot solve the problem: even a single table can exceed what one database host can bear. Therefore most systems combine the two approaches: first shard the system vertically, then selectively shard the hot tables horizontally, cutting the whole database into a distributed matrix.

+--------------------------------------------------------------------+
|                        Application Servers                         |
+---------+------------------------------------------------+---------+
          |                                                |
          |            +------------------------+          |
          |            |        Single DB       |          |
          |            +------------+-----------+          |
          |                         |                      |
          |                Vertical | Sharding             |
          v <-----------------------+--------------------> v
+---------------+ +----------------+ +----------------+ +------------+
|     Users     | | Group Messages | |  Photo Albums  | |   Events   |
+-------+-------+ +------+---------+ +--------+-------+ +------+-----+
        |                |                    |                |
        |                | Horizontal Sharding|                |
        v                v                    v                v
+---------------+ +----------------+ +----------------+ +------------+
| user id % n=0 | | group id % n=0 | | album id % n=0 | |   time<n   |
+---------------+ +----------------+ +----------------+ +------------+
| user id % n=1 | | group id % n=1 | | album id % n=1 | | n<time<2n  |
+---------------+ +----------------+ +----------------+ +------------+
| user id % n=2 | | group id % n=2 | | album id % n=2 | | 2n<time<3n |
+---------------+ +----------------+ +----------------+ +------------+
| user id % n=3 | | group id % n=3 | | album id % n=3 | | 3n<time<4n |
+---------------+ +----------------+ +----------------+ +------------+
|      ...      | |      ...       | |      ...       | |    ...     |
+---------------+ +----------------+ +----------------+ +------------+

Sharding Strategy

As mentioned above, sharding proceeds in two steps: vertical partitioning first, then horizontal partitioning; the vertical split prepares the ground for the horizontal one. The idea of vertical sharding is to analyze the aggregation relationships between tables and keep closely related tables together, in most cases the tables of the same module, or of the same "aggregate", where "aggregate" is used in the domain-driven design sense. Within each vertically split aggregate, identify the "root entity" (the "aggregate root" of domain-driven design) and shard horizontally by it: starting from the aggregate root, all data directly or indirectly associated with it goes into the same shard. The chance of a cross-shard association then becomes very small, and applications do not have to break the associations between tables.

For example: on a social networking site almost all data is eventually associated with a user, so sharding by user is the best choice. In a forum system, the user module and the forum module would have been placed in two different shards by the vertical split; within the forum module, Forum is clearly the aggregate root, so shard horizontally by Forum: all posts and replies of a forum naturally land in the same shard as the forum itself.

As for data shared across shards: if it is a read-only dictionary table, keeping a copy in every shard is a good choice, so those associations need not be broken. Associations between ordinary data across nodes, however, must be broken.
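Horizontal sharding by an aggregate-root id, as in the matrix above, comes down to a one-line routing rule. A minimal sketch in Go (the `shardFor` helper and the shard count are illustrative, not from any particular framework):

```go
package main

import "fmt"

// shardFor maps an aggregate-root id (e.g. a user id) to one of n shards,
// mirroring the "user id % n" rows in the matrix above.
func shardFor(id, n uint64) uint64 {
	return id % n
}

func main() {
	const shards = 4
	for _, userID := range []uint64{7, 8, 42} {
		// All rows belonging to this user (posts, photos, ...) land on the same shard.
		fmt.Printf("user %d -> shard %d\n", userID, shardFor(userID, shards))
	}
}
```

Because every row of the aggregate hashes by the same root id, a user's posts and photos always land together, which is exactly why cross-shard joins become rare.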
Of particular note: when vertical and horizontal sharding are combined, the sharding strategy changes in subtle ways. When only vertical sharding is considered, tables with arbitrary relationships can be kept together, so tables can be partitioned by "functional module". Once horizontal sharding is introduced, the relationships between tables become severely restricted: usually only one primary table (the table whose ID is hashed) and a number of its child tables can preserve their associations. That is, when the two are combined, the vertical split can no longer follow "functional modules"; it needs a finer granularity, and that granularity coincides with the "aggregate" of domain-driven design, arguably exactly the same: the primary table of each shard is the aggregate root of an aggregate!

Sharded this way, the database ends up cut into many small pieces (more shards, but not many tables per shard). To avoid managing an excessive number of data sources, and to make full use of each database server's resources, shards with similar business and similar data growth rates (main tables on the same order of magnitude) can be placed in the same data source. Each shard remains independent, with its own primary table, and each hashes by its own primary table's ID; all they share is the modulus (i.e. the number of nodes).

Transaction Issues There are two feasible solutions to the transaction problem: distributed transactions, or transactions controlled jointly by the application and the database. A brief comparison of the two:

Using Distributed Transactions Advantages: managed by the database, simple and effective.
Disadvantages: high performance cost, which grows with the number of shards.

Controlled jointly by the application and the database Principle: split a distributed transaction that spans multiple databases into several small transactions, each confined to a single database, and let the application coordinate these small transactions. Advantages: better performance. Disadvantages: the application must be designed with transaction control in mind; if you rely on Spring's transaction management, changing over will bring some difficulties.

Cross-node Join issues As soon as the data is sharded, cross-node joins are unavoidable, but good sharding design can reduce how often they occur. The common way to solve them is to query twice: find the ids of the associated data in the result set of the first query, then issue a second request for the associated data using those ids.

Cross-node count, order by, group by and aggregate function problems These form one class of problem, because they all need to be computed over the entire data set, and most proxies do not merge the results automatically. The solution is similar to the cross-node join: run the query separately on each node and merge the final results in the application. Unlike a join, each node's query can be executed in parallel, so this is often much faster than a single large table. If the result set is large, however, the application's memory consumption becomes a problem.

### Configure Git and Github Wiki Git Git is a distributed revision control and source code management (SCM) system with an emphasis on speed, data integrity, and support for distributed, non-linear workflows. Git was initially designed and developed by Linus Torvalds for Linux kernel development in 2005, and has since become the most widely adopted version control system for software development.
As with most other distributed revision control systems, and unlike most client–server systems, every Git working directory is a full-fledged repository with complete history and full version-tracking capabilities, independent of network access or a central server. Like the Linux kernel, Git is free software distributed under the terms of the GNU General Public License version 2.

Official Website git-scm.com

GitHub GitHub is a web-based Git repository hosting service which offers all of the distributed revision control and source code management (SCM) functionality of Git as well as adding its own features. Unlike Git, which is strictly a command-line tool, GitHub provides a web-based graphical interface and desktop as well as mobile integration. It also provides access control and several collaboration features such as wikis, task management, and bug tracking and feature requests for every project.

Website github.com

Now let's start. You can download a Git GUI client for your OS from git-scm.com, or use the git command in a terminal. For Microsoft Windows, install the Git client first.

Generate SSH Keys

$ ssh-keygen -t rsa -C "your_email@youremail.com"
# Creates a new ssh key using the provided email
# Generating public/private rsa key pair.
# Enter file in which to save the key (/home/you/.ssh/id_rsa):

After pressing Enter you will be prompted to enter and confirm a passphrase:

Enter passphrase (empty for no passphrase): [Type a passphrase]
# Enter same passphrase again: [Type passphrase again]

Then you will see output similar to the following, indicating the key was generated successfully:

Your identification has been saved in /home/you/.ssh/id_rsa.
# Your public key has been saved in /home/you/.ssh/id_rsa.pub.
# The key fingerprint is:
# 01:0f:f4:3b:ca:85:d6:17:a1:7d:f0:68:9d:f0:a2:db your_email@youremail.com

Go to the GitHub official website and register an account. Next, add the SSH key to GitHub and verify that the connection to the server succeeds.
Log in to your Github account, enter "Account settings/SSH Keys", and paste in the key you just generated. Then test the SSH connection to GitHub's git server:

$ ssh -T git@github.com
# Attempts to ssh to github

You might see a warning like this; it doesn't matter, type "yes" and press Enter.

The authenticity of host 'github.com (207.97.227.239)' can't be established.
# RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
# Are you sure you want to continue connecting (yes/no)?

Finally, you will see this message on a successful connection:

Hi username! You've successfully authenticated, but GitHub does not
# provide shell access.

$ git config --global user.name "Your Name Here"
# Sets the default name for git to use when you commit
$ git config --global user.email "your_email@youremail.com"
# Sets the default email for git to use when you commit

Simple Guide

Push

Creates a directory for your project called "Hello-World" in your user directory:
$ mkdir ~/Hello-World

Changes the current working directory to your newly created directory:
$ cd ~/Hello-World

Sets up the necessary Git files.
$ git init
# Initialized empty Git repository in /Users/you/Hello-World/.git/

Creates a file called README in your Hello-World directory:
$ touch README

Stages your README file, adding it to the list of files to be committed:
$ git add README

Commits your files, adding the message first commit:
$ git commit -m 'first commit'

Creates a remote named origin pointing at your GitHub repo:
$ git remote add origin https://github.com/username/Hello-World.git

Sends your commits in the master branch to GitHub:
$ git push origin master

Clone

Clones your fork of the repo into the current directory in terminal:
$ git clone https://github.com/username/Spoon-Knife.git

Branch

Create feature_x branch:
$ git checkout -b feature_x

Back to master branch:
$ git checkout master

Delete feature_x branch:
$ git branch -d feature_x

Unless you push the branch to the remote repository, it will not be visible to others:
$ git push origin <branch>

Pull and Merge

Update:
$ git pull

Merge another branch into the master branch:
$ git merge <branch>

In both cases, git tries to merge the changes automatically. Unfortunately, automatic merging does not always succeed and may result in conflicts. In that case you need to edit the conflicting files and resolve the conflicts by hand. When you are done, mark them as merged with:
$ git add <filename>

Before merging changes, you can preview them with:
$ git diff <source_branch> <target_branch>

Fix Some Error

How do you get git to always pull from a specific branch?
$ git config branch.master.remote origin
$ git config branch.master.merge refs/heads/master

Git requires your GitHub account and password on every git push A remote GitHub repository can be accessed in ssh or https mode. The URL formats differ, and so do the authentication methods: with an ssh key saved you never have to enter your account and password again, while over https you do.
So if you do not want to keep entering your account and password, one way is to clone over ssh in the first place; another is to modify the url in the .git/config file of an existing project, as follows:

[remote "origin"]
url = git@github.com:example/project.git
fetch = +refs/heads/*:refs/remotes/origin/*

### Run BrainSocket Laravel Artisan Command in Supervisor

Wiki Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems. It shares some of the same goals of programs like launchd, daemontools, and runit. Unlike some of these programs, it is not meant to be run as a substitute for init as "process id 1". Instead it is meant to be used to control processes related to a project or a customer, and is meant to start like any other program at boot time.

Official Website supervisord.org

If you haven't installed Supervisor yet, install it first.

# Debian / Ubuntu:
$ sudo apt-get install supervisor

Next, we'll configure it. We need to define a process to listen to.

$ sudo vim /etc/supervisor/conf.d/mysocket.conf

Add this to your new conf file, changing file paths and your environment as necessary:

[program:mysocket]
command=php artisan brainsocket:start
directory=/path/to/laravel
stdout_logfile=/path/to/laravel/app/storage/logs/mysocket_brainsocket.log
redirect_stderr=true

We now have a process called "mysocket" which we can tell Supervisord to start and monitor. Let's do that:

$ sudo supervisorctl
> reread         # Tell supervisord to check for new items in /etc/supervisor/conf.d/
> add mysocket   # Add this process to Supervisord
> start mysocket # May say "already started"

Now the "mysocket" process is on and being monitored. If our queue listener fails, Supervisord will restart the php artisan brainsocket:start process.
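If you also want Supervisord to bring the process up at boot and restart it when it crashes, the standard program-section options can be added to the same conf. A sketch (autostart, autorestart and user are standard Supervisord options; the paths and the www-data user are placeholders for your environment):

```ini
[program:mysocket]
command=php artisan brainsocket:start
directory=/path/to/laravel
user=www-data
autostart=true
autorestart=true
stdout_logfile=/path/to/laravel/app/storage/logs/mysocket_brainsocket.log
redirect_stderr=true
```

After editing, run supervisorctl reread and update again so Supervisord picks up the changed section.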
Reference Read Laravel 4 Real Time Chat and Production-Ready Beanstalkd with Laravel Queues

### Laravel 4 Real Time Chat

Wiki Laravel is a free, open source PHP web application framework, designed for the development of MVC web applications. Laravel is released under the MIT license, with its source code hosted on GitHub. It's beautifully written, full of features, and the community is presently exploding. In this post, we will use it to create a socket based, real time chat application.

BrainSocket BrainSocket is a Laravel package that allows you to get up and running with real-time event-driven PHP apps using WebSockets. BrainSocket provides an Artisan command to setup and run a WebSocket server with Ratchet inside of a Laravel app.

Official Website brainsocket.brainboxmedia.ca

ReactPHP Before we can understand Ratchet, we need to understand ReactPHP. ReactPHP was born out of the need to develop event-based, asynchronous PHP applications. If you've worked with Node.JS you'll feel right at home developing applications with ReactPHP, as they share a similar approach to code. We're not going to develop our chat application in ReactPHP, but it's a dependency for Ratchet ... We can learn more about ReactPHP at: reactphp.org.

Ratchet One of the many ways in which real-time client-server applications are made possible is by what's called socket programming. Believe it or not, most of what you do on the internet depends on socket programming. From simple browsing to streaming, your computer opens a socket connection to a server and the server sends data back through it. PHP supports this type of programming, but PHP websites have not typically been developed with this kind of model in mind. PHP developers have preferred the typical request/response model, which is comparatively easier than low-level socket programming. Enter ReactPHP. One of the requirements for building a fully-capable socket programming framework is creating what's called an Event Loop.
ReactPHP has this, and Ratchet uses it, along with the Publish/Subscribe model, to accept and maintain open socket connections. ReactPHP wraps the low-level PHP functions into a nice socket programming API, and Ratchet wraps that API into another API that's even easier to use. We can learn more about Ratchet at: socketo.me.

Install Laravel through Composer. Edit your Laravel project's composer.json file and add brainboxlabs/brain-socket to the require section:

"require": {
    ...
    "cboden/ratchet" : "0.3.*",
    "brainboxlabs/brain-socket" : "dev-master"
},

Once the package and all of its dependencies have been installed, we need to add the BrainSocketServiceProvider to our app/config/app.php file. Add this line:

'providers' => array(
    ...
    'BrainSocket\BrainSocketServiceProvider',

to the end of the providers array in the config file. There is also an optional but recommended Facade you should add to the aliases array in the app/config/app.php file:

'aliases' => array(
    ...
    'BrainSocket' => 'BrainSocket\BrainSocketFacade',

Next, open a terminal and cd into your Laravel project directory. Run php artisan list and confirm you see the brainsocket: command in the list of commands. It should look like this:

Available commands:
brainsocket
brainsocket:start

Once you have confirmed the list, run the following command to start the WebSocket server:

$ php artisan brainsocket:start

Note: The websocket server runs on port 8080 by default. You can change this with the optional --port=port_number on the end of the artisan command:

$ php artisan brainsocket:start --port=8081

At this point you should see a message in the terminal saying the websocket has been started on the selected port. The terminal will be locked down / unusable at this point; to stop the WebSocket server, hit Ctrl C in the terminal.

Note: Any changes to your Laravel app / code while the ws server is running are not taken into account. You need to restart the ws server to see any of your changes.
Let's stop the ws server now by hitting Ctrl C in the terminal. Next, in your app/ folder create a file called events.php. Let's add the following code to events.php:

<?php

Event::listen('generic.event', function($client_data) {
    return BrainSocket::message('generic.event', array('message'=>'A message from a generic event fired in Laravel!'));
});

Event::listen('app.success', function($client_data) {
    return BrainSocket::success(array('There was a Laravel App Success Event!'));
});

Event::listen('app.error', function($client_data) {
    return BrainSocket::error(array('There was a Laravel App Error!'));
});

Now in app/start/global.php, add the require of events.php after the existing require of filters.php at the end of the file:

require app_path().'/filters.php';
require app_path().'/events.php';

Great! Now we have a few events to test out on the client side. Run the artisan command php artisan brainsocket:start to start the ws server again. To make things easier we have created a simple js helper that allows us to interact with our new ws server a bit easier. It's not required, but it handles some minor formatting tasks in the background so you don't have to, and pairs nicely with our BrainSocket Facade. Head over to github.com/BrainBoxLabs/brain-socket-js to grab it.

Load the script into your app:

<script type="text/javascript" src="js/brain-socket.min.js"></script>

Create the BrainSocket object:

window.app = {};
app.BrainSocket = new BrainSocket(
    new WebSocket('ws://localhost:8080'),
    new BrainSocketPubSub()
);

You can check out the simple chat app in the example/ directory.
View <div class="messenger bg-white"> <div class="chat-header text-white bg-gray-dark"> Real-time Chat <a href="#" id="chat-toggle" class="pull-right chat-toggle"> <span class="glyphicon glyphicon-chevron-down"></span> </a> </div> <div class="messenger-body open"> <ul class="chat-messages" id="chat-log"> </ul> <div class="chat-footer"> <div class="p-lr-10"> <input type="text" id="chat-message" class="input-light input-large brad chat-search" placeholder="Your message..."> </div> </div> </div> </div> <script> $(function(){ // var fake_user_id = Math.floor((Math.random()*1000)+1); var fake_user_id = {{ Auth::user()->id }}; //make sure to update the port number if your ws server is running on a different one. window.app = {}; app.BrainSocket = new BrainSocket( new WebSocket('ws://192.168.1.104:8080'), new BrainSocketPubSub() ); app.BrainSocket.Event.listen('generic.event',function(msg){ console.log(msg); if(msg.client.data.user_id == fake_user_id){ $('#chat-log').append('<li><img src="{{ Auth::user()->portrait_small }}" class="img-circle" width="26"><div class="message">'+msg.client.data.message+'</div></li>'); }else{ var str_test='<li class="right"><img src="'+msg.client.data.user_portrait+'" class="img-circle" width="26"><div class="message">'+msg.client.data.message+'</div></li>'; $('#chat-log').append(str_test); } }); app.BrainSocket.Event.listen('app.success',function(data){ console.log('An app success message was sent from the ws server!'); console.log(data); }); app.BrainSocket.Event.listen('app.error',function(data){ console.log('An app error message was sent from the ws server!'); console.log(data); }); $('#chat-message').keypress(function(event) { if(event.keyCode == 13){ app.BrainSocket.message('generic.event', { 'message':$(this).val(), 'user_id':fake_user_id, 'user_portrait':'{{ Auth::user()->portrait_small}}' } ); $(this).val(''); } return event.keyCode != 13; } ); }); </script> ### Install Redis on Ubuntu Wiki Redis is an open-source, networked, 
in-memory, key-value data store with optional durability. It is written in ANSI C. The development of Redis has been sponsored by Pivotal Software since May 2013; before that, it was sponsored by VMware. According to the monthly ranking by DB-Engines.com, Redis is the most popular key-value store. The name Redis means REmote DIctionary Server.

Official Website redis.io

Setup Before you install Redis, there are a couple of prerequisites that need to be downloaded to make the installation as easy as possible. Start off by updating all of the apt-get packages:

$ sudo apt-get update

Once the process finishes, install a compiler with build-essential, which will help us install Redis from source:

$ sudo apt-get install build-essential

Finally, we need to download tcl:

$ sudo apt-get install tcl8.6

Installing Redis With all of the prerequisites and dependencies downloaded to the server, we can go ahead and begin to install Redis from source. Download the tarball from redis.io. The latest stable version is 2.8.13.

$ wget http://download.redis.io/releases/redis-2.8.13.tar.gz

Untar it and switch into that directory:

$ tar xzf redis-2.8.13.tar.gz
$ cd redis-2.8.13

Proceed with the make command:

$ make

Run the recommended make test:

$ make test

Finish up by running make install, which installs the program system-wide:

$ sudo make install

Once the program has been installed, Redis comes with a built-in script that sets up Redis to run as a background daemon. To access the script, move into the utils directory:

$ cd utils

From there, run the Ubuntu/Debian install script:

$ sudo ./install_server.sh

As the script runs, you can choose the default options by pressing Enter. Once the script completes, the redis-server will be running in the background. You can start and stop redis with these commands (the number depends on the port you set during the installation.
6379 is the default port setting):

$ sudo service redis_6379 start
$ sudo service redis_6379 stop

You can then access the redis database by typing the following command:

$ redis-cli

You now have Redis installed and running. The prompt will look like this:

127.0.0.1:6379>

To set Redis to automatically start at boot, run:

$ sudo update-rc.d redis_6379 defaults

### The Internet's Own Boy: The Story of Aaron Swartz

Yesterday I watched this movie.

From Wikipedia The Internet's Own Boy: The Story of Aaron Swartz is a 2014 American documentary film written, directed and produced by Brian Knappenberger. The film premiered in the US Documentary Competition program category at the 2014 Sundance Film Festival on January 20, 2014. After its premiere at the Sundance Film Festival, Participant Media and FilmBuff acquired distribution rights to the film. The film was released to theatres and VOD on June 27, 2014 in the United States. It will be followed by a broadcast television premiere on Participant's network Pivot in late 2014. The film also played at the 2014 SXSW on March 15, 2014. It served as the opening film at the 2014 Hot Docs Canadian International Documentary Festival on April 24, 2014. The film's UK premiere took place at Sheffield Doc/Fest in June 2014. In August 2014 the film was screened at the Barbican Centre in London as part of Wikimania 2014.

Aaron Swartz Aaron Hillel Swartz (November 8, 1986 – January 11, 2013) was an American computer programmer, writer, political organizer and Internet hacktivist. Swartz was involved in the development of the web feed format RSS, the organization Creative Commons, the website framework web.py and the social news site Reddit, in which he became a partner after its merger with his company, Infogami. Swartz's work also focused on sociology, civic awareness and activism. He helped launch the Progressive Change Campaign Committee in 2009 to learn more about effective online activism.
In 2010 he became a research fellow at Harvard University's Safra Research Lab on Institutional Corruption, directed by Lawrence Lessig. He founded the online group Demand Progress, known for its campaign against the Stop Online Piracy Act. On January 6, 2011, Swartz was arrested by MIT police on state breaking-and-entering charges, after systematically downloading academic journal articles from JSTOR. Federal prosecutors later charged him with two counts of wire fraud and 11 violations of the Computer Fraud and Abuse Act, carrying a cumulative maximum penalty of $1 million in fines, 35 years in prison, asset forfeiture, restitution and supervised release. Swartz declined a plea bargain under which he would serve six months in federal prison. Two days after the prosecution rejected a counter-offer by Swartz, he was found dead in his Brooklyn, New York apartment, where he had hanged himself. In June 2013, Swartz was posthumously inducted into the Internet Hall of Fame.

### Production-Ready Beanstalkd with Laravel Queues

Introduction Carefully declaring the duties of each and every element of an application deployment stack brings a lot of benefits with it, including simpler diagnosis of problems when they occur, the capacity to scale rapidly, and a clearer scope of management for the components involved. In today's world of web services engineering, a key component for achieving that scenario is making use of messaging and work (or task) queues. These usually resilient and flexible applications are easy to implement and set up. They are perfect for splitting the business logic between different parts of your application bundle when it comes to production. In this article, continuing our series on application level communication solutions, we will be looking at Beanstalkd to create this separation of pieces.

Beanstalkd Beanstalkd was first developed to solve the needs of a popular web application (Causes on Facebook).
Currently, it is an absolutely reliable, easy to install messaging service which is perfect to get started with and use. As mentioned earlier, Beanstalkd's main use case is to manage the workflow between different parts and workers of your application deployment stack through work queues and messages, similar to other popular solutions such as RabbitMQ. However, the way Beanstalkd is designed to work sets it apart from the rest. Since its inception, unlike other solutions, Beanstalkd was intended to be a work queue and not an umbrella tool to cover many needs. To achieve this purpose, it was built as a lightweight and fast application in the C programming language. Its lean architecture also allows it to be installed and used very simply, making it perfect for a majority of use cases.

Here is a picture of the life of a job, with more possibilities:

   put with delay               release with delay
  ----------------> [DELAYED] <------------.
                        |                   |
                        | (time passes)     |
                        |                   |
   put                  v     reserve       |       delete
  -----------------> [READY] ---------> [RESERVED] --------> *poof*
                       ^  ^                |  |
                       |   \  release      |  |
                       |    `-------------'   |
                       |                      |
                       | kick                 |
                       |                      |
                       |       bury           |
                    [BURIED] <---------------'
                       |
                       |  delete
                        `--------> *poof*

Features Being able to monitor jobs with an ID returned upon creation is only one of the features of Beanstalkd that sets it apart from the rest. Some other interesting features offered are:

Persistence - Beanstalkd operates in-memory but offers persistence support as well.
Prioritisation - unlike most alternatives, Beanstalkd offers prioritisation for different tasks, to handle urgent things when needed.
Distribution - different server instances can be distributed similarly to how Memcached works.
Burying - it is possible to indefinitely postpone a job (i.e. a task) by burying it.
Third party tools - Beanstalkd comes with a variety of third-party tools including CLIs and web-based management consoles.
Expiry - jobs can be set to expire and auto-queue later (TTR - Time To Run).
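The lifecycle diagram above can be read as a small state machine. A sketch in Go that models only those transitions (the names follow the diagram; the code is illustrative and not part of Beanstalkd itself):

```go
package main

import "fmt"

// State is one of the job states from the beanstalkd lifecycle diagram.
type State string

const (
	Delayed  State = "DELAYED"
	Ready    State = "READY"
	Reserved State = "RESERVED"
	Buried   State = "BURIED"
	Deleted  State = "DELETED" // *poof*
)

// next returns the state a job moves to for a given command,
// following the arrows in the diagram; ok is false for illegal moves.
func next(s State, cmd string) (State, bool) {
	switch {
	case s == Delayed && cmd == "time passes":
		return Ready, true
	case s == Ready && cmd == "reserve":
		return Reserved, true
	case s == Reserved && cmd == "release":
		return Ready, true
	case s == Reserved && cmd == "bury":
		return Buried, true
	case s == Buried && cmd == "kick":
		return Ready, true
	case (s == Reserved || s == Buried) && cmd == "delete":
		return Deleted, true
	}
	return s, false // not a legal transition in the diagram
}

func main() {
	s := Ready
	for _, cmd := range []string{"reserve", "bury", "kick", "reserve", "delete"} {
		s, _ = next(s, cmd)
		fmt.Println(cmd, "->", s)
	}
}
```

Walking a job through reserve, bury, kick and delete like this is exactly the path a worker takes when it fails a job, an operator kicks it back, and a second attempt succeeds.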
Beanstalkd Use-case Examples Some exemplary use-cases for Beanstalkd are:

Allowing web servers to respond to requests quickly instead of being forced to perform resource-heavy procedures on the spot
Performing certain jobs at certain intervals (i.e. crawling the web)
Distributing a job to multiple workers for processing
Letting offline clients (e.g. a disconnected user) fetch data at a later time through a worker instead of having it lost permanently
Introducing fully asynchronous functionality to the backend systems
Ordering and prioritising tasks
Balancing application load between different workers
Greatly increasing the reliability and uptime of your application
Processing CPU-intensive jobs (videos, images etc.) later
Sending e-mails to your lists and more.

Beanstalkd Elements Just like most applications, Beanstalkd comes with its own jargon to explain its parts.

Tubes / Queues Beanstalkd tubes translate to queues in other messaging applications. They are the channels through which jobs (or messages) are transferred to consumers (i.e. workers).

Jobs / Messages Since Beanstalkd is a "work queue", what is transferred through tubes is referred to as jobs - similar to messages being sent.

Producers / Senders Producers, similar to the Advanced Message Queuing Protocol's definition, are applications which create and send a job (or a message). The jobs they create are to be used by the consumers.

Consumers / Receivers Receivers are the other applications of the stack, which get a job from the tube, created by a producer, for processing.

Queue Queues are a great way to take some tasks out of the user-flow and put them in the background. Allowing a user to skip waiting for these tasks makes our applications appear faster, and gives us another opportunity to segment our application and business logic out further.
For example, sending emails, deleting accounts and processing images are all potentially long-running or memory-intensive tasks; they make great candidates for work which we can off-load to a queue. Laravel can accomplish this with its Queue package. Specifically, I use the Beanstalkd work queue with Laravel. Here's how I set that up to be just about production-ready.

Note: I use Ubuntu for development and often in production. The following is accomplished in Ubuntu 14.04 Server LTS. Some instructions may differ for you depending on your OS.

Here's what we'll cover:

Laravel and Queues
Installing Beanstalkd
Churning through the queue with Supervisor

Laravel and Queues Laravel makes using queues very easy. Our application, the producer, can simply run something like Queue::push('SendEmail', array('message' => $message)); to add a job to the queue. On the other end of the queue is the code listening for new jobs and a script to process the job (collectively, the workers). This means that in addition to adding jobs to the queue, we need to set up a worker to pull from the stack of available jobs. Here's how that looks in Laravel. In this example, we'll create an image-processing queue.

Install dependencies As noted in the docs, Laravel requires the Pheanstalk package for using Beanstalkd. We can install this using Composer:

$ composer require pda/pheanstalk:dev-master

Create a script to process it Once our PHP dependency is installed, we can begin to write some code. In this example, we'll create a PhotoService class to handle the processing. If no method is specified, Laravel assumes the class will have a fire() method. This is half of a worker - the code which does some processing.

<?php namespace Myapp\Queue;

class PhotoService {

    public function fire($job, $data)
    {
        // Minify, crop, shrink, apply filters or otherwise manipulate the image
    }

}

Push a job to a Queue When a user uploads an image, we'll add a job to the queue so our worker can process it.
In Laravel, we'll create a job by telling the Queue library what code will handle the job (in this case the fire() method inside of Myapp\Queue\PhotoService as defined above) and give it some data to work with. In our example, we simply pass it a path to an image file.

Queue::push('Myapp\Queue\PhotoService', array('image_path' => '/path/to/image/file.ext'));

Process the jobs At this point, we have code to process an image (most of a worker), and we've added a job to the queue. The last step is to have code pull a job from the queue. This is the other half of a worker. The worker needs to both pull a job from the queue and do the processing. In Laravel, that's split into 2 functionalities - Laravel's queue listener, and the code we write ourselves - in this case, the PhotoService. Laravel has some CLI tools to help with queues:

// Fire the latest job in the queue
$ php artisan queue:work

// Listen for new jobs in the queue
// and fire them off one at a time
// as they are created
$ php artisan queue:listen

When not working with the "sync" driver, these tools are what you need to use in order to process the jobs in your queue. We run the queue:listen command to have Laravel listen to the queue and pull jobs as they become available.

By default, Laravel will run queue jobs synchronously - that is, it runs the job at the time of creation. This means the image will be processed in the same request that the user created when uploading an image. That's useful for testing, but not for production. We'll make this asynchronous by introducing Beanstalkd. Let's install Beanstalkd to see how that works.

Beanstalkd Let's install Beanstalkd:

# Debian / Ubuntu:
$ sudo apt-get update
$ sudo apt-get install beanstalkd
# or
$ sudo aptitude install -y beanstalkd

Note: You may be able to get a newer version of Beanstalkd by adding this PPA. Ubuntu 14.04 installs an older version of Beanstalkd.

Using Beanstalkd Upon installing, you can start working with the Beanstalkd server.
Here are the options for running the daemon: -b DIR wal directory -f MS fsync at most once every MS milliseconds (use -f0 for "always fsync") -F never fsync (default) -l ADDR listen on address (default is 0.0.0.0) -p PORT listen on port (default is 11300) -u USER become user and group -z BYTES set the maximum job size in bytes (default is 65535) -s BYTES set the size of each wal file (default is 10485760) (will be rounded up to a multiple of 512 bytes) -c compact the binlog (default) -n do not compact the binlog -v show version information -V increase verbosity -h show this help Example Usage # Usage: beanstalkd -l [ip address] -p [port #] # For local only access: beanstalkd -l 127.0.0.1 -p 11301 & Managing The Service If installed through the package manager (i.e. aptitude), you will be able to manage the Beanstalkd daemon as a service. # To start the service: $ sudo service beanstalkd start # To stop the service: $ sudo service beanstalkd stop # To restart the service: $ sudo service beanstalkd restart # To check the status: $ sudo service beanstalkd status Obtaining Beanstalkd Client Libraries Beanstalkd comes with a long list of supported client libraries to work with many different application deployments. For a full list of supported languages and installation instructions for your favourite, check out the client libraries page on GitHub for Beanstalkd. Next, some quick configuration. The first thing we need to do is tell Beanstalkd to start when the system starts up or reboots. Edit /etc/default/beanstalkd and set START to "yes". $ sudo vim /etc/default/beanstalkd > START yes # uncomment Then we can start Beanstalkd: $ sudo service beanstalkd start # Alternatively: /etc/init.d/beanstalkd start Now we can set up Laravel. In your app/config/queue.php file, set the default queue to 'beanstalkd': 'default' => 'beanstalkd', Then edit any connection information you need to change.
I left my configuration with the defaults as I installed it on the same server as the application. 'connections' => array( 'beanstalkd' => array( 'driver' => 'beanstalkd', 'host' => 'localhost', 'queue' => 'default', ), ), Now when we push a job to the queue in Laravel, we'll be pushing to Beanstalkd! Installing Beanstalkd on a remote server You may (read: should) want to consider installing Beanstalkd on another server, rather than your application server. Since Beanstalkd is an in-memory service, it can eat up your server's resources under load. To do this, you can install Beanstalkd on another server, and simply point your "host" to the proper server address, rather than localhost. This leaves the final detail - what server runs the job? If you follow all other steps here, Supervisord will still be watching Laravel's listener on your application server. You may want to consider running your job script (or even a copy of your application which has a job script) on yet another server whose job is purely to churn through Beanstalkd queue jobs. In fact, in a basic distributed setup, we'd probably have an application server (or 2, plus a load-balancer), a database server, a queue server and a job server! Supervisord Let's say you pushed a job to Beanstalkd: Queue::push('Myapp\Queue\PhotoService', array('image_path' => '/path/to/image/file.ext')); Now what? You might notice that it goes to Beanstalkd, but Myapp\Queue\PhotoService@fire() doesn't seem to be getting called. You've checked your error logs, you've checked whether the image was edited, and found that the job is just "sitting there" in your Beanstalkd queue. Beanstalkd doesn't actually PUSH jobs to a script - instead, we need a worker to check if there are jobs available and ask for them. This is what $ php artisan queue:listen does - it listens for jobs and runs them as they become available.
If you run that command, you'll see your job get pulled from the queue and processed. If all goes well, your image will be properly manipulated. The question then becomes: how do we keep php artisan queue:listen running at all times? We need to avoid having to "supervise" that process manually. This is where Supervisord comes in. Supervisord will watch our queue:listen command and restart it if it fails. Let's see how to set that up. First, we'll install it: # Debian / Ubuntu: $ sudo apt-get install supervisor Next, we'll configure it. We need to define a process to listen to. $ sudo vim /etc/supervisor/conf.d/myqueue.conf Add this to your new conf file, changing file paths and your environment as necessary: [program:myqueue] command=php artisan queue:listen --env=your_environment directory=/path/to/laravel stdout_logfile=/path/to/laravel/app/storage/logs/myqueue_supervisord.log redirect_stderr=true We now have a process called "myqueue" which we can tell Supervisord to start and monitor. Let's do that: $ sudo supervisorctl > reread # Tell supervisord to check for new items in /etc/supervisor/conf.d/ > add myqueue # Add this process to Supervisord > start myqueue # May say "already started" Now the myqueue process is on and being monitored. If our queue listener fails, Supervisord will restart the php artisan queue:listen --env=your_environment process. You can check that it is indeed running that process with this command: $ ps aux | grep php # You should see some output like this: php artisan queue:listen --env=your_environment sh -c php artisan queue:work --queue="default" --delay=0 --memory=128 --sleep --env=your_environment php artisan queue:work --queue=default --delay=0 --memory=128 --sleep --env=your_environment Wrapping up Now we have a full end-to-end queue working and in place!
We created a script to process a queued job. We installed Beanstalkd to act as the work queue. We used Laravel to push jobs to our queue. We used Laravel's queue:listen to act as a worker and pull jobs from the queue. We wrote some code to process a job from the queue. We used Supervisord to ensure queue:listen is always listening for new jobs. Notes You might want to consider setting up log rotation on the Laravel and Supervisord logs. You can read here for more information on setting up Supervisord on Ubuntu. Read the Laravel docs on queues to learn how and when to release or delete jobs. TL;DR For reference, just copy and paste the whole process from here: $ sudo apt-get update $ sudo apt-get install -y beanstalkd supervisor $ sudo vim /etc/default/beanstalkd > START yes # uncomment this line $ sudo service beanstalkd start $ sudo vim /etc/supervisor/conf.d/myqueue.conf Enter this, changing as needed: [program:myqueue] command=php artisan queue:listen --env=your_environment directory=/path/to/laravel stdout_logfile=/path/to/laravel/app/storage/logs/myqueue_supervisord.log redirect_stderr=true Start Supervisord: $ sudo supervisorctl > reread # Get available jobs > add myqueue > start myqueue Read more on Supervisord here for info on supervisorctl. This article references How To Install and Use Beanstalkd Work Queue on a VPS and Production-Ready Beanstalkd with Laravel 4 Queues. Related Resources Laravel Introduction, Tutorials, and Resources. ### Set Up Apache with a Free Signed SSL Certificate on a VPS Wiki Hypertext Transfer Protocol Secure (HTTPS) is a communications protocol for secure communication over a computer network, with especially wide deployment on the Internet. Technically, it is not a protocol in and of itself; rather, it is the result of simply layering the Hypertext Transfer Protocol (HTTP) on top of the SSL/TLS protocol, thus adding the security capabilities of SSL/TLS to standard HTTP communications.
The main motivation for HTTPS is to prevent wiretapping and man-in-the-middle attacks. Prerequisites Before we get started, here are the web tools you need for this tutorial: Google Chrome browser Apache installed on your VPS (cloud server) A domain name you own Access to an email address at that domain, either: postmaster@duable.co hostmaster@duable.co webmaster@duable.co StartSSL.com offers completely free verified SSL certificates (your users won't have to see those scary red screens saying "this site isn't trusted" anymore) that you can use on your website. This is a great deal as most companies charge $50-$60 for similar services. The free version is a bit tricky to set up, but it's well worth it. To get started, browse to StartSSL.com and, using the toolbar on the left, navigate to StartSSL Products and then to StartSSL™ Free. Choose the link for Control Panel from the top of the page. Make sure you are using Google Chrome. Choose the Express Signup option. Enter your personal information, and click continue. You'll get an email with a verification code inside it shortly. Copy and paste that code into the form on StartSSL's page. They will review your request for a certificate and then send you an email with the new info. This process might take as long as 6 hours though, so be patient. Once the email comes, use the link provided and the new authentication code (at the bottom of the email) to continue to the next step. They will ask you to Generate a private key and you will be provided with the choice of "High" or "Medium" grade. Go ahead and choose "High". Once your key is ready, click Install. Chrome will show a popup that says the certificate has been successfully installed to Chrome. This means your browser is now authenticated with your new certificate and you can log into the StartSSL authentication areas using your new certificate. Now, we need to get a properly formatted certificate set up for use on your VPS.
Click on the Control panel link again, and choose the Authenticate option. Chrome will show a popup asking if you want to authenticate and will show the certificate you just installed. Go ahead and authenticate with that certificate to enter the control panel. You will need to validate your domain name to prove that you own the domain you are setting up a certificate for. Click over to the Validations Wizard in the Control panel and set Type to Domain Name Validation. You'll be prompted to choose from an email at your domain, something like postmaster@yourdomain.com. Check the email inbox for the email address you selected. You will get yet another verification email at that address, so like before, copy and paste the verification code into the StartSSL website. Next, go to the Certificates Wizard tab and choose to create a Web Server SSL/TLS Certificate. Hit continue and then enter in a secure password, leaving the other settings as is. You will be shown a textbox that contains your private key. Copy and paste the contents into a text editor and save the data into a file called ssl.key. When you click continue, you will be asked which domain you want to create the certificate for. Choose your domain and proceed to the next step. You will be asked what subdomain you want to create a certificate for. In most cases, you want to choose www here, but if you'd like to use a different subdomain with SSL, then enter that here instead. StartSSL will provide you with your new certificate in a text box, much as it did for the private key. Again, copy and paste into a text editor, this time saving it as ssl.crt.
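Once saved, you can double-check what ended up in ssl.crt with openssl x509. Since your real certificate is private, this sketch generates a throwaway self-signed certificate to inspect instead; run the same inspection command against your own ssl.crt:

```shell
# Create a throwaway key and self-signed certificate to stand in for ssl.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
    -days 1 -subj "/CN=www.example.com"
# Print the subject and validity period - the same works on your ssl.crt
openssl x509 -noout -subject -dates -in demo.crt
```

If the subject doesn't show the (sub)domain you requested, you saved the wrong block of text.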
You will also need the StartCom Root CA and StartSSL's Class 1 Intermediate Server CA in order to authenticate your website, so for the final step, go over to the Toolbox pane and choose StartCom CA Certificates. At this screen, right click and Save As two files: StartCom Root CA (PEM Encoded) (save to ca.pem) Class 1 Intermediate Server CA (save to sub.class1.server.ca.pem) For security reasons, StartSSL encrypts your private key (the ssl.key file), but your web server needs the unencrypted version of it to handle your site's encryption. To decrypt it, copy it onto your server, and use the following command to decrypt it into the file private.key: openssl rsa -in ssl.key -out private.key OpenSSL will ask you for your password, so enter the password you typed in on StartSSL's website. At this point you should have five files. If you're missing any, double-check the previous steps and re-download them: ca.pem - StartSSL's Root certificate private.key - The unencrypted version of your private key (be very careful no one else has access to this file!) sub.class1.server.ca.pem - The intermediate certificate for StartSSL ssl.key - The encrypted version of your private key (does not need to be copied to server) ssl.crt - Your new certificate You can discard the ssl.key file. If you haven't already copied the others onto your server, upload them there now: $ scp {ca.pem,private.key,sub.class1.server.ca.pem,ssl.crt} YOURSERVER:~ Activating the certificate in Apache Having a certificate isn't any good if you can't actually use it. This section explains how to configure Apache to use your new SSL certificate. These instructions are for Apache running on recent versions of Ubuntu VPS. For other Linux-based distros or web servers, you'll have to adjust accordingly. First, enable Apache's SSL module, restart Apache, and create the folder where we'll store the keys.
$ sudo a2enmod ssl $ sudo service apache2 restart $ sudo mkdir -p /etc/apache2/ssl Copy the files you set up in the previous section into the /etc/apache2/ssl folder on your VPS. $ sudo cp ~/{ca.pem,private.key,sub.class1.server.ca.pem,ssl.crt} /etc/apache2/ssl Execute $ ls /etc/apache2/ssl And it should return ca.pem ssl.crt private.key sub.class1.server.ca.pem Now, open your apache2 configuration file. Unless you've already modified the default configuration, input $ sudo vim /etc/apache2/sites-enabled/000-default It should look something like this <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg.
LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Copy the entire script above (from <VirtualHost *:80> to </VirtualHost>), paste it below the existing one, and change the opening tag of the copy from <VirtualHost *:80> to <VirtualHost *:443> Then add the following lines right after the <VirtualHost *:443> line: SSLEngine on SSLProtocol all -SSLv2 SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM SSLCertificateFile /etc/apache2/ssl/ssl.crt SSLCertificateKeyFile /etc/apache2/ssl/private.key SSLCertificateChainFile /etc/apache2/ssl/sub.class1.server.ca.pem The end result should look like this <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg.
LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> <VirtualHost *:443> SSLEngine on SSLProtocol all -SSLv2 SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM SSLCertificateFile /etc/apache2/ssl/ssl.crt SSLCertificateKeyFile /etc/apache2/ssl/private.key SSLCertificateChainFile /etc/apache2/ssl/sub.class1.server.ca.pem ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Save your files and restart Apache with: $ sudo service apache2 restart You can check Apache's log files to see if there are any show-stopping errors with this command: $ cat /var/log/apache2/error.log If everything looks good, try accessing your site in your web browser using an HTTPS URL (e.g. https://www.YOURSITE.com). When your site loads, you should see a little green padlock icon next to the URL. Click on it, and the Connection tab should show that the site's identity has been verified by StartCom. Congratulations! You are all set!
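One more optional check: openssl can confirm that a certificate chains correctly to its CA without involving a browser. This sketch builds a throwaway CA and a leaf certificate signed by it; with the real files from this guide you would run the command shown in the comment instead:

```shell
# Throwaway CA (stands in for StartSSL's intermediate)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -days 1 -subj "/CN=Demo CA"
# Throwaway leaf key + CSR, then sign the CSR with the CA
openssl req -newkey rsa:2048 -nodes -keyout site.key -out site.csr \
    -subj "/CN=www.example.com"
openssl x509 -req -in site.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 1 -out site.crt
# Verify the leaf against the CA. With the files from this guide:
#   openssl verify -untrusted sub.class1.server.ca.pem -CAfile ca.pem ssl.crt
openssl verify -CAfile ca.crt site.crt
```

A successful verification prints `site.crt: OK`; anything else means the chain files don't match the certificate.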
Using StartSSL Certs with the Nginx Web Server Note that we can check our config here: SSL Checker. Decrypt the private key using the password you entered when you created your key: $ sudo openssl rsa -in ssl.key -out /etc/nginx/conf/ssl.key Protect your key from prying eyes: $ sudo chmod 600 /etc/nginx/conf/ssl.key Fetch StartSSL's class 1 intermediate server CA certificate: $ wget http://www.startssl.com/certs/sub.class1.server.ca.pem Create a unified certificate from your certificate and the CA certificate: $ cat ssl.crt sub.class1.server.ca.pem | sudo tee /etc/nginx/conf/ssl-unified.crt Configure your nginx server to use the new key and certificate (in the global settings or a server section): ssl on; ssl_certificate /etc/nginx/conf/ssl-unified.crt; ssl_certificate_key /etc/nginx/conf/ssl.key; Now reload the Nginx configuration. Reference Links: How To Set Up Apache with a Free Signed SSL Certificate on a VPS ### ShadowSocks Setup Guide Wiki ShadowSocks is a secure socks5 proxy, designed to protect your Internet traffic. GoAgentX is a GUI application on OS X for using and controlling proxy services. Official Website shadowsocks.org Linux / Server Side Install You'll run a client on your local machine and install the server on a remote machine. Debian / Ubuntu $ sudo apt-get install python-pip python-m2crypto $ sudo pip install shadowsocks CentOS $ sudo yum install m2crypto python-setuptools $ sudo easy_install pip $ sudo pip install shadowsocks Configuration On your server create a config file /etc/shadowsocks.json.
Example: { "server" : "my_server_ip", "server_port" : 8388, "local_address": "127.0.0.1", "local_port" : 1080, "password" : "mypassword", "timeout" : 300, "method" : "aes-256-cfb", "fast_open" : false, "workers" : 1 } Explanation of the fields Name Explanation server the address your server listens on server_port server port local_address the address your local client listens on local_port local port password password used for encryption timeout in seconds method encryption method, "rc4-md5" is recommended fast_open use TCP_FASTOPEN, true / false workers number of workers, available on Unix/Linux To run in the background: $ sudo ssserver -p 8388 -k password -m rc4-md5 --user nobody -d start To stop: $ sudo ssserver -d stop To check the log: $ sudo less /var/log/shadowsocks.log Optimize TCP Connections To increase the TCP connection limit, add the following configuration to the /etc/security/limits.conf file: * soft nofile 51200 * hard nofile 51200 Then add the following configuration to the /etc/sysctl.conf file: # max open files fs.file-max = 51200 # max read buffer net.core.rmem_max = 67108864 # max write buffer net.core.wmem_max = 67108864 # max processor input queue net.core.netdev_max_backlog = 250000 # max backlog net.core.somaxconn = 4096 # resist SYN flood attacks net.ipv4.tcp_syncookies = 1 # reuse timewait sockets when safe net.ipv4.tcp_tw_reuse = 0 # turn off fast time wait sockets recycling net.ipv4.tcp_tw_recycle = 0 # short FIN timeout net.ipv4.tcp_fin_timeout = 30 # short keepalive time net.ipv4.tcp_keepalive_time = 1200 # outbound port range net.ipv4.ip_local_port_range = 10000 65000 # max SYN backlog net.ipv4.tcp_max_syn_backlog = 8192 # max timewait sockets held by system simultaneously net.ipv4.tcp_max_tw_buckets = 5000 # net.ipv4.tcp_fastopen = 3 # net.ipv4.tcp_mem = 25600 51200 102400 # TCP receive buffer net.ipv4.tcp_rmem = 4096 87380 67108864 # TCP write buffer net.ipv4.tcp_wmem = 4096 65536 67108864 # turn on path MTU discovery net.ipv4.tcp_mtu_probing = 1 # net.ipv4.tcp_congestion_control
= hybla Reload the configuration, and set the ShadowSocks launch param fast_open to true: $ sudo sysctl -p OS X GUI Clients Download and install GoAgentX V2.7.3 build 774 from Google Drive. Set up services; fill in Local Port, Server Address, Server Port, Timeout (Seconds), Service Password, and select Encrypt Method. Global Proxy System Preferences → Select your network → Advanced → Proxies → SOCKS Proxy → Fill in SOCKS Proxy Server and Port → Save → Apply. Other clients reference github.com/shadowsocks/shadowsocks/wiki/Ports-and-Clients Google Chrome Extension Install the Proxy SwitchyOmega extension in Google Chrome; in the Proxy Profiles tab, New Profile → Profile Details → Profile Name: Shadowsocks → SOCKS Host → Save. Reference ShadowSocks - A fast tunnel proxy that helps you get through firewalls ### Laravel E-Commerce with Alipay Dualfun Develope Part V - Alipay Payment API Content Laravel E-Commerce with Alipay Dualfun Develope Part I - Creating Database Objects Laravel E-Commerce with Alipay Dualfun Develope Part II - Creating Models Laravel E-Commerce with Alipay Dualfun Develope Part III - Views and Routes Laravel E-Commerce with Alipay Dualfun Develope Part IV - Controllers Laravel E-Commerce with Alipay Dualfun Develope Part V - Alipay Payment API Sign up for Alipay, and on the Alipay business services page, choose a service for your website. After your identity and other information are validated, you will get a partner ID and partner KEY. Download and reference the official Alipay SDK, and fill your partner ID and partner KEY in app/api/alipay/alipay.config.php Conclusion If you enjoyed this tutorial, you can share this tutorial on your favorite media.
### Laravel E-Commerce with Alipay Dualfun Develope Part IV - Controllers Content Laravel E-Commerce with Alipay Dualfun Develope Part I - Creating Database Objects Laravel E-Commerce with Alipay Dualfun Develope Part II - Creating Models Laravel E-Commerce with Alipay Dualfun Develope Part III - Views and Routes Laravel E-Commerce with Alipay Dualfun Develope Part IV - Controllers Laravel E-Commerce with Alipay Dualfun Develope Part V - Alipay Payment API This file should be saved as app/controllers/ProductController.php <?php class ProductController extends BaseResource { /** * Resource view directory * @var string */ protected $resourceView = 'account.product'; /** * Model name of the resource, after initialization to a model instance * @var string|Illuminate\Database\Eloquent\Model */ protected $model = 'Product'; /** * Resource identification * @var string */ protected $resource = 'myproduct'; /** * Resource database tables * @var string */ protected $resourceTable = 'products'; /** * Resource name * @var string */ protected $resourceName = 'Goods'; /** * Custom validation message * @var array */ protected $validatorMessages = array( 'title.required' => 'Please fill goods name', 'price.required' => 'Please fill goods price', 'price.numeric' => 'Price only be a number', 'quantity.required' => 'Please fill remaining quantity of goods', 'quantity.integer' => 'Remaining quantity of goods must be an integer', 'province.required' => 'Please select province and city', 'content.required' => 'Please fill content', 'category.exists' => 'Please choose goods category', ); /** * Resource list view * GET /resource * @return Response */ public function index() { // Get sort conditions $orderColumn = Input::get('sort_up', Input::get('sort_down', 'created_at')); $direction = Input::get('sort_up') ? 
'asc' : 'desc' ; // Get search conditions switch (Input::get('target')) { case 'title': $title = Input::get('like'); break; } // Construct query statement $query = $this->model->orderBy($orderColumn, $direction)->where('user_id', Auth::user()->id)->paginate(15); isset($title) AND $query->where('title', 'like', "%{$title}%"); $datas = $query; return View::make($this->resourceView.'.index')->with(compact('datas')); } /** * Resource create view * GET /resource/create * @return Response */ public function create() { if( Auth::user()->alipay == NULL ){ return Redirect::route('account.settings') ->with('info', 'Notice: You need to set an Alipay account before selling goods'); } else { $categoryLists = ProductCategories::lists('name', 'id'); return View::make($this->resourceView.'.create')->with(compact('categoryLists')); } } /** * Resource create action * POST /resource * @return Response */ public function store() { // Get all form data. $data = Input::all(); // Create validation rules $unique = $this->unique(); $rules = array( 'title' => 'required|'.$unique, 'price' => 'required|numeric', 'quantity' => 'required|integer', 'content' => 'required', 'category' => 'exists:product_categories,id', 'province' => 'required', ); $slug = Input::input('title'); $hashslug = date('H.i.s').'-'.md5($slug).'.html'; // Custom validation message $messages = $this->validatorMessages; // Begin verification $validator = Validator::make($data, $rules, $messages); if ($validator->passes()) { // Verification success // Add resource $model = $this->model; $model->user_id = Auth::user()->id; $model->category_id = $data['category']; $model->title = e($data['title']); $model->province = e($data['province']); $model->city = e($data['city']); $model->price = e($data['price']); $model->quantity = e($data['quantity']); $model->slug = $hashslug; $model->content = e($data['content']); $model->meta_title = e($data['title']); $model->meta_description = e($data['title']); $model->meta_keywords =
e($data['title']); if ($model->save()) { // Add success return Redirect::back() ->with('success', '<strong>'.$this->resourceName.'post success</strong>'); } else { // Add fail return Redirect::back() ->withInput() ->with('error', '<strong>'.$this->resourceName.'add fail</strong>'); } } else { // Verification fail return Redirect::back()->withInput()->withErrors($validator); } } /** * Resource edit view * GET /resource/{id}/edit * @param int $id * @return Response */ public function edit($id) { $data = $this->model->find($id); $categoryLists = ProductCategories::lists('name', 'id'); $product = Product::where('slug', $data->slug)->first(); return View::make($this->resourceView.'.edit')->with(compact('data', 'categoryLists', 'product')); } /** * Resource edit action * PUT/PATCH /resource/{id} * @param int $id * @return Response */ public function update($id) { // Get all form data. $data = Input::all(); // Create validation rules $rules = array( 'title' => 'required', 'content' => 'required', 'price' => 'required|numeric', 'quantity' => 'required|integer', 'category' => 'exists:product_categories,id', 'province' => 'required', ); // Custom validation message $messages = $this->validatorMessages; // Begin verification $validator = Validator::make($data, $rules, $messages); if ($validator->passes()) { // Verification success // Update resource $model = $this->model->find($id); $model->user_id = Auth::user()->id; $model->category_id = $data['category']; $model->title = e($data['title']); $model->province = e($data['province']); $model->city = e($data['city']); $model->price = e($data['price']); $model->quantity = e($data['quantity']); $model->content = e($data['content']); $model->meta_title = e($data['title']); $model->meta_description = e($data['title']); $model->meta_keywords = e($data['title']); if ($model->save()) { // Update success return Redirect::back() ->with('success', '<strong>'.$this->resourceName.'update success</strong>'); } else { // Update fail return
Redirect::back() ->withInput() ->with('error', '<strong>'.$this->resourceName.'update fail</strong>'); } } else { // Verification fail return Redirect::back()->withInput()->withErrors($validator); } } /** * Resource destroy action * DELETE /resource/{id} * @param int $id * @return Response */ public function destroy($id) { $data = $this->model->find($id); if (is_null($data)) return Redirect::back()->with('error', 'Can\'t find '.$this->resourceName); elseif ($data) { $model = $this->model->find($id); $thumbnails = $model->thumbnails; File::delete(public_path('uploads/product_thumbnails/'.$thumbnails)); $data->delete(); return Redirect::back()->with('success', $this->resourceName.'delete success'); } else return Redirect::back()->with('warning', $this->resourceName.'delete fail'); } /** * Action: Add resource images * @return Response */ public function postUpload($id) { $input = Input::all(); $rules = array( 'file' => 'image|max:3000', ); $validation = Validator::make($input, $rules); if ($validation->fails()) { return Response::make($validation->errors()->first(), 400); } $file = Input::file('file'); $destinationPath = 'uploads/products/'; $ext = $file->guessClientExtension(); // Get real extension according to mime type $fullname = $file->getClientOriginalName(); // Client file name, including the extension of the client $hashname = date('H.i.s').'-'.md5($fullname).'.'.$ext; // Hash processed file name, including the real extension $picture = Image::make($file->getRealPath()); // crop the best fitting ratio and resize image $picture->fit(1024, 683)->save(public_path($destinationPath.$hashname)); $picture->fit(585, 347)->save(public_path('uploads/product_thumbnails/'.$hashname)); $model = $this->model->find($id); $oldThumbnails = $model->thumbnails; $model->thumbnails = $hashname; $model->save(); File::delete(public_path('uploads/product_thumbnails/'.$oldThumbnails)); $models = new ProductPictures; $models->filename = $hashname; $models->product_id = $id; $models->user_id
= Auth::user()->id; if ($models->save()) { return Response::json('success', 200); } else { return Response::json('error', 400); } } /** * Action: Delete resource images * @return Response */ public function deleteUpload($id) { // Only allows deleting pictures that belong to the current user $filename = ProductPictures::where('id', $id)->where('user_id', Auth::user()->id)->first(); if (is_null($filename)) return Redirect::back()->with('error', 'Can\'t find picture'); $oldImage = $filename->filename; if ($filename->delete()) { File::delete( public_path('uploads/products/'.$oldImage) ); return Redirect::back()->with('success', 'Delete success'); } else return Redirect::back()->with('warning', 'Delete fail'); } /** * View: My comments * @return Response */ public function comments() { $comments = ProductComment::where('user_id', Auth::user()->id)->paginate(15); return View::make($this->resourceView.'.comments')->with(compact('comments')); } /** * Action: Delete my comments * @return Response */ public function deleteComment($id) { // Only allows deleting your own comments $comment = ProductComment::where('id', $id)->where('user_id', Auth::user()->id)->first(); if (is_null($comment)) return Redirect::back()->with('error', 'Can\'t find that comment'); elseif ($comment->delete()) return Redirect::back()->with('success', 'Delete success'); else return Redirect::back()->with('warning', 'Delete fail'); } /** * View: Product * @return Response */ public function getIndex() { $product = Product::orderBy('created_at', 'desc')->where('quantity', '>', '0')->paginate(12); $categories = ProductCategories::orderBy('sort_order')->paginate(6); return View::make('product.index')->with(compact('product', 'categories')); } /** * Resource list * @return Response */ public function category($category_id) { $product = Product::where('category_id', $category_id)->orderBy('created_at', 'desc')->paginate(6); $categories =
ProductCategories::orderBy('sort_order')->get(); $current_category = ProductCategories::where('id', $category_id)->first(); return View::make('product.category')->with(compact('product', 'categories', 'category_id', 'current_category')); } /** * Resource show view * @param string $slug Slug * @return response */ public function show($slug) { $product = Product::where('slug', $slug)->first(); is_null($product) AND App::abort(404); $categories = ProductCategories::orderBy('sort_order')->get(); if (Auth::check()) { $inCart = ShoppingCart::where('buyer_id', Auth::user()->id)->where('product_id', $product->id)->first(); } else { $inCart = false; } return View::make('product.show')->with(compact('product', 'categories', 'inCart')); } /** * View: Customer shopping cart * @return Response */ public function cart() { // Get sort conditions $orderColumn = Input::get('sort_up', Input::get('sort_down', 'created_at')); $direction = Input::get('sort_up') ? 'asc' : 'desc' ; // Get search conditions switch (Input::get('target')) { case 'title': $title = Input::get('like'); break; } // Construct query statement $query = ShoppingCart::orderBy($orderColumn, $direction)->where('buyer_id', Auth::user()->id)->paginate(15); isset($title) AND $query->where('title', 'like', "%{$title}%"); $datas = $query; $payment = ShoppingCart::where('buyer_id', Auth::user()->id)->sum('payment'); $resource = 'myproduct'; $resourceName = 'Shopping cart'; return View::make($this->resourceView.'.cart')->with(compact('datas', 'resource', 'resourceName', 'payment')); } /** * Action: Delete goods in customer shopping cart * @return Response */ public function destroyGoods($id) { $data = ShoppingCart::find($id); if (is_null($data)) return Redirect::back()->with('error', 'Can\'t find'.$this->resourceName); elseif ($data) { $data->delete(); return Redirect::back()->with('success', $this->resourceName.'Delete success'); } else return Redirect::back()->with('warning', $this->resourceName.'Delete fail'); } /** * 
Action: Show page post action * @return Response */ public function postAction($slug) { $postComment = e(Input::get('postComment')); if($postComment) { // Get comment $content = e(Input::get('content')); // Check word if (mb_strlen($content)<3) return Redirect::back()->withInput()->withErrors($this->messages->add('content', 'Too short')); // Find article $product = Product::where('slug', $slug)->first(); // Create comment $comment = new ProductComment; $comment->content = $content; $comment->product_id = $product->id; $comment->user_id = Auth::user()->id; if ($comment->save()) { // Create success // Updated comments $product->comments_count = $product->comments->count(); $product->save(); // Return success return Redirect::back()->with('success', 'Post success'); } else { // Create fail return Redirect::back()->withInput()->with('error', 'Post fail'); } } else { $data = Input::all(); $rules = array( 'quantity' => 'required|integer', 'product_id' => 'required', 'price' => 'required', 'seller_id' => 'required', 'inventory' => 'required', ); if (e($data['inventory'])<e($data['quantity'])) { return Redirect::back() ->with('error', '<strong>Please fill correct'.$this->resourceName.'quantity</strong>'); } elseif (Auth::user()->id==e($data['seller_id'])) { return Redirect::back() ->with('error', '<strong>You can\'t by it, because yourself on sale</strong>'); } else { // Custom validation message $messages = $this->validatorMessages; // Begin verification $validator = Validator::make($data, $rules, $messages); if ($validator->passes()) { // Verification success // Add recource $model = new ShoppingCart; $model->buyer_id = Auth::user()->id; $model->quantity = e($data['quantity']); $model->product_id = e($data['product_id']); $model->price = e($data['price']); $model->payment = e($data['quantity']) * e($data['price']); $model->seller_id = e($data['seller_id']); $model->save(); if ($model->save()) { // Add success return Redirect::back() ->with('success', 
'<strong>'.$this->resourceName.'add success</strong>'); } else { // Add fail return Redirect::back() ->withInput() ->with('error', '<strong>'.$this->resourceName.'add fail</strong>'); } } else { // Verification fail return Redirect::back()->withInput()->withErrors($validator); } } } } // ... } This file should be saved as app/controllers/ProductOrderController.php <?php class ProductOrderController extends BaseController { /** * Resource view directory * @var string */ protected $resourceView = 'account.order'; /** * Model name of the resource, after initialization to a model instance * @var string|Illuminate\Database\Eloquent\Model */ protected $model = 'ProductOrder'; /** * Resource identification * @var string */ protected $resource = 'order'; /** * Resource database tables * @var string */ protected $resourceTable = 'product_orders'; /** * Resource name * @var string */ protected $resourceName = 'Order'; protected $validatorMessages = array( 'customer_name.required' => 'Please fill your name', 'customer_address.required' => 'Please fill your address', 'customer_phone.required' => 'Please fill your mobile phone number', 'customer_phone.numeric' => 'Please fill correct mobile phone number', ); /** * View: Customer order index * @return Response */ public function index() { // Get sort conditions $orderColumn = Input::get('sort_up', Input::get('sort_down', 'created_at')); $direction = Input::get('sort_up') ? 
'asc' : 'desc' ; // Get search conditions switch (Input::get('target')) { case 'title': $title = Input::get('like'); break; } // Construct query statement $unpayment_order = ProductOrder::orderBy($orderColumn, $direction)->where('customer_id', Auth::user()->id)->where('is_payment', 0)->paginate(15); $payment_order = ProductOrder::orderBy($orderColumn, $direction)->where('customer_id', Auth::user()->id)->where('is_payment', 1)->where('is_checkout', 0)->paginate(15); $checkout_order = ProductOrder::orderBy($orderColumn, $direction)->where('customer_id', Auth::user()->id)->where('is_checkout', 1)->paginate(15); isset($title) AND $query->where('title', 'like', "%{$title}%"); $resourceName = 'Order'; $resource = 'order'; return View::make($this->resourceView.'.index')->with(compact('unpayment_order', 'payment_order', 'checkout_order', 'resourceName', 'resource')); } /** * View: Customer order details * @return Response */ public function customerOrderDetails($id) { $data = ProductOrder::where('customer_id', Auth::user()->id)->where('id', $id)->first(); $resourceName = 'Order'; $resource = 'order'; return View::make($this->resourceView.'.customerOrderDetails')->with(compact('data', 'resourceName', 'resource')); } /** * View: Seller order details * @return Response */ public function sellerOrderDetails($id) { $data = ProductOrder::where('seller_id', Auth::user()->id)->where('id', $id)->first(); $resourceName = 'Order'; $resource = 'order'; return View::make($this->resourceView.'.sellerOrderDetails')->with(compact('data', 'resourceName', 'resource')); } /** * View: Generate order details * @return Response */ public function order($id) { $data = ShoppingCart::where('buyer_id', Auth::user()->id)->where('id', $id)->first(); $resourceName = 'Order'; $resource = 'order'; $product_id = Input::input('product_id'); $product = Product::where('id', $data->product_id)->first()->title; $seller = User::where('id', $data->seller_id)->first()->nickname; return 
View::make($this->resourceView.'.order')->with(compact('data', 'resourceName', 'resource', 'product_id', 'product', 'seller')); } /** * Action: Payment after add goods in shopping cart * @return Response */ public function payment() { $resourceName = 'Order'; $resource = 'order'; // Get all form data. $data = Input::all(); $rules = array( 'product_id' => 'required|', 'customer_name' => 'required', 'customer_address' => 'required', 'customer_phone' => 'required|numeric', ); // Custom validation message $messages = array( 'customer_name.required' => 'Please fill your name', 'customer_address.required' => 'Please fill your address', 'customer_phone.required' => 'Please fill your mobile phone number', 'customer_phone.numeric' => 'Please fill correct mobile phone number', ); // Begin verification $validator = Validator::make($data, $rules, $messages); // Save user real name if ( Auth::user()->username == NULL ) { $user = Auth::user(); $user->username = Input::get('customer_name'); $user->save(); }; // Save user mobile phone number if ( Auth::user()->phone == NULL ) { $user = Auth::user(); $user->phone = Input::get('customer_phone'); $user->save(); }; // Vrification Success if ($validator->passes()) { $product_id = Input::input('product_id'); $product = Product::where('id', $product_id)->first(); $data = ShoppingCart::where('buyer_id', Auth::user()->id)->where('product_id', $product_id)->first(); $order_id = md5(date('his')).$product_id.Auth::user()->id; $seller_id = $data->seller_id; $seller_alipay = User::where('id', $seller_id)->first()->alipay; $order_name = 'Payment:'.$product->title; $payment = $data->payment; $goods_show = 'http://www.example.com/product/'.$product->slug; $customer_name = Input::input('customer_name'); $customer_address = Input::input('customer_address'); $customer_phone = Input::input('customer_phone'); // Create product order $product_order = new ProductOrder; $product_order->order_id = $order_id; $product_order->seller_id = $seller_id; 
$product_order->product_id = $product_id; $product_order->customer_id = Auth::user()->id; $product_order->customer_address = $customer_address; $product_order->quantity = $data->quantity; $product_order->price = $data->price; $product_order->payment = $payment; $product_order->save(); // Destroy goods in shopping cart $data->delete(); // Alipay API require_once( app_path('api/alipay/alipay.config.php' )); require_once( app_path('api/alipay/lib/alipay_submit.class.php' )); // Request parameters $payment_type = "1"; // Payment type (required, don't modify) $notify_url = route('order.tradeNotify'); // Server asynchronous notification page URL (start with http://, don't use http://localhost/ or add ?id=123) $return_url = route('order.tradeReturn'); // Synchronization notification page URL (start with http://, don't use http://localhost/ or add ?id=123) $seller_email = $seller_alipay; // Saller Alipay ID (required) $out_trade_no = $order_id; // Order ID (required) $subject = $order_name; // Order name (required) $price = $payment; // Order payment (required) $quantity = "1"; // Goods quantity (default is 1) $logistics_fee = "0.00"; // Express payment (required) $logistics_type = "EXPRESS"; // Express type: EXPRESS, POST or EMS $logistics_payment = "SELLER_PAY"; // Express payment type (require:SELLER_PAY customer pay or BUYER_PAY saller pay) $body = $goods_show; // Order describe $show_url = $goods_show; // Goods show page (URL start with http://) $receive_name = $customer_name; // Customer name $receive_address = $customer_address; // Customer address $receive_zip = NULL; // Customer zip (code such as:123456) $receive_phone = NULL; // Custome telephone number (such as:0571-88158090) $receive_mobile = $customer_phone; // Customer mobile phone numer (such as:13312341234) // Constructs an array of arguments to request, no need to change $parameter = array( "service" => "trade_create_by_buyer", "partner" => trim($alipay_config['partner']), "payment_type" => $payment_type, 
"notify_url" => $notify_url, "return_url" => $return_url, "seller_email" => $seller_email, "out_trade_no" => $out_trade_no, "subject" => $subject, "price" => $price, "quantity" => $quantity, "logistics_fee" => $logistics_fee, "logistics_type" => $logistics_type, "logistics_payment" => $logistics_payment, "body" => $body, "show_url" => $show_url, "receive_name" => $receive_name, "receive_address" => $receive_address, "receive_zip" => $receive_zip, "receive_phone" => $receive_phone, "receive_mobile" => $receive_mobile, "_input_charset" => trim(strtolower($alipay_config['input_charset'])) ); // Establish require $alipaySubmit = new AlipaySubmit($alipay_config); $html_text = $alipaySubmit->buildRequestForm($parameter,"get", "Payment"); echo $html_text; } else{ return Redirect::back()->withInput()->withErrors($validator); } } /** * Action: Payment for order in customer order list * @return Response */ public function rePayment() { $resourceName = 'Order'; $resource = 'order'; // Get all form data. 
$data = ProductOrder::where('id', Input::get('order_id'))->first(); if ($data) { $product_id = $data->product_id; $product = Product::where('id', $product_id)->first(); $order_id = $data->order_id; $seller_id = $data->seller_id; $seller_alipay = User::where('id', $seller_id)->first()->alipay; $order_name = 'Payment:'.$product->title; $payment = $data->payment; $goods_show = 'http://www.example.com/product/'.$product->slug; $customer_name = Auth::user()->username; $customer_address = $data->customer_address; $customer_phone = Auth::user()->phone; // Alipay API require_once( app_path('api/alipay/alipay.config.php' )); require_once( app_path('api/alipay/lib/alipay_submit.class.php' )); // Request parameters $payment_type = "1"; // Payment type (required, don't modify) $notify_url = route('order.tradeNotify'); // Server asynchronous notification page URL (start with http://, don't use http://localhost/ or add ?id=123) $return_url = route('order.tradeReturn'); // Synchronization notification page URL (start with http://, don't use http://localhost/ or add ?id=123) $seller_email = $seller_alipay; // Saller Alipay ID (required) $out_trade_no = $order_id; // Order ID (required) $subject = $order_name; // Order name (required) $price = $payment; // Order payment (required) $quantity = "1"; // Goods quantity (default is 1) $logistics_fee = "0.00"; // Express payment (required) $logistics_type = "EXPRESS"; // Express type: EXPRESS, POST or EMS $logistics_payment = "SELLER_PAY"; // Express payment type (require:SELLER_PAY customer pay or BUYER_PAY saller pay) $body = $goods_show; // Order describe $show_url = $goods_show; // Goods show page (URL start with http://) $receive_name = $customer_name; // Customer name $receive_address = $customer_address; // Customer address $receive_zip = NULL; // Customer zip (code such as:123456) $receive_phone = NULL; // Custome telephone number (such as:0571-88158090) $receive_mobile = $customer_phone; // Customer mobile phone numer (such 
as:13312341234) // Constructs an array of arguments to request, no need to change $parameter = array( "service" => "trade_create_by_buyer", "partner" => trim($alipay_config['partner']), "payment_type" => $payment_type, "notify_url" => $notify_url, "return_url" => $return_url, "seller_email" => $seller_email, "out_trade_no" => $out_trade_no, "subject" => $subject, "price" => $price, "quantity" => $quantity, "logistics_fee" => $logistics_fee, "logistics_type" => $logistics_type, "logistics_payment" => $logistics_payment, "body" => $body, "show_url" => $show_url, "receive_name" => $receive_name, "receive_address" => $receive_address, "receive_zip" => $receive_zip, "receive_phone" => $receive_phone, "receive_mobile" => $receive_mobile, "_input_charset" => trim(strtolower($alipay_config['input_charset'])) ); // Establish require $alipaySubmit = new AlipaySubmit($alipay_config); $html_text = $alipaySubmit->buildRequestForm($parameter,"get", "Payment"); echo $html_text; } else{ return Redirect::back()->with('error', 'Can\'t find'.$this->resourceName.'。'); } } /** * Action: Delete customer order * @return Response */ public function destroyOrder($id) { $data = ProductOrder::find($id); if (is_null($data)) return Redirect::back()->with('error', 'Can\'t find'.$this->resourceName.'。'); elseif ($data) { $data->delete(); return Redirect::back()->with('success', $this->resourceName.'Delete success'); } else return Redirect::back()->with('warning', $this->resourceName.'Delete fail'); } /** * Action: Aplipay trade return * @return Response */ public function tradeReturn() { // Alipay Dualfun API require_once( app_path('api/alipay/alipay.config.php' )); require_once( app_path('api/alipay/lib/alipay_notify.class.php' )); $alipayNotify = new AlipayNotify($alipay_config); $verify_result = $alipayNotify->verifyReturn(); if($verify_result) { $out_trade_no = $_GET['out_trade_no']; // Order ID $trade_no = $_GET['trade_no']; // Alipay order ID $trade_status = $_GET['trade_status']; // 
Alipay trade status $product_order = ProductOrder::where('order_id', $out_trade_no)->first(); $product_order->is_payment = true; $product_order->alipay_trade = $trade_no; $product_order->save(); $product = Product::where('id', $product_order->product_id)->first(); $product->quantity = $product->quantity - $product_order->quantity; $product->save(); return Redirect::route('order.customerOrderDetails', $product_order->id)->with('success', 'Payment success'); } else { return Redirect::route('order.index')->with('error', 'Payment fail'); } } /** * Action: Aplipay trade notify * @return Response */ public function tradeNotify() { require_once( app_path('api/alipay/alipay.config.php' )); require_once( app_path('api/alipay/lib/alipay_notify.class.php' )); // Get verification result $alipayNotify = new AlipayNotify($alipay_config); $verify_result = $alipayNotify->verifyNotify(); if($verify_result) { $out_trade_no = $_GET['out_trade_no']; // Order ID $trade_no = $_GET['trade_no']; // Alipay order ID $trade_status = $_GET['trade_status']; // Alipay trade status $product_order = ProductOrder::where('order_id', $out_trade_no)->first(); $product_order->is_payment = true; $product_order->alipay_trade = $trade_no; $product_order->save(); $product = Product::where('id', $product_order->product_id)->first(); $product->quantity = $product->quantity - $product_order->quantity; $product->save(); } else { // Verification fail return Redirect::route('order.index')->with('error', 'Payment fail'); } } /** * View: Seller order list * @return Response */ public function seller() { // Get sort conditions $orderColumn = Input::get('sort_up', Input::get('sort_down', 'created_at')); $direction = Input::get('sort_up') ? 
'asc' : 'desc' ; // Get search conditions switch (Input::get('target')) { case 'title': $title = Input::get('like'); break; } // Construct query statement $trading_order = ProductOrder::orderBy($orderColumn, $direction)->where('seller_id', Auth::user()->id)->where('is_payment', 1)->where('is_checkout', 0)->paginate(15); $checkout_order = ProductOrder::orderBy($orderColumn, $direction)->where('seller_id', Auth::user()->id)->where('is_checkout', 1)->paginate(15); isset($title) AND $query->where('title', 'like', "%{$title}%"); $resourceName = 'Order'; $resource = 'order'; return View::make($this->resourceView.'.seller')->with(compact('trading_order', 'checkout_order', 'resourceName', 'resource')); } /** * Action: Seller send goods with express * @return Response */ public function sendGoods() { // Get all form data. $data = Input::all(); $rules = array( 'id' => 'required|', 'express_name' => 'required', 'invoice_no' => 'required', ); // Custom validation message $messages = array( 'express_name.required' => 'Please fill express company name', 'invoice_no.required' => 'Please fill express billing number', ); // Begin verification $validator = Validator::make($data, $rules, $messages); if ($validator->passes()) { $product_order = ProductOrder::find(Input::get('id')); $product_order->is_express = true; $product_order->express_name = Input::get('express_name'); $product_order->invoice_no = Input::get('invoice_no'); $product_order->save(); // Alipay Dualfun API require_once( app_path('api/alipay/alipay.config.php' )); require_once( app_path('api/alipay/lib/alipay_submit.class.php' )); $trade_no = ProductOrder::where('id', Input::get('id'))->first()->alipay_trade; // Alipay trade number (required) $logistics_name = Input::get('express_name'); // Express company name (required) $invoice_no = Input::get('invoice_no'); // Express billing number $transport_type = "EXPRESS"; // Express type: POST, EXPRESS or EMS // Constructs an array of arguments to request, no need to change 
$parameter = array(
        "service"        => "send_goods_confirm_by_platform",
        "partner"        => trim($alipay_config['partner']),
        "trade_no"       => $trade_no,
        "logistics_name" => $logistics_name,
        "invoice_no"     => $invoice_no,
        "transport_type" => $transport_type,
        "_input_charset" => trim(strtolower($alipay_config['input_charset']))
    );
    // Build and submit the request
    $alipaySubmit = new AlipaySubmit($alipay_config);
    $html_text    = $alipaySubmit->buildRequestHttp($parameter);
    $doc = new DOMDocument();
    $doc->loadXML($html_text);
    return Redirect::back()->with('success', '<strong>Send goods success</strong>');
    } else {
        return Redirect::back()->withInput()->withErrors($validator);
    }
}

/**
 * Action: Customer checkout order
 * @return Response
 */
public function checkout()
{
    if (Input::get('id')) {
        $product_order = ProductOrder::find(Input::get('id'));
        $product_order->is_checkout = true;
        $product_order->save();
        return Redirect::back()->with('success', 'Success, enjoy it!');
    } else {
        return Redirect::back()->with('error', 'Fail');
    }
}
// ...
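Both `buildRequestForm` and `buildRequestHttp` sign the `$parameter` array with the legacy Alipay MD5 scheme before anything is sent to the gateway. A minimal sketch of that signing step, in Python for brevity (the partner ID, trade number, and security key below are made-up values, and the exact filtering rules belong to the bundled SDK, so treat this as an approximation rather than the SDK's implementation):

```python
import hashlib

def alipay_md5_sign(params: dict, security_key: str) -> str:
    """Sign a request the way the legacy Alipay MD5 scheme does."""
    # Drop sign/sign_type and empty values before signing
    filtered = {k: v for k, v in params.items()
                if v not in (None, "") and k not in ("sign", "sign_type")}
    # Sort by parameter name and join as key=value pairs
    prestr = "&".join(f"{k}={filtered[k]}" for k in sorted(filtered))
    # Append the merchant security key directly, then take the MD5 hex digest
    return hashlib.md5((prestr + security_key).encode("utf-8")).hexdigest()

params = {
    "service": "send_goods_confirm_by_platform",
    "partner": "2088000000000000",   # hypothetical partner id
    "trade_no": "2014040811001001",  # hypothetical Alipay trade number
    "logistics_name": "EMS",
    "invoice_no": "EA123456789CN",
    "transport_type": "EXPRESS",
    "_input_charset": "utf-8",
}
signature = alipay_md5_sign(params, "my-security-key")
```

Because the pairs are sorted by key before hashing, the signature is independent of the order in which the parameters were assembled, which is why the PHP code can build `$parameter` in any order.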
} This file should be saved as app/controllers/Admin/ProductCategoriesResource.php <?php class Admin_ProductResource extends BaseResource { /** * Resource view directory * @var string */ protected $resourceView = 'admin.product'; /** * Model name of the resource, after initialization to a model instance * @var string|Illuminate\Database\Eloquent\Model */ protected $model = 'Product'; /** * Resource identification * @var string */ protected $resource = 'product'; /** * Resource database tables * @var string */ protected $resourceTable = 'products'; /** * Resource name * @var string */ protected $resourceName = 'Goods'; /** * Custom validation message * @var array */ protected $validatorMessages = array( 'title.required' => 'Please fill goods name', 'price.required' => 'Please fill goods price', 'price.numeric' => 'Goods price must be a number', 'quantity.required' => 'Please fill quantity of goods', 'quantity.integer' => 'Quantity of good must be a integer', 'province.required' => 'Please select province and city', 'content.required' => 'Please fill content of goods', 'category.exists' => 'Please select a category of this goods', ); /** * Resource list view * GET /resource * @return Response */ public function index() { // Get sort conditions $orderColumn = Input::get('sort_up', Input::get('sort_down', 'created_at')); $direction = Input::get('sort_up') ? 
'asc' : 'desc' ;
    // Get search conditions
    switch (Input::get('target')) {
        case 'title':
            $title = Input::get('like');
            break;
    }
    // Construct query statement
    $query = $this->model->orderBy($orderColumn, $direction);
    isset($title) AND $query->where('title', 'like', "%{$title}%");
    $datas = $query->paginate(15);
    return View::make($this->resourceView.'.index')->with(compact('datas'));
}

/**
 * Resource create view
 * GET /resource/create
 * @return Response
 */
public function create()
{
    if (Auth::user()->alipay == NULL) {
        return Redirect::route('account.settings')
            ->with('info', 'Notice: you need to set an Alipay account before selling goods here');
    } else {
        $categoryLists = ProductCategories::lists('name', 'id');
        return View::make($this->resourceView.'.create')->with(compact('categoryLists'));
    }
}

/**
 * Resource create action
 * POST /resource
 * @return Response
 */
public function store()
{
    // Get all form data.
    $data = Input::all();
    // Create validation rules
    $unique = $this->unique();
    $rules  = array(
        'title'    => 'required|'.$unique,
        'price'    => 'required|numeric',
        'quantity' => 'required|integer',
        'content'  => 'required',
        'category' => 'exists:product_categories,id',
        'province' => 'required',
    );
    $slug     = Input::input('title');
    $hashslug = date('H.i.s').'-'.md5($slug).'.html';
    // Custom validation message
    $messages = $this->validatorMessages;
    // Begin verification
    $validator = Validator::make($data, $rules, $messages);
    if ($validator->passes()) {
        // Verification success
        // Add resource
        $model              = $this->model;
        $model->user_id     = Auth::user()->id;
        $model->category_id = $data['category'];
        $model->title       = e($data['title']);
        $model->province    = e($data['province']);
        $model->city        = e($data['city']);
        $model->price       = e($data['price']);
        $model->quantity    = e($data['quantity']);
        $model->slug        = $hashslug;
        $model->content     = e($data['content']);
        $model->meta_title       = e($data['title']);
        $model->meta_description = e($data['title']);
        $model->meta_keywords    = e($data['title']);
        if ($model->save()) {
            // Add
success return Redirect::back() ->with('success', '<strong>'.$this->resourceName.'post success</strong>'); } else { // Add fail return Redirect::back() ->withInput() ->with('error', '<strong>'.$this->resourceName.'post fail</strong>'); } } else { // Verification fail return Redirect::back()->withInput()->withErrors($validator); } } /** * Resource edit view * GET /resource/{id}/edit * @param int $id * @return Response */ public function edit($id) { $data = $this->model->find($id); $categoryLists = ProductCategories::lists('name', 'id'); $product = Product::where('slug', $data->slug)->first(); return View::make($this->resourceView.'.edit')->with(compact('data', 'categoryLists', 'product')); } /** * Resource edit action * PUT/PATCH /resource/{id} * @param int $id * @return Response */ public function update($id) { // Get all form data. $data = Input::all(); // Create validation rules $rules = array( 'title' => 'required', 'content' => 'required', 'slug' => 'required|'.$this->unique('slug', $id), 'category' => 'exists:product_categories,id', 'province' => 'required', ); $model = $this->model->find($id); $oldSlug = $model->slug; // Custom validation message $messages = $this->validatorMessages; // Begin verification $validator = Validator::make($data, $rules, $messages); if ($validator->passes()) { // Verification success // Update resource $model = $this->model->find($id); $model->user_id = Auth::user()->id; $model->category_id = $data['category']; $model->title = e($data['title']); $model->province = e($data['province']); $model->city = e($data['city']); $model->slug = e($data['slug']); $model->content = e($data['content']); $model->meta_title = e($data['title']); $model->meta_description = e($data['title']); $model->meta_keywords = e($data['title']); if ($model->save()) { // Update success return Redirect::back() ->with('success', '<strong>'.$this->resourceName.'update success</strong>'); } else { // Update fail return Redirect::back() ->withInput() ->with('error', 
'<strong>'.$this->resourceName.'update fail</strong>'); } } else { // Verification fail return Redirect::back()->withInput()->withErrors($validator); } } /** * Resource destory action * DELETE /resource/{id} * @param int $id * @return Response */ public function destroy($id) { $data = $this->model->find($id); if (is_null($data)) return Redirect::back()->with('error', 'Can\'t find '.$this->resourceName.'。'); elseif ($data) { $model = $this->model->find($id); $thumbnails = $model->thumbnails; File::delete(public_path('uploads/product_thumbnails/'.$thumbnails)); $data->delete(); return Redirect::back()->with('success', $this->resourceName.'Delete success'); } else return Redirect::back()->with('warning', $this->resourceName.'Delete fail'); } /** * Action: Add resource images * @return Response */ public function postUpload($id) { $input = Input::all(); $rules = array( 'file' => 'image|max:3000', ); $validation = Validator::make($input, $rules); if ($validation->fails()) { return Response::make($validation->errors->first(), 400); } $file = Input::file('file'); $destinationPath = 'uploads/products/'; $ext = $file->guessClientExtension(); // Get real extension according to mime type $fullname = $file->getClientOriginalName(); // Client file name, including the extension of the client $hashname = date('H.i.s').'-'.md5($fullname).'.'.$ext; // Hash processed file name, including the real extension $picture = Image::make($file->getRealPath()); // crop the best fitting ratio and resize image $picture->fit(1024, 683)->save(public_path($destinationPath.$hashname)); $picture->fit(585, 347)->save(public_path('uploads/product_thumbnails/'.$hashname)); $model = $this->model->find($id); $oldThumbnails = $model->thumbnails; $model->thumbnails = $hashname; $model->save(); File::delete(public_path('uploads/product_thumbnails/'.$oldThumbnails)); $models = new ProductPictures; $models->filename = $hashname; $models->product_id = $id; $models->user_id = Auth::user()->id; $models->save(); 
if( $models->save() ) { return Response::json('success', 200); } else { return Response::json('error', 400); } } /** * Action: Delete resource images * @return Response */ public function deleteUpload($id) { // Only allows you to share pictures on the cover of the current resource being deleted $filename = ProductPictures::where('id', $id)->where('user_id', Auth::user()->id)->first(); $oldImage = $filename->filename; if (is_null($filename)) return Redirect::back()->with('error', 'Can\'t find picture'); elseif ($filename->delete()) { File::delete( public_path('uploads/products/'.$oldImage) ); return Redirect::back()->with('success', 'Delete success'); } else return Redirect::back()->with('warning', 'Delete fail'); } // ... } ### Laravel E-Commerce with Alipay Dualfun Develope Part III - Views and Routes Content Laravel E-Commerce with Alipay Dualfun Develope Part I - Creating Database Objects Laravel E-Commerce with Alipay Dualfun Develope Part II - Creating Models Laravel E-Commerce with Alipay Dualfun Develope Part III - Views and Routes Laravel E-Commerce with Alipay Dualfun Develope Part IV - Controllers Laravel E-Commerce with Alipay Dualfun Develope Part V - Alipay Payment API Views tree . ├── CONTRIBUTING.md ├── app │   ├── api │   │   ├── alipay │   │   │   ├── alipay.config.php │   │   │   ├── alipayapi.php │   │   │   ├── cacert.pem │   │   │   ├── images │   │   │   │   ├── alipay.gif │   │   │   │   └── new-btn-fixed.png │   │   │   ├── index.php │   │   │   ├── lib │   │   │   │   ├── alipay_core.function.php │   │   │   │   ├── alipay_md5.function.php │   │   │   │   ├── alipay_notify.class.php │   │   │   │   └── alipay_submit.class.php │   │   │   ├── log.txt │   │   │   ├── notify_url.php │   │   │   ├── readme.txt │   │   │   └── return_url.php │   ├── commands │   ├── config │   ├── controllers │   ├── database │   ├── filters.php │   ├── functions.php │   ├── lang │   ├── models │   ├── routes.php │   ├── ... 
│   ├── storage │   ├── views │   ├── account │   │   ├── order │   │   │   ├── customerOrderDetails.blade.php │   │   │   ├── index.blade.php │   │   │   ├── order.blade.php │   │   │   ├── return.blade.php │   │   │   ├── seller.blade.php │   │   │   └── sellerOrderDetails.blade.php │   │   ├── product │   │   │   ├── cart.blade.php │   │   │   ├── comments.blade.php │   │   │   ├── create.blade.php │   │   │   ├── edit.blade.php │   │   │   ├── index.blade.php │  │ │ └── ... │   │   ├── ... │   ├── admin │   │   ├── product │   │   │   ├── create.blade.php │   │   │   ├── edit.blade.php │   │   │   └── index.blade.php │   │   ├── product_categories │   │   │   ├── create.blade.php │   │   │   ├── edit.blade.php │   │   │   ├── index.blade.php │  │ │  └── ... │   │   └── users │   │   ├── create.blade.php │   │   ├── edit.blade.php │   │   ├── index.blade.php │  │ └── ... │   ├── product │   │   ├── content.blade.php │   │   ├── gallery.blade.php │   │   ├── index.blade.php │   │   ├── show.blade.php │   │   └── sidebar.blade.php │   ├── ... ├── artisan ├── public ├── ... Routes This file should be saved as app/routes.php <?php // ... /* |-------------------------------------------------------------------------- | User Routes |-------------------------------------------------------------------------- | */ Route::group(array('prefix' => 'account', 'before' => 'auth'), function () { $Account = 'AccountController@'; // ... 
# Product
Route::group(array('prefix' => 'myproduct'), function () {
    $resource   = 'myproduct';
    $controller = 'ProductController@';
    Route::get(   '/',             array('as' => $resource.'.index',         'uses' => $controller.'index'));
    Route::get(   'create',        array('as' => $resource.'.create',        'uses' => $controller.'create'));
    Route::post(  '/',             array('as' => $resource.'.store',         'uses' => $controller.'store'));
    Route::get(   '{id}/edit',     array('as' => $resource.'.edit',          'uses' => $controller.'edit'));
    Route::put(   '{id}',          array('as' => $resource.'.update',        'uses' => $controller.'update'));
    Route::post(  '{id}',          array('as' => $resource.'.postUpload',    'uses' => $controller.'postUpload'));
    Route::delete('{id}/edit',     array('as' => $resource.'.deleteUpload',  'uses' => $controller.'deleteUpload'));
    Route::delete('{id}',          array('as' => $resource.'.destroy',       'uses' => $controller.'destroy'));
    Route::get(   'cart',          array('as' => $resource.'.cart',          'uses' => $controller.'cart'));
    // A second DELETE '{id}' route would never be reached, so give this one a distinct URI
    Route::delete('cart/{id}',     array('as' => $resource.'.destroyGoods',  'uses' => $controller.'destroyGoods'));
    Route::get(   'comments',      array('as' => $resource.'.comments',      'uses' => $controller.'comments'));
    Route::delete('comments/{id}', array('as' => $resource.'.deleteComment', 'uses' => $controller.'deleteComment'));
});
# Order
Route::group(array('prefix' => 'order'), function () {
    $resource   = 'order';
    $controller = 'ProductOrderController@';
    Route::get(   '/',                         array('as' => $resource.'.index',                'uses' => $controller.'index'));
    Route::get(   '{id}/order',                array('as' => $resource.'.order',                'uses' => $controller.'order'));
    Route::get(   '{id}/customerOrderDetails', array('as' => $resource.'.customerOrderDetails', 'uses' => $controller.'customerOrderDetails'));
    Route::get(   '{id}/sellerOrderDetails',   array('as' => $resource.'.sellerOrderDetails',   'uses' => $controller.'sellerOrderDetails'));
    Route::delete('{id}',                      array('as' => $resource.'.destroyOrder',         'uses' => $controller.'destroyOrder'));
    Route::post(
'payment', array('as' => $resource.'.payment' , 'uses' => $controller.'payment' )); Route::post( 'rePayment', array('as' => $resource.'.rePayment' , 'uses' => $controller.'rePayment' )); Route::post( 'trade-notify', array('as' => $resource.'.tradeNotify' , 'uses' => $controller.'tradeNotify' )); Route::get( 'trade-return', array('as' => $resource.'.tradeReturn' , 'uses' => $controller.'tradeReturn' )); Route::get( 'seller', array('as' => $resource.'.seller' , 'uses' => $controller.'seller' )); Route::post( 'send-goods', array('as' => $resource.'.sendGoods' , 'uses' => $controller.'sendGoods' )); Route::post( 'checkout', array('as' => $resource.'.checkout' , 'uses' => $controller.'checkout' )); }); # Update basic information Route::get('settings' , array('as' => 'account.settings' , 'uses' => $Account.'getSettings' )); Route::put('settings' , $Account.'putSettings'); // ... }); /* |-------------------------------------------------------------------------- | Admin Dashboard |-------------------------------------------------------------------------- */ Route::group(array('prefix' => 'admin', 'before' => 'auth|admin'), function () { $Admin = 'AdminController@'; // ... 
# Product category management Route::group(array('prefix' => 'product-categories'), function () { $resource = 'product_categories'; $controller = 'Admin_ProductCategoriesResource@'; Route::get( '/', array('as' => $resource.'.index' , 'uses' => $controller.'index' )); Route::get( 'create', array('as' => $resource.'.create' , 'uses' => $controller.'create' )); Route::post( '/', array('as' => $resource.'.store' , 'uses' => $controller.'store' )); Route::get( '{id}/edit', array('as' => $resource.'.edit' , 'uses' => $controller.'edit' )); Route::put( '{id}', array('as' => $resource.'.update' , 'uses' => $controller.'update' )); Route::post( '{id}', array('as' => $resource.'.postUpload' , 'uses' => $controller.'postUpload' )); Route::delete('{id}/edit', array('as' => $resource.'.deleteUpload' , 'uses' => $controller.'deleteUpload')); Route::delete( '{id}', array('as' => $resource.'.destroy' , 'uses' => $controller.'destroy' )); }); # Product management Route::group(array('prefix' => 'product'), function () { $resource = 'product'; $controller = 'Admin_ProductResource@'; Route::get( '/', array('as' => $resource.'.index' , 'uses' => $controller.'index' )); Route::get( 'create', array('as' => $resource.'.create' , 'uses' => $controller.'create' )); Route::post( '/', array('as' => $resource.'.store' , 'uses' => $controller.'store' )); Route::get( '{id}/edit', array('as' => $resource.'.edit' , 'uses' => $controller.'edit' )); Route::put( '{id}', array('as' => $resource.'.update' , 'uses' => $controller.'update' )); Route::post( '{id}', array('as' => $resource.'.postUpload' , 'uses' => $controller.'postUpload' )); Route::delete('{id}/edit', array('as' => $resource.'.deleteUpload' , 'uses' => $controller.'deleteUpload')); Route::delete( '{id}', array('as' => $resource.'.destroy' , 'uses' => $controller.'destroy' )); }); // ... }); // ... 
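The route definitions above build each route name by concatenating a resource prefix with an action name (`$resource.'.index'` and so on), so views and controllers can refer to a route like `product.edit` without hard-coding URLs. A plain-shell stand-in for that naming scheme (the resource and action names below are illustrative, not taken from the app):

```shell
# Mirrors the PHP concatenation $resource.'.'.$action used in the route
# groups above; each iteration prints one fully qualified route name.
resource="product"
for action in index create store edit update destroy; do
  echo "${resource}.${action}"
done
```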
### Laravel E-Commerce with Alipay Dualfun Development Part II - Creating Models Content Laravel E-Commerce with Alipay Dualfun Development Part I - Creating Database Objects Laravel E-Commerce with Alipay Dualfun Development Part II - Creating Models Laravel E-Commerce with Alipay Dualfun Development Part III - Views and Routes Laravel E-Commerce with Alipay Dualfun Development Part IV - Controllers Laravel E-Commerce with Alipay Dualfun Development Part V - Alipay Payment API This file should be saved as app/models/User.php <?php use Illuminate\Auth\UserInterface; use Illuminate\Auth\Reminders\RemindableInterface; class User extends BaseModel implements UserInterface, RemindableInterface { public function getRememberToken() { return $this->remember_token; } public function setRememberToken($value) { $this->remember_token = $value; } public function getRememberTokenName() { return 'remember_token'; } /** * Database table (without prefix) * @var string */ protected $table = 'users'; /** * Soft delete * @var boolean */ protected $softDelete = true; /** * The attributes excluded from the model's JSON form. * * @var array */ protected $hidden = array('password'); /** * Get the unique identifier for the user. * * @return mixed */ public function getAuthIdentifier() { return $this->getKey(); } /** * Get the password for the user. * * @return string */ public function getAuthPassword() { return $this->password; } /** * Get the e-mail address where password reminders are sent.
* * @return string */ public function getReminderEmail() { return $this->email; } /** * Accessor: Get friendly sign-in date * @return string */ public function getFriendlySigninAtAttribute() { if (is_null($this->signin_at)) return 'New account'; else return friendly_date($this->signin_at); } /** * Mutator: Password * @param string $value Plain-text or already hashed password string * @return void */ public function setPasswordAttribute($value) { // If the given string has already been hashed, do not hash it again $this->attributes['password'] = Hash::needsRehash($value) ? Hash::make($value) : $value; } /** * ORM (Modeling object relationships): Orders * @return object Illuminate\Database\Eloquent\Collection */ public function orders() { return $this->hasMany("Order"); } // ... } This file should be saved as app/models/Product.php <?php use \Michelf\MarkdownExtra; /** * Product */ class Product extends BaseModel { /** * Database table (without prefix) * @var string */ protected $table = 'products'; /** * Soft delete * @var boolean */ protected $softDelete = true; /** * ORM (Modeling object relationships): Product category * @return object Category */ public function category() { return $this->belongsTo('ProductCategories', 'category_id'); } /** * ORM (Modeling object relationships): Seller * @return object User */ public function user() { return $this->belongsTo('User', 'user_id'); } /** * ORM (Modeling object relationships): Comments of goods * @return object Illuminate\Database\Eloquent\Collection */ public function comments() { return $this->hasMany('ProductComment', 'product_id'); } /** * ORM (Modeling object relationships): Picture of goods * @return object Illuminate\Database\Eloquent\Collection */ public function pictures() { return $this->hasMany('ProductPictures', 'product_id'); } // ...
} This file should be saved as app/models/ProductCategories.php <?php /** * Product categories */ class ProductCategories extends BaseModel { /** * Database table (without prefix) * @var string */ protected $table = 'product_categories'; /** * Soft delete * @var boolean */ protected $softDelete = true; /** * ORM (Modeling object relationships): goods in product category * @return object Illuminate\Database\Eloquent\Collection */ public function product() { return $this->hasMany('Product', 'category_id'); } // ... } This file should be saved as app/models/ProductComment.php <?php /** * Product comments */ class ProductComment extends BaseModel { /** * Database table (without prefix) * @var string */ protected $table = 'product_comments'; /** * Soft delete * @var boolean */ protected $softDelete = true; /** * ORM (Modeling object relationships): parent product * @return object Product */ public function product() { return $this->belongsTo('Product', 'product_id'); } /** * ORM (Modeling object relationships): comment author * @return object User */ public function user() { return $this->belongsTo('User', 'user_id'); } // ... } This file should be saved as app/models/ProductPictures.php <?php /** * Product Pictures */ class ProductPictures extends BaseModel { /** * Database table (without prefix) * @var string */ protected $table = 'product_pictures'; /** * Soft delete * @var boolean */ protected $softDelete = true; /** * ORM (Modeling object relationships): Parent product * @return object Product */ public function product() { return $this->belongsTo('Product', 'product_id'); } // ...
} This file should be saved as app/models/ProductOrder.php <?php use \Michelf\MarkdownExtra; /** * ProductOrder */ class ProductOrder extends BaseModel { /** * Database table (without prefix) * @var string */ protected $table = 'product_orders'; /** * Soft delete * @var boolean */ protected $softDelete = true; /** * ORM (Modeling object relationships): Order owner * @return object User */ public function order() { return $this->belongsTo('User', 'user_id'); } // ... } This file should be saved as app/models/ShoppingCart.php <?php use \Michelf\MarkdownExtra; /** * ShoppingCart */ class ShoppingCart extends BaseModel { /** * Database table (without prefix) * @var string */ protected $table = 'product_cart'; /** * Soft delete * @var boolean */ protected $softDelete = false; /** * ORM (Modeling object relationships): Seller * @return object User */ public function seller() { return $this->belongsTo('User', 'user_id'); } // ... } ### Laravel E-Commerce with Alipay Dualfun Development Part I - Creating Database Objects This tutorial will show you how to create an online shop with an Alipay payment function in the Laravel framework. Install Laravel and the other dependencies first. Content Laravel E-Commerce with Alipay Dualfun Development Part I - Creating Database Objects Laravel E-Commerce with Alipay Dualfun Development Part II - Creating Models Laravel E-Commerce with Alipay Dualfun Development Part III - Views and Routes Laravel E-Commerce with Alipay Dualfun Development Part IV - Controllers Laravel E-Commerce with Alipay Dualfun Development Part V - Alipay Payment API Creating Database Objects For our online shop, we're going to need categories for products to be sorted into, products, and accounts. We'll also need orders and order items, to track which items have been sold.
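Each of the objects listed above gets its own table and migration. Laravel's artisan generator prefixes every migration file with a timestamp, which is why the file names in this post appear as xxxx_xx_xx_xxxx_create_*_table.php. A plain-shell stand-in for that naming pattern (the date format and the 000000 suffix here are illustrative, not artisan's exact output):

```shell
# Sketch of the <timestamp>_create_<table>_table.php naming convention,
# one line per table the shop needs.
stamp="$(date +%Y_%m_%d)_000000"
for t in users product_categories products product_pictures \
         product_comments product_cart product_order; do
  echo "${stamp}_create_${t}_table.php"
done
```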
Creating Migrations $ php artisan migrate:make create_users_table $ php artisan migrate:make create_product_categories_table $ php artisan migrate:make create_products_table $ php artisan migrate:make create_product_pictures_table $ php artisan migrate:make create_product_comments_table $ php artisan migrate:make create_product_cart_table $ php artisan migrate:make create_product_order_table app/database/migrations/xxxx_xx_xx_xxxx_create_users_table.php <?php use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateUsersTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('users', function(Blueprint $table) { $table->increments('id'); $table->string('email'); $table->string('password'); $table->string('username', 60)->nullable(); $table->string('nickname', 60)->nullable(); $table->string('sex',4)->nullable(); $table->text('bio')->nullable(); $table->string('born_year')->nullable(); $table->string('born_month')->nullable(); $table->string('born_day')->nullable(); $table->text('home_province')->nullable(); $table->text('home_city')->nullable(); $table->text('home_address')->nullable(); $table->string('alipay')->nullable(); $table->string('phone')->nullable(); $table->string('portrait')->nullable(); $table->string('remember_token')->nullable(); $table->boolean('is_admin')->default('0'); $table->timestamp('deleted_at')->nullable(); $table->timestamp('activated_at')->nullable(); $table->timestamp('signin_at')->nullable(); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::drop('users'); } } app/database/migrations/xxxx_xx_xx_xxxx_create_product_categories_table.php <?php use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateProductCategoriesTable extends Migration { /** * Run the migrations. 
* * @return void */ public function up() { Schema::create('product_categories', function(Blueprint $table) { $table->increments('id'); $table->string('name'); $table->string('sort_order'); $table->text('content')->nullable(); $table->string('thumbnails')->nullable(); $table->timestamp('deleted_at')->nullable(); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::drop('product_categories'); } } app/database/migrations/xxxx_xx_xx_xxxx_create_products_table.php <?php use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateProductsTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('products', function(Blueprint $table) { $table->increments('id'); $table->string('category_id'); $table->string('user_id'); $table->text('title', 100); $table->string('price',100); $table->string('quantity',100)->nullable(); $table->text('slug', 255); $table->text('province')->nullable(); $table->text('city')->nullable(); $table->text('content'); $table->string('thumbnails')->nullable(); $table->smallInteger('comments_count')->default('0'); $table->text('meta_title', 100); $table->text('meta_description', 255); $table->text('meta_keywords', 255); $table->timestamp('deleted_at')->nullable(); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::drop('products'); } } app/database/migrations/xxxx_xx_xx_xxxx_create_product_pictures_table.php <?php use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateProductPicturesTable extends Migration { /** * Run the migrations. 
* * @return void */ public function up() { Schema::create('product_pictures', function(Blueprint $table) { $table->increments('id'); $table->string('product_id')->nullable(); $table->string('user_id'); $table->text('filename', 255); $table->timestamp('deleted_at')->nullable(); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::drop('product_pictures'); } } app/database/migrations/xxxx_xx_xx_xxxx_create_product_comments_table.php <?php use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateProductCommentsTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('product_comments', function(Blueprint $table) { $table->increments('id'); $table->string('user_id')->nullable(); $table->string('product_id'); $table->text('content'); $table->timestamp('deleted_at')->nullable(); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::drop('product_comments'); } } app/database/migrations/xxxx_xx_xx_xxxx_create_product_cart_table.php <?php use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateProductCartTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('product_cart', function(Blueprint $table) { $table->engine = "InnoDB"; $table->increments('id'); $table->integer('buyer_id'); $table->integer("seller_id"); $table->integer("product_id"); $table->integer("quantity"); $table->string("price", 60); $table->string("payment", 60); $table->timestamp('deleted_at')->nullable(); $table->timestamps(); }); } /** * Reverse the migrations. 
* * @return void */ public function down() { Schema::dropIfExists('product_cart'); } } app/database/migrations/xxxx_xx_xx_xxxx_create_product_order_table.php <?php use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateProductOrdersTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('product_orders', function(Blueprint $table) { $table->engine = "InnoDB"; $table->increments('id'); $table->text('order_id'); $table->text('alipay_trade')->nullable(); $table->integer('seller_id'); $table->integer("product_id"); $table->integer('customer_id'); $table->integer("quantity"); $table->string("price", 60); $table->string("payment", 60); $table->text("customer_address", 120); $table->text("express_name", 120)->nullable(); $table->text("invoice_no", 120)->nullable(); $table->boolean('is_payment')->default('0'); $table->boolean('is_express')->default('0'); $table->boolean('is_checkout')->default('0'); $table->timestamp('deleted_at')->nullable(); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::dropIfExists('product_orders'); } } $ php artisan migrate ### Setup VPN With PPTP on Linux Wiki One of the commonly asked questions from our users is how to add another IP address to their server. You can assign your own private IP address to your droplet by creating a VPN tunnel. Whether you want to build your own Virtual Private Network (VPN), or assign an SSL certificate to that IP address, you have several options. Of all the possible options, the best choices are PPTP and OpenVPN. The Point-to-Point Tunneling Protocol (PPTP) allows you to implement your own VPN very quickly, and is compatible with most mobile devices. Even though PPTP is less secure than OpenVPN, it is also faster and uses fewer CPU resources.
PPTP Installation You will have to select one server to be responsible for handing out IPs to others and authenticating all of your servers into your VPN. This will become your PPTP server. On CentOS $ sudo rpm -i http://poptop.sourceforge.net/yum/stable/rhel6/pptp-release-current.noarch.rpm $ sudo yum -y install pptpd On Ubuntu $ sudo apt-get update && sudo apt-get upgrade $ sudo apt-get install pptpd Now you should edit /etc/pptpd.conf and add the following lines localip 10.0.0.1 remoteip 10.0.0.100-200 Where localip is the IP address of your server and remoteip is the range of IPs that will be assigned to the clients that connect to it. Next, you should set up authentication for PPTP by adding users and passwords. Simply add them to /etc/ppp/chap-secrets # Secrets for authentication using CHAP # client server secret IP addresses box1 pptpd password * Where client is the username, server is the type of service – pptpd in our example, secret is the password, and IP addresses specifies which IP addresses may authenticate. By setting * in the IP addresses field, you accept the username/password pair from any IP. Add DNS servers to /etc/ppp/pptpd-options ms-dns 8.8.8.8 ms-dns 8.8.4.4 Now you can start the PPTP daemon $ sudo service pptpd restart Verify that it is running and accepting connections $ sudo netstat -alpn | grep :1723 tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 15442/pptpd Setup Forwarding It is important to enable IP forwarding on your PPTP server. This will allow you to forward packets between the public IP and the private IPs that you set up with PPTP.
Simply edit /etc/sysctl.conf and add the following line if it doesn’t exist there already: net.ipv4.ip_forward = 1 To make the changes active, run $ sudo sysctl -p Create a NAT rule for iptables $ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE && iptables-save If you would also like your PPTP clients to talk to each other, add the following iptables rules $ sudo iptables --table nat --append POSTROUTING --out-interface ppp0 -j MASQUERADE $ sudo iptables -I INPUT -s 10.0.0.0/8 -i ppp0 -j ACCEPT $ sudo iptables --append FORWARD --in-interface eth0 -j ACCEPT Now your PPTP server also acts as a router. If you would like to restrict which servers can connect to your droplets, you can set up an iptables rule that restricts TCP connections to port 1723. Setup Clients on OS X Open System Preferences → Network → Create a new service → Interface: VPN → VPN Type: PPTP → Create → Fill in Service Address and Account Name → Authentication Settings... → Fill in Password → Advanced... → Check Send all traffic over VPN connection → OK → Apply → Connect This post references How To Setup Your Own VPN With PPTP I hope you like this tutorial. Share this tutorial on your favorite media to show your appreciation. Thanks! ### Clean Ubuntu System Linux doesn't produce much junk by itself, but after upgrading, it will not automatically delete the obsolete files left behind.
Clean Cache $ sudo apt-get autoclean # Clean cached packages of old versions $ sudo apt-get clean # Clean the entire package cache $ sudo apt-get autoremove # Remove orphaned packages that are no longer in use Clean up orphaned packages: $ sudo apt-get install deborphan -y Clean Kernel Check the current kernel: $ uname -a List all installed kernels: $ dpkg --get-selections | grep linux Remove unused kernels: $ sudo apt-get remove linux-image-x.x.xx-xx-generic $ sudo apt-get remove linux-headers-x.x.xx-xx $ sudo apt-get remove linux-headers-x.x.xx-xx-generic ### KBTalking Pure Pro 60% Cherry MX Blue Mechanical Keyboard KBtalKing, abbreviated KBT, is a Chinese-language keyboard community. KBtalKing is home to the notable keyboard collector and photographer alps.tw. Cherry Corporation was founded in the United States in 1953 and started producing keyboards in 1967, making them the oldest keyboard manufacturer in the world that’s still in business. The company moved to Germany in 1967 and was bought by ZF Friedrichshafen AG in 2008, but keyboards and mechanical switches are still produced under the Cherry brand. Their most popular line of switches, the Cherry MX series, was introduced around 1985. These switches are usually referenced by their physical colour, with each colour denoting the switch’s handling characteristics – whether it is clicky, whether it is tactile, and how much force is required to actuate the switch, in centi-Newtons (cN) or grams (g). The Cherry MX Blue is the most common clicky switch, and was first made available in Filco keyboards in 2007. Blue switches are favoured by typists due to their tactile bump and audible click, but can be less suitable for gaming as the weighting is relatively high – 50 cN – and it is a bit harder to double tap, as the release point is above the actuation point.
Blue switches are noticeably louder than other mechanical switches, which are already louder than rubber domes, so these switches can be a bit disruptive in close working conditions. ### Simple VPS Configuration and Management Recently, I changed my web hosting from GoDaddy to DigitalOcean. I chose the New York region (nyc2); ping latency from Harbin, China is about 300 ms, and web pages also open faster. The website was hosted on DigitalOcean until June 2022, and on Akamai (Linode) from June 2022 to May 2024. This article records the basic configuration process for a new VPS. Environment Service Provider : DigitalOcean Operating System : Ubuntu 14.04 LTS Trusty Tahr 64-bit Processor : 1 Core Memory : 512MB Hard Disk : 20GB SSD Transfer : 1TB Basic Configuration Determine the virtualization technology $ sudo virt-what Update and Upgrade $ sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y && sudo apt-get autoremove -y && sudo apt-get autoclean For security, a newly installed VPS needs a few simple settings: disable root account login and create a new user. $ sudo adduser username Add the new user to the sudo group $ sudo adduser username sudo Edit the sshd_config file to disable root account login $ sudo vim /etc/ssh/sshd_config Modify ... PermitRootLogin yes ... to ... PermitRootLogin no Restart the SSH service $ sudo service ssh restart Firewall Install UFW and enable it: $ sudo apt-get install ufw Allow custom ports and HTTP(S) services: $ sudo ufw allow 22 $ sudo ufw allow http $ sudo ufw allow https Enable the firewall: $ sudo ufw enable Check the status of the firewall: $ sudo ufw status verbose Build Web Server LAMP reference Installation LAMP ( Apache+MySQL+PHP ) on Ubuntu or run this command in a terminal $ sudo tasksel LNMP reference Installation LNMP ( Nginx+MySQL+PHP ) on Ubuntu Install and Config Website Apache $ sudo vim /etc/apache2/apache2.conf ... add ... ServerName localhost ... and comment out ...
<Directory /> Options FollowSymLinks AllowOverride None Require all denied </Directory> <Directory /usr/share> AllowOverride None Require all granted </Directory> <Directory /var/www/> Options Indexes FollowSymLinks AllowOverride None Require all granted </Directory> Add new Website $ sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/newsite.conf $ sudo vim /etc/apache2/sites-available/newsite.conf Example <VirtualHost *:80> # The ServerName directive sets the request scheme, hostname and port that # the server uses to identify itself. This is used when creating # redirection URLs. In the context of virtual hosts, the ServerName # specifies what hostname must appear in the request's Host: header to # match this virtual host. For the default virtual host (this file) this # value is not decisive as it is used as a last resort host regardless. # However, you must set it for any further virtual host explicitly. #ServerName www.example.com ServerAdmin email[at]example.com DocumentRoot /home/newsite ServerName example.com ServerAlias www.example.com <Directory /> Options FollowSymLinks AllowOverride All </Directory> <Directory /home/newsite> # Anti crawlers SetEnvIfNoCase User-Agent ".*(^$|FeedDemon|JikeSpider|Indy Library|Alexa Toolbar|AskTbFXTV|AhrefsBot|CrawlDaddy|CoolpadWebkit|Java|Feedly|UniversalFeedParser|ApacheBench|Microsoft URL Control|Swiftbot|ZmEu|oBot|jaunty|Python-urllib|lightDeckReports Bot|YYSpider|DigExt|YisouSpider|HttpClient|MJ12bot|heritrix|EasouSpider|Ezooms)" BADBOT deny from env=BADBOT # Options Indexes FollowSymLinks MultiViews Options FollowSymLinks AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride All Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. 
# It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined # For most configuration files from conf-available/, which are # enabled or disabled at a global level, it is possible to # include a line for only one particular virtual host. For example the # following line enables the CGI configuration for this host only # after it has been globally disabled with "a2disconf". #Include conf-available/serve-cgi-bin.conf </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet Enable Website $ sudo a2ensite newsite $ sudo service apache2 reload FTP Server $ sudo apt-get install vsftpd $ sudo service vsftpd start System Testing Install the Apache stress testing tool $ sudo apt-get install apache2-utils Performance Testing, Optimization and Monitoring Reference Setting Up Email Alerts for Network Monitoring with Nagios Nagios – Server Monitoring Scheme Install the Cacti Server Monitor on Ubuntu Server Routine Maintenance and Backup Security SSH login without password Install the SSH service on the remote server with: $ sudo apt-get install openssh-server openssh-client On the local machine, generate a pair of authentication keys. Do not enter a passphrase: $ ssh-keygen -t rsa Upload the local public key $ scp -P 22 ~/.ssh/id_rsa.pub user@[::1]:~/.ssh/localuser_id_rsa.pub Finally, append the new local public key to ~/.ssh/authorized_keys on the remote server: $ cat ~/.ssh/localuser_id_rsa.pub >> ~/.ssh/authorized_keys From now on you can log into the remote server without a password.
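To try the key-generation step without touching your real ~/.ssh, the sketch below creates a throwaway key pair in a scratch directory (assumes ssh-keygen from the openssh-client package is on PATH; the -b 2048 size and the scratch path are my additions, not from the post):

```shell
# Generate a passphrase-less RSA key pair non-interactively:
# -q quiet, -N '' empty passphrase (as in the post), -f output path.
KEYDIR="$(mktemp -d)"
ssh-keygen -q -t rsa -b 2048 -N '' -f "$KEYDIR/id_rsa"
ls "$KEYDIR"
```

The id_rsa.pub file produced here is what the post's scp and cat commands copy into the remote authorized_keys.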
Note that depending on your version of SSH you might also have to make the following changes: Put the public key in .ssh/authorized_keys2 Change the permissions of .ssh to 700 Change the permissions of .ssh/authorized_keys2 to 640 Disable SSH login with password Edit the /etc/ssh/sshd_config file, change PasswordAuthentication yes to PasswordAuthentication no, and restart the ssh service: $ sudo service ssh restart Change all file and folder permissions in a directory to 644/755 For directories $ find . -type d -print0 | sudo xargs -0 chmod 0755 For files $ find . -type f -print0 | sudo xargs -0 chmod 0644 ### Use Tencent QQ API in Laravel Sign in to Tencent QQ Connect, get the App ID and App Key; reference the PHP SDK. The main idea: after the user authorizes with their Tencent QQ account, store the openid, access_token and other information in the local database, and verify the access_token at the user's next sign-in. Create users table I prefer using migrations; run in a terminal: $ php artisan migrate:make create_users_table File: app/database/migrations/2014_07_19_1024_create_users_table.php <?php use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateUsersTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('users', function(Blueprint $table) { $table->increments('id'); $table->string('email'); $table->string('password'); $table->string('username', 60)->nullable(); // ... $table->timestamp('deleted_at')->nullable(); $table->timestamp('activated_at')->nullable(); $table->timestamp('signin_at')->nullable(); $table->timestamps(); }); } /** * Reverse the migrations.
* * @return void */ public function down() { Schema::drop('users'); } } Download and extract the PHP SDK, rename the folder to api, and place it at app/api/qq: ├── app │   ├── api │   │   ├── qq │   │   │   ├── class │   │   │   │   ├── ErrorCase.class.php │   │   │   │   ├── Oauth.class.php │   │   │   │   ├── QC.class.php │   │   │   │   ├── Recorder.class.php │   │   │   │   └── URL.class.php │   │   │   ├── comm │   │   │   │   ├── config.php │   │   │   │   ├── inc.php │   │   │   │   └── utils.php │   │   │   ├── qqConnectAPI.php │   │   │   └── storage │   │   ├── weibo │   │   ├── ... │   ├── commands │   ├── config │   ├── controllers │   ├── database │   ├── lang │   ├── ... ├── bootstrap ├── public ├── readme ├── vendor ├── ... ... File: app/api/qq/comm/inc.php <?php die('forbidden'); ?> {"appid":"xxxxxxxxx","appkey":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","callback":"yourdomain/auth/oauth-qq","scope":"get_user_info,add_share,list_album,add_album,upload_pic,add_topic,add_one_blog,add_weibo,check_page_fans,add_t,add_pic_t,del_t,get_repost_list,get_info,get_other_info,get_fanslist,get_idolist,add_idol,del_idol,get_tenpay_addr","errorReport":true,"storageType":"file","host":"localhost","user":"root","password":"root","database":"test"} File: app/routes.php // ... /* |-------------------------------------------------------------------------- | Basic Competence (Signin and Signup Routes) |-------------------------------------------------------------------------- | */ Route::group(array('prefix' => 'auth'), function () { $Authority = 'AuthorityController@'; // ... # Route Group Route::group(array('before' => 'guest'), function () use ($Authority) { // ... # Oauth QQ Signup Route::get('oauth-qq', array('as' => 'oauth-qq', 'uses' => $Authority.'getOauthQQ')); // ... }); }); // ... File: AuthorityController.php // ...
/** * Action: Oauth QQ * @return Response */ public function getOauthQQ() { include_once( app_path('api/qq/qqConnectAPI.php' )); $qc = new QC(); $callback = $qc->qq_callback(); $openid = $qc->get_openid(); $qc = new QC($callback,$openid); $access_token = $qc->get_access_token(); $arr = $qc->get_user_info(); $nickname = $arr["nickname"]; $credentials = array('email' => $openid, 'password' => $access_token); if (Auth::attempt($credentials)) { // Signin Success return Redirect::intended(); } else { $user = new User; $user->email = $openid; $user->password = $access_token; $user->nickname = $nickname; $user->save(); return View::make('authority.oauthQQ'); } } // ... File: app/views/authority/signup.blade.php // ... <a href="{{ route('home') }}/oauth-qq.php"> <button type="button"> <i class="fa fa-qq fa-2x"></i>Connect Tencent QQ </button> </a> // ... File: app/views/authority/oauthQQ.blade.php // ... <p class="center">Authorization Successful</p> <p> <a href="{{ route('home') }}/oauth-qq.php"> <button type="button"> <i class="fa fa-qq fa-2x"></i> Login with your QQ account </button> </a> </p> // ... File: public/oauth-qq.php <?php require_once("../app/api/qq/qqConnectAPI.php"); $qc = new QC(); $qc->qq_login(); Result Related Links Use Sina Weibo OAuth 2.0 Authorize in Laravel I hope you like this tutorial. Share this tutorial on your favorite media to show your appreciation. Thanks! ### Use Sina Weibo OAuth 2.0 Authorize in Laravel Sign in to the Sina Weibo Open Platform, get the App Key and App Secret; reference the PHP SDK. The main idea: after the user authorizes on Sina Weibo, store the uid, access_token and other information in the local database, and verify the access_token at the user's next sign-in.
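Under the hood, the SDK's getAccessToken call exchanges the code query parameter from the callback for an access token against Weibo's documented token endpoint. A shell sketch of the request it constructs (the key, secret, code, and callback values are placeholders; nothing is actually sent here):

```shell
# Build (but do not send) the OAuth 2.0 authorization-code exchange request.
APP_KEY="xxxxxxxxxx"                      # placeholder App Key
APP_SECRET="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder App Secret
CODE="code-from-callback"                 # placeholder ?code= value
CALLBACK="https://yourdomain/auth/oauth-signup"
TOKEN_URL="https://api.weibo.com/oauth2/access_token"
echo "POST ${TOKEN_URL}?client_id=${APP_KEY}&client_secret=${APP_SECRET}&grant_type=authorization_code&code=${CODE}&redirect_uri=${CALLBACK}"
```

The JSON response to this request carries the access_token and uid that the controller below stores in the users table.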
Create the users table I prefer to use migrations; run in the terminal: $ php artisan migrate:make create_users_table File: app/database/migrations/2014_07_17_0000_create_users_table.php <?php use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateUsersTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('users', function(Blueprint $table) { $table->increments('id'); $table->string('email'); $table->string('password'); $table->string('username', 60)->nullable(); // ... $table->timestamp('deleted_at')->nullable(); $table->timestamp('activated_at')->nullable(); $table->timestamp('signin_at')->nullable(); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::drop('users'); } } Download and extract the PHP SDK, rename the folder to api, and place it at app/api ├── app │   ├── api │   │   ├── callback.php │   │   ├── config.php │   │   ├── index.php │   │   ├── saetv2.ex.class.php │   │   └── weibolist.php │   ├── commands │   ├── config │   ├── controllers │   ├── database │   ├── lang │   ├── ... ├── bootstrap ├── public ├── readme ├── vendor ├── ... ... File: app/api/config.php Fill in your App Key and App Secret, and set the callback URL. <?php header('Content-Type: text/html; charset=UTF-8'); define( "WB_AKEY" , 'xxxxxxxxxx' ); define( "WB_SKEY" , 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' ); define( "WB_CALLBACK_URL" , route('oauth-signup') ); File: route.php // ... /* |-------------------------------------------------------------------------- | Basic Competence (Signin and Signup Routes) |-------------------------------------------------------------------------- | */ Route::group(array('prefix' => 'auth'), function () { $Authority = 'AuthorityController@'; // ... # Route Group Route::group(array('before' => 'guest'), function () use ($Authority) { // ...
# Oauth Signup Route::get('oauth-signup' , array('as' => 'oauth-signup' , 'uses' => $Authority.'getOauthSignup' )); # Oauth Success Route::get('oauth-success', array('as' => 'oauth-success' , 'uses' => $Authority.'getOauthSuccess', 'before' => 'auth' )); // ... }); }); // ... File: AuthorityController.php // ... /** * Action: Oauth 2.0 Signup * @return Response */ public function getOauthSignup() { header("Content-type:text/html;charset=utf-8"); session_start(); include_once( app_path('api/config.php') ); include_once( app_path('api/saetv2.ex.class.php') ); $o = new SaeTOAuthV2( WB_AKEY , WB_SKEY ); $token = null; // Initialize so the check below never hits an undefined variable if (isset($_REQUEST['code'])) { $keys = array(); $keys['code'] = $_REQUEST['code']; $keys['redirect_uri'] = WB_CALLBACK_URL; try { $token = $o->getAccessToken( 'code', $keys ) ; } catch (OAuthException $e) { } } if ($token) { $_SESSION['token'] = $token; setcookie( 'weibojs_'.$o->client_id, http_build_query($token) ); $c = new SaeTClientV2( WB_AKEY , WB_SKEY , $_SESSION['token']['access_token'] ); $ms = $c->home_timeline(); // Done $uid_get = $c->get_uid(); $uid = $uid_get['uid']; $user_message = $c->show_user_by_id($uid); // Get user info by UID $nickname = $user_message['screen_name']; $password = $_SESSION['token']['access_token']; $credentials = array('email' => $uid, 'password' => $password); if (Auth::attempt($credentials)) { // Signin Success return Redirect::intended(); } else { $user = new User; $user->email = $uid; $user->password = $_SESSION['token']['access_token']; $user->nickname = $nickname; $user->save(); return View::make('authority.oauthSuccess'); } } else { return View::make('signup') ->withErrors(array('add' => 'Signup Fail')); } } /** * View: Oauth Success * @return Response */ public function getOauthSuccess() { return View::make('authority.oauthSuccess'); } // ...
File: app/views/authority/signup.blade.php <?php header("Content-type:text/html;charset=utf-8"); session_start(); include_once( app_path('api/config.php') ); include_once( app_path('api/saetv2.ex.class.php') ); $o = new SaeTOAuthV2( WB_AKEY , WB_SKEY ); $code_url = $o->getAuthorizeURL( WB_CALLBACK_URL ); ?> // ... <a href="<?php echo $code_url ?>"> <button type="button"> <i class="fa fa-weibo fa-2x"></i>Connect Sina Weibo </button> </a> // ... File: app/views/authority/oauthSuccess.blade.php <?php header("Content-type:text/html;charset=utf-8"); include_once( app_path('api/config.php') ); include_once( app_path('api/saetv2.ex.class.php') ); $o = new SaeTOAuthV2( WB_AKEY , WB_SKEY ); $code_url = $o->getAuthorizeURL( WB_CALLBACK_URL ); ?> // ... <p class="center">Authorization Successful</p> <p> <a href="<?php echo $code_url ?>"> <button type="button"> <i class="fa fa-weibo fa-2x"></i> Login with your Weibo account </button> </a> </p> // ... Result Issues ErrorException: Undefined variable: token If you can only sign in with your own account and hit this error, try this: sign in to the Sina Weibo Open Platform → Management Center → My Website → click your website → Website Information → Test Account → associate a test account. Notice: curl_setopt(): CURLOPT_SSL_VERIFYHOST no longer accepts the value 1, value 2 will be used instead in saetv2.ex.class.php on line 357 Open saetv2.ex.class.php and comment out the code around line 356: curl_setopt($ci, CURLOPT_SSL_VERIFYHOST, 1); Install Curl on Ubuntu Server $ sudo apt-get install php5-curl After installing, don't forget to restart/reload the web service. Related Links Use Tencent QQ API in Laravel I hope you like this tutorial. Share it on your favorite media to show your appreciation. Thanks! ### Install Webmin on Ubuntu Wiki Webmin is a web-based system configuration tool for Unix-like systems, although recent versions can also be installed and run on Windows.
With it, it is possible to configure operating system internals, such as users, disk quotas, services or configuration files, as well as modify and control open source apps, such as the Apache HTTP Server, PHP or MySQL. Webmin is largely based on Perl, running as its own process and web server. It defaults to TCP port 10000 for communicating, and can be configured to use SSL if OpenSSL is installed with additional required Perl Modules. It is built around modules, which have an interface to the configuration files and the Webmin server. This makes it easy to add new functionality. Due to Webmin's modular design, it is possible for anyone who is interested to write plugins for desktop configuration. Webmin also allows for controlling many machines through a single interface, or seamless login on other webmin hosts on the same subnet or LAN. Webmin is primarily coded by Australian Jamie Cameron and released under the BSD license. Official Website www.webmin.com To get started, log in to your remote server and follow the steps below. Run the commands below in a terminal: $ sudo apt-get update $ sudo apt-get install libnet-ssleay-perl libauthen-pam-perl libio-pty-perl libmd5-perl $ cd /tmp $ wget http://prdownloads.sourceforge.net/webadmin/webmin_1.690_all.deb $ sudo dpkg -i webmin_1.690_all.deb Finally, in your client's web browser, go to the Webmin login page: https://ubuntu-serverip:10000 ### Flush the DNS Cache on OS X Flushing the DNS cache in OS X is easy, but the commands to do so are unique to different versions of OS X. We'll list the commands for the different versions of OS X. You'll need to know which version of OS X you're running on your Mac, so click the Apple logo in the upper left hand corner of your Desktop, and then click "About This Mac" in the pull-down menu. A window will pop up on your desktop listing the version of Mac OS X you are currently running. Make a note of the version, and close the window.
Flush DNS Caches in OS X Yosemite 10.10.4 and OS X El Capitan 10.11 From OS X 10.10.4 moving onward, including 10.11, Apple has ditched discoveryd and has replaced it (or rather, reverted back to) mDNSResponder. Thus, to clear DNS caches in OS X Yosemite 10.10.4, and 10.11 El Capitan, and presumably onward, the command string is as follows: $ sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder;say cache flushed Flush DNS Caches in OS X Yosemite (10.10, 10.10.1, 10.10.2, 10.10.3) $ sudo discoveryutil mdnsflushcache; sudo discoveryutil udnsflushcaches;say flushed Flush DNS Caches OS X 10.9 Mavericks The latest version of OS X requires the following commands entered in the Terminal in order to flush DNS Caches: $ dscacheutil -flushcache; sudo killall -HUP mDNSResponder You will need to enter the administrator password for this task to complete. These commands combine killing mDNSResponder with the standard dscacheutil, making it a two step process to first, flush the cache, then reload the DNS handling in OS X to make the changes take effect. Flush DNS Caches in OS X 10.8 Mountain Lion and 10.7 Lion To flush DNS Cache in OS X Lion (10.7) and OS X Mountain Lion (10.8), launch Terminal and enter the following command. (You will need to enter an administrative password.) 
$ sudo killall -HUP mDNSResponder Flush DNS Cache in Mac OS X 10.6 and Mac OS X 10.5 Launch Terminal and issue the following command: $ dscacheutil -flushcache ### Custom Pagination in Laravel Views // paginator.blade.php <?php $presenter = new Illuminate\Pagination\BootstrapPresenter($paginator); $trans = $environment->getTranslator(); if ($paginator->getLastPage() > 1): /* How many pages need to be shown before and after the current page */ $showBeforeAndAfter = 3; /* Current Page */ $currentPage = $paginator->getCurrentPage(); $lastPage = $paginator->getLastPage(); /* Check if the pages before and after the current really exist */ $start = $currentPage - $showBeforeAndAfter; /* Check if the first page in the pagination goes below 1, and subtract that from the $showBeforeAndAfter var so the pagination won't start with page 0 or below */ if($start < 1) { $diff = $start - 1; $start = $currentPage - ($showBeforeAndAfter + $diff); } $end = $currentPage + $showBeforeAndAfter; if($end > $lastPage) { $diff = $end - $lastPage; $end = $end - $diff; } ?> <div class="pager"> <div class="puls previous"> <?php echo $presenter->getPrevious('← Previous'); ?> </div> <ul class="pagination"> <?php echo $presenter->getPageRange($start, $end); ?> </ul> <div class="puls next"> <?php echo $presenter->getNext('Next →'); ?> </div> </div> <?php endif; ?> Style Sheet .pager { border-top: 1px solid #DEDEDE; margin: 0; padding-top: 30px; font-size: 13px; } .pager .puls li > a, .pager .pagination li a { -webkit-border-radius: 2px; -moz-border-radius: 2px; border-radius: 2px; } .pager .puls { display: inline; } .pager .puls li > a { color: #fff; border: 0; background: #222; padding: 5px 10px; } .pager .puls li > a:hover { color: #fff; background: none repeat scroll 0% 0% #0098F9; } .pager .previous li > a, .pager .previous > span { float: left; } .pager .previous li.disabled > span { float: left; border: solid #ddd 1px; -webkit-border-radius: 2px; -moz-border-radius: 2px; border-radius: 2px; padding:
5px 10px; } .pager .next li > a, .pager .next > span { float: right; } .pager .next .disabled > span { float: right; border: solid #ddd 1px; -webkit-border-radius: 2px; -moz-border-radius: 2px; border-radius: 2px; } .pager .pagination { margin: 0; } .pager .pagination li { display: inline-block; margin: 2.5px; } .pager .pagination li a { background: #DEDEDE; margin: 0; border: 0; padding: 7px 10px; margin: 0; color: #6e6e6e; } .pager .pagination li a:hover { color: #fff; } .pager .pagination>.active>span { background: #222; color: #fff; border: 0; padding: 7px 10px; margin: 0; border-radius: 2px; } Preview Related Article Laravel Ajax Pagination with jQuery Laravel Ajax Multi Pagination in a Single Page ### Assembly Binary Convert An assembly implementation that converts binary to octal, decimal, and hexadecimal. Source Code ;conver.asm DATA SEGMENT MENU db'Please Input your Choice:',0dh,0ah db'1:Bin-->Oct8',0dh,0ah db'2:Bin-->Dec10',0dh,0ah db'3:Bin-->Hex16',0dh,0ah db'0:Exit',0dh,0ah,'$' mess1 db'Please Input a Bin number:$' mess2 db'Output Oct is:$' mess3 db'Output Dec is:$' mess4 db'Output Hex is:$' error db'Error!Please Input your Choice again:$' TABLE DW L0,L1,L2,L3 DATA ENDS CODE SEGMENT ASSUME CS:CODE,DS:DATA START: MOV AX,DATA MOV DS,AX L: LEA DX,MENU ;Display MENU MOV AH,09H INT 21H MOV AH,01H INT 21H SUB AL,30H CMP AL,3 JBE DONE ;If 3 or less, jump to DONE LEA DX,error MOV AH,09H INT 21H JMP L DONE: LEA SI,TABLE MOV AH,0 SHL AX,1 ADD SI,AX JMP [SI] L1: CALL INBIN CALL CRLF CALL OUTOCT CALL CRLF JMP L L2: CALL INBIN CALL CRLF CALL OUTDEC CALL CRLF JMP L L3: CALL INBIN CALL CRLF CALL OUTHEX CALL CRLF JMP L L0: MOV AH,4CH INT 21H CRLF PROC MOV AH,02H MOV DL,0DH INT 21H MOV DL,0AH INT 21H RET CRLF ENDP INBIN PROC NEAR LEA DX,mess1 MOV AH,09H INT 21H MOV BX,0 BINL1: MOV AH,01H INT 21H CMP AL,0DH JZ BINL2 AND AL,01H SHL BX,1 OR BL,AL JMP BINL1 BINL2: RET INBIN ENDP OUTOCT PROC NEAR LEA DX,mess2 MOV AH,09H INT 21H OCT1: ROL BX,1 MOV AX,BX AND AL,01H ADD AL,30H MOV
DL,AL MOV AH,02H INT 21H MOV CH,5 OCT2: MOV CL,3 ROL BX,CL MOV AL,BL AND AL,07H ADD AL,30H MOV DL,AL MOV AH,2 INT 21H DEC CH JNZ OCT2 RET OUTOCT ENDP OUTDEC PROC LEA DX,mess3 MOV AH,09H INT 21H MOV CX,10000 CALL DECL1 MOV CX,1000 CALL DECL1 MOV CX,100 CALL DECL1 MOV CX,10 CALL DECL1 MOV CX,1 CALL DECL1 RET OUTDEC ENDP DECL1 PROC NEAR MOV AX,BX MOV DX,0 DIV CX MOV BX,DX MOV DL,AL ADD DL,30H MOV AH,2 INT 21H RET DECL1 ENDP OUTHEX PROC LEA DX,mess4 MOV AH,09H INT 21H MOV CX,4 MOV AL,0 HEXL1: ROL BX,1 RCL AL,1 ROL BX,1 RCL AL,1 ROL BX,1 RCL AL,1 ROL BX,1 RCL AL,1 MOV DL,AL AND DL,0FH ADD DL,30H CMP DL,39H JBE HEXL2 ADD DL,7 HEXL2: MOV AH,02H INT 21H LOOP HEXL1 RET OUTHEX ENDP CODE ENDS END START C:\MASM\masm>masm conver Microsoft (R) Macro Assembler Version 5.00 Copyright (C) Microsoft Corp 1981-1985, 1987. All rights reserved. Object filename [conver.OBJ]: Source listing [NUL.LST]: Cross-reference [NUL.CRF]: 50778 + 450678 Bytes symbol space free 0 Warning Errors 0 Severe Errors C:\MASM\masm>link conver Microsoft (R) Overlay Linker Version 3.65 Copyright (C) Microsoft Corp 1983-1988. All rights reserved. Run File [CONVER.EXE]: List File [NUL.MAP]: Libraries [.LIB]: LINK : warning L4021: no stack segment C:\MASM\masm>conver.exe Please Input your Choice: 1:Bin-->Oct8 2:Bin-->Dec10 3:Bin-->Hex16 0:Exit _ Download Integrated Assembly Compiler v1.2 filename : visualasm.zip Size : 1.6MB SHA1 : e0f905f3d3b104d3fa3f27ac8a779243f01ca9f2 Last Update : 18 June, 2014 ### Set WordPress Category and Articles Diff Style In this post let's talk about how to give different WordPress categories different list styles, and how to give articles under different categories different article styles. First we should know two WordPress functions, is_category and in_category; please refer to the official documents. In short, the main difference between them is: on a category page, use is_category to determine the category; on a content page, use in_category to determine a post's categories.
Using different list styles for different WordPress categories Create an independent style file by ID or alias WordPress determines the category ID and calls the matching style file of the theme; if it can't find one, it uses archive.php, and if archive.php doesn't exist, it uses index.php. If your category's ID is 2 and its alias is demo, create category-2.php or category-demo.php for this category. Determine the ID to call different style files To make category ID 2 use the thumb_list.php style and the others use title_list.php: <?php if (is_category(2)) { include(TEMPLATEPATH . '/thumb_list.php'); } else { include(TEMPLATEPATH . '/title_list.php'); } To make category IDs 2, 3, 4 use the thumb_list.php style and the others use title_list.php: <?php if (is_category(array(2,3,4))) { include(TEMPLATEPATH . '/thumb_list.php'); } else { include(TEMPLATEPATH . '/title_list.php'); } To make category IDs 2, 3, 4 use thumb_list.php, category IDs 5, 6, 7 use img_list.php, and the others use title_list.php: <?php if (is_category(array(2,3,4))) { include(TEMPLATEPATH . '/thumb_list.php'); } elseif (is_category(array(5,6,7))){ include(TEMPLATEPATH . '/img_list.php'); } else { include(TEMPLATEPATH . '/title_list.php'); } Using different article styles for different WordPress categories To make posts in category IDs 2, 3, 4 use the single1.php style, posts in category IDs 5, 6, 7 use single2.php, and the others use single3.php: <?php if ( in_category(array(2,3,4)) ) { include(TEMPLATEPATH . '/single1.php'); } elseif ( in_category(array(5,6,7))){ include(TEMPLATEPATH . '/single2.php'); } else { include(TEMPLATEPATH . '/single3.php'); } ### Laravel Ajax Multiple Image Upload and Preview This post talks about Ajax multiple image upload and preview with Dropzone.js in Laravel 4, storing file info in the database. Dropzone.js comes with nice CSS and Javascript that make it a breeze to work with the HTML 5 file upload API.
It provides you with a file input container that users can drag and drop files onto, or just click the container to select multiple files from the file system. From the plugin docs: Dropzone.js works in: Chrome 7+ Firefox 4+ IE 10+ Opera 12+ Safari 5+ For all the other browsers, dropzone provides an oldschool file input fallback. Ajax Image Upload Features Ajax image upload without page refreshing using Laravel Instant uploaded image preview Image validation and showing error messages Note: For validation you need to enable php_fileinfo, otherwise it throws an exception. Now let's start the implementation First we need to set routes to display and post the form data. File: routes.php Route::group(array('prefix' => 'articles'), function () { $resource = 'articles'; $controller = 'Admin_ArticleResource@'; // ... Route::post('{id}', array('as' => $resource.'.postUpload', 'uses' => $controller.'postUpload' )); // ... }); Routes are fine. Then we need to process the form data. We could create a dedicated upload controller, but here I create a postUpload function in my ArticleResource controller. File: ArticleResource.php // ...
public function postUpload($id){ $input = Input::all(); $rules = array( 'file' => 'image|max:3000', ); $validation = Validator::make($input, $rules); if ($validation->fails()) { return Response::make($validation->messages()->first(), 400); } $file = Input::file('file'); $destinationPath = 'uploads'; // Get real extension according to mime type $ext = $file->guessClientExtension(); // Client file name, including the extension of the client $fullname = $file->getClientOriginalName(); // Hash processed file name, including the real extension $hashname = date('H.i.s').'-'.md5($fullname).'.'.$ext; $upload_success = Input::file('file')->move($destinationPath, $hashname); $models = new Picture; $models->filename = $hashname; $models->article_id = $id; $models->user_id = Auth::user()->id; $models->save(); if( $upload_success ) { return Response::json('success', 200); } else { return Response::json('error', 400); } } // ... Create an uploads folder in your application's public directory (public/uploads/), then make it writable (e.g. chmod 777). Picture Model File: Picture.php <?php /** * Article Pictures */ class Picture extends BaseModel { /** * Database table name (without prefix) * @var string */ protected $table = 'article_pictures'; /** * Soft delete * @var boolean */ protected $softDelete = true; /** * Relationship: the article this picture belongs to * @return object Article */ public function article() { return $this->belongsTo('Article', 'article_id'); } } View blade file: article/edit.blade.php // ... {{ script('dropzone') }} // Require Dropzone.js // ... <form action="{{ route($resource.'.postUpload', $data->id) }}" class="dropzone" id="upload"> <input type="hidden" name="_token" value="{{ csrf_token() }}" /> </form> // ...
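On the client side, Dropzone can be tuned to mirror the server-side validation. Here is a minimal sketch of an options object (Dropzone 3.x option names; the values mirroring the image|max:3000 rule are my assumptions, so adjust them for your setup):

```javascript
// Hypothetical client-side options for the #upload form above (Dropzone 3.x).
const uploadOptions = {
  paramName: "file",  // field name the server reads via Input::file('file')
  maxFilesize: 3,     // Dropzone measures in MB; mirrors the 3000 KB server rule
  init: function () {
    // Surface the server's validation message on HTTP 400 responses.
    this.on("error", function (file, message) {
      console.log("upload failed:", message);
    });
  },
};
```

With auto-discovery on (the default), assign this as Dropzone.options.upload so the form with id="upload" picks it up; with autoDiscover turned off, create the instance manually with new Dropzone("#upload", uploadOptions).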
Note: I've defined dropzone.js in app/config/extend.php File: extend.php <?php /* |-------------------------------------------------------------------------- | Extend File Configuration |-------------------------------------------------------------------------- | */ return array( /** * Static resource files alias configuration */ 'webAssets' => array( // ... 'jsAliases' => array( // Script alias configuration // ... 'dropzone' => 'assets/dropzone-3.10.2/lib/dropzone.js', // ... ), ), ); Note: If you get the error Dropzone already attached. when creating the Dropzone, try to turn off autoDiscover globally like this: Dropzone.autoDiscover = false; ... or turn off autoDiscover of specific elements like this: Dropzone.options.myAwesomeDropzone = false; I hope you like this tutorial. Share it on your favorite media to show your appreciation. Thanks! ### ABBYY FineReader OCR Pro for Mac Wiki Easily transform paper documents, PDFs and digital photos of text into editable and searchable files with ABBYY FineReader Pro for Mac. No more manual retyping or reformatting. Instead, you can edit, search, share, archive, and copy information from documents for reuse and quotation — saving your time, effort and hassles. FineReader Pro combines unmatched OCR quality, accurate document formatting reconstruction and unsurpassed language support with an intuitive interface. Official Website www.abbyy.com View in App Store Download ABBYY FineReader OCR Pro for Mac V12.0.3 ( Powered by Baidu ) Download Key: 7rup File : FineReader.dmg Size : 538.5MB SHA1 : 02f914d1c24ddd0e95171e90b7d81d689fe37b8e Update : June 2nd, 2014 Warning For learning and exchange purposes only, not for commercial use. If you like this software, please support the developer and buy a genuine copy, and delete the download within 24 hours.
### OS X Related Books OS X Mountain Lion 高手进阶 Publication Date: April, 2013 | ISBN: 9787121197321 | Edition: 1 About the Author As a writer, he previously authored 《苹果达人 Mac OS X玩家秘技》. He is now a columnist for the Chinese edition of Macworld and an independent consultant for Macworld Expo Asia. As an independent developer, the first free OS X system utility he wrote in 2009, mfTunekit, has since been downloaded more than 100,000 times, and several of his apps, including iChecker, ezText, DesktopMonitor and DesktopForecasting, have gone on sale in the Mac App Store. He is also the author of Ath Exam and the SNMP JMeter Plugin. Currently working on startups, he is a co-founder of 北京源一盛世软件开发有限公司 (www.ioriginal.com.cn) and 北京盛山创想科技有限公司 (www.crazybox.cn). Synopsis Released in July 2012, OS X Mountain Lion is the most popular release in the history of OS X. With more than a hundred new features and a more stable kernel, it has been warmly received by OS X users. Starting with this release, OS X became much easier to use, so that users accustomed to iOS devices such as the iPhone and iPad began to consider moving to the OS X platform for their work. This book therefore does not dwell on the OS X system itself, which is already simple and intuitive, but starts from the application level of the OS X platform, helping readers understand this powerful system and offering guidance on using it to accomplish all kinds of everyday work. If you are already familiar with OS X Mountain Lion or earlier versions and want to learn more about advanced usage, troubleshooting, and excellent third-party software, this book is for you. ### Wiener Philharmoniker Summer Night Concert Schönbrunn 2014 Date: Thursday, May 29, 2014 Time: 8:30 PM Place: Schönbrunn Palace, Palace Gardens (in front of the Neptune Fountain) 1130 Vienna Conductor: Christoph Eschenbach Soloist: Lang Lang, Piano Program Hector Berlioz Le carnaval romain. Ouvertüre caractéristique, op. 9 Franz Liszt Mazeppa. Symphonic Poem Richard Strauss Burlesque for Piano and Orchestra in D minor Hector Berlioz Overture to the Opera "Benvenuto Cellini" Richard Strauss "Till Eulenspiegel's Merry Pranks", op. 28 The Vienna Philharmonic performs its annual Summer Night Concert Schönbrunn, an open air event with free admission, in the unique ambience of the gardens of Schönbrunn Palace on Thursday, May 29, 2014. The Summer Night Concert will be conducted for the first time by Christoph Eschenbach. The soloist is Lang Lang. With this open air concert in Schönbrunn, the Philharmonic wishes to provide all Viennese, as well as visitors to the city, with a special musical experience in the impressive setting of Schönbrunn Palace and its beautiful baroque gardens, a UNESCO World Cultural Heritage Site. Because there is no admission charge for this concert, no reservations are possible.
Christoph Eschenbach Born in Breslau, Germany, Christoph Eschenbach overcame the most difficult of circumstances to become one of the finest pianists and conductors of the late 20th century. In demand as a distinguished guest conductor with the finest orchestras and opera houses throughout the world, Christoph Eschenbach began his tenure in September 2010 as Music Director of the National Symphony Orchestra. Christoph Eschenbach's successful international career as a pianist began when he won the First Prize at the Clara Haskil Competition in Lucerne in 1965. He celebrated his concert debut in the USA in 1969 with the Cleveland Orchestra under Georg Szell. Szell was impressed with his musicianship and gave him lessons in conducting, starting a close relationship that lasted until Szell's death in 1970. Eschenbach's conducting debut took place in 1972. Since that time, Christoph Eschenbach has directed numerous prestigious orchestras such as the Zurich Tonhalle Orchestra, the Houston Symphony Orchestra, the NDR Symphony Hamburg, the Orchestre de Paris and the Philadelphia Orchestra. A prolific recording artist over five decades, Christoph Eschenbach has an impressive discography as both a conductor and a pianist on a number of prominent labels. His recordings include works ranging from J.S. Bach to music of our time, and reflect his commitment to not just canonical works but also the music of the late-20th and early-21st century. Lang Lang Lang Lang inspires millions around the world with his open-hearted, emotive piano playing, whether it be in intimate recitals or on the grandest of stages. He started playing piano aged three, won the Shenyang Competition and gave his first public recital by the time he was five. He entered Beijing’s Central Music Conservatory aged nine, won First Prize at the Tchaikovsky International Young Musicians’ Competition and played the complete Chopin Etudes at the Beijing Concert Hall at 13. 
His international breakthrough came at the age of 17 in a performance of the Tchaikovsky Concerto with the Chicago Symphony Orchestra. Lang Lang has been featured on every major TV network and in magazines worldwide. He has performed for international dignitaries and in 2013 he was designated by the Secretary General of the United Nations as a Messenger of Peace focusing on global education. Lang Lang is a role model who has helped to encourage more than 40 million Chinese children to learn to play the piano. In 2011, Lang Lang Music World was launched, a multi-functioning arts complex in Shenzhen and Chongqing, China, where children can go to receive piano education, participate in master classes and competitions, attend concerts and purchase educational products. Lang Lang himself continues to give master classes regularly throughout the world at the invitation of the most prestigious musical institutions. Tens of thousands of people have enjoyed Lang Lang’s performances in open air concerts in parks and venues around the globe. One of these performances was in 2008 when Lang Lang appeared with the Vienna Philharmonic at Schönbrunn Palace with Zubin Mehta conducting. 
Download Wiener Philharmoniker Summer Night Concert Schönbrunn 2014.ts[720P] 9.97GB MediaInfo General ID : 8900 (0x22C4) Complete name : 3sat HD_Sommernachtskonzert Schonbrunn 2014.ts Format : MPEG-TS File Size : 9.97 GiB Duration : 1h 41mn Overall bit rate mode : Variable Overall bit rate : 14.1 Mbps Video ID : 6510 (0x196E) Menu ID : 11150 (0x2B8E) Format : AVC Format / Info : Advanced Video Codec Format Profile : High @L4.0 Format settings, CABAC : Yes Format settings, reframes : 6 frames Format settings, GOP : M = 4, N = 29 Codec ID : 27 Duration : 1h 41mn Bit rate : 12.3 Mbps Width : 1 280 pixels Height : 720 pixels Display Aspect Ratio : 16:9 Frame rate : 50.000 fps Color space : YUV Chroma subsampling : 4:2 : 0 Bit depth : 8 bits Scan type : Progressive Bits / (Pixel * Frame) : 0.266 Stream Size : 8.69 GiB (87%) Color primaries : BT.709 Transfer characteristics : BT.709 Matrix Coefficients : BT.709 Audio # 1 ID : 6520 (0x1978) Menu ID : 11150 (0x2B8E) Format : MPEG Audio Format version : Version 1 Format profile : Layer 2 Codec ID : 3 Duration : 1h 41mn Bit rate mode : Constant Bit rate : 256 Kbps Channel (s) : 2 channels Sampling rate : 48.0 KHz Compression mode : Lossy Delay relative to video : -510ms Stream Size: 186 MiB (2%) Language: German Audio # 2 ID : 6521 (0x1979) Menu ID : 11150 (0x2B8E) Format : MPEG Audio Format version : Version 1 Format profile : Layer 2 Codec ID : 3 Duration : 1h 41mn Bit rate mode : Constant Bit rate : 192 Kbps Channel (s) : 2 channels Sampling rate : 48.0 KHz Compression mode : Lossy Delay relative to video : -510ms Stream Size : 139 MiB (1%) Audio # 3 ID : 6522 (0x197A) Menu ID : 11150 (0x2B8E) Format : AC-3 Format / Info : Audio Coding 3 Mode extension : CM (Complete main) Format settings, endianness : Big Codec ID : 6 Duration : 1h 41mn Bit rate mode : Constant Bit rate : 448 Kbps Channel (s) : 5 channels Channel positions : Front: LCR, Side: LR Sampling rate : 48.0 KHz Bit depth : 16 bits Compression mode : Lossy Delay 
relative to video : -481ms Stream Size : 325 MiB (3%) Language : German Audio # 4 ID : 6523 (0x197B) Menu ID : 11150 (0x2B8E) Format : MPEG Audio Format version : Version 1 Format profile : Layer 2 Codec ID : 3 Duration : 1h 41mn Bit rate mode : Constant Bit rate : 192 Kbps Channel (s) : 2 channels Sampling rate : 48.0 KHz Compression mode : Lossy Delay relative to video : -550ms Stream Size : 139 MiB (1%) Language : Multiple languages ### Solve phpMyAdmin Wrong permissions on configuration file, should not be world writable! Introduction phpMyAdmin wants config.inc.php to not be world writable. That sounds simple, but if you run phpMyAdmin from an NTFS filesystem (in my case, a Linux server that mounts a Windows folder), it's not that simple. There are two ways to solve this problem: Simple Way: Linux / Unix Server, Linux / Unix Filesystem Just change the permissions of the file: $ sudo chmod 755 config.inc.php Linux or Windows Server, NTFS Filesystem If, like me, you run a Linux virtual machine that mounts a Windows folder where the phpMyAdmin sources are located, edit config.inc.php and add this line: $cfg['CheckConfigurationPermissions'] = false; This tells phpMyAdmin to stop checking file permissions, because they can't be properly detected. ### Set Multi-Core Browser Priority to Use the WebKit Rendering Engine Add the following code inside the <head> tag: <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> ### Command Line Split or Combine DMG File Split DMG File $ hdiutil segment -o <outputfile> -segmentCount <num> [options] <imagefile> $ hdiutil segment -o <outputfile> -segmentSize <size> [options] <imagefile> For example: $ hdiutil segment -o ~/Desktop/New -segmentSize 4000m ~/Desktop/My.dmg This will generate files limited to 4000 MB each, like this: New.dmg, New.002.dmgpart, New.003.dmgpart, ...
Size Parameter b - Byte k - KB m - MB g - GB Combine DMG File $ hdiutil convert ~/Desktop/New.dmg -format UDRO -o My.dmg If you are a loyal user of the GUI, you can use DMGConverter to complete the work. Of course, this tool has a lot of other functions. Wiki DMGConverter is a Disk Image (.dmg, .cdr, .iso) creation and conversion tool which is simple and easy to use. It is a very useful tool for the collective processing of more than one file and every folder, and it can deal with all the fundamental processing with the drag & drop of the file/folder. Download DMGConverter from MacUpdate Convert cdr to iso Format $ hdiutil makehybrid -iso -joliet -o My.iso My.cdr ### Accessing Apple HFS+ Volumes on Windows This is the Read-Only HFS+ driver that comes with Boot Camp 4. It can be installed on any Windows XP, Vista, 7, 8 or 8.1 machine without the need to install the Boot Camp package (and also easily removed if necessary). Install Apple HFS+ Driver Uninstall MacDrive/Paragon Copy AppleHFS.sys & AppleMNT.sys to C:\Windows\System32\drivers\ Merge Add_AppleHFS.reg Restart Remove Apple HFS+ Driver Browse to C:\Windows\System32\drivers\ Rename AppleHFS.sys & AppleMNT.sys (e.g. to AppleHFS.sys.bak & AppleMNT.sys.bak) so Windows no longer loads the driver Restart Browse to C:\Windows\System32\drivers\ Delete the renamed AppleHFS & AppleMNT files Merge Remove_AppleHFS.reg Note: x86 folder is for 32-bit Windows x86_64 folder is for 64-bit Windows Download Apple_HFS_Read_Only_Driver_v3.2.0.2.zip Description : Read-Only HFS+ driver from Boot Camp 4 (Build 4131) Size : 80,229 bytes SHA1 : 04316784e9dad5508c628df0a0a3fe6faef86aa0 Download AppleHFS_3.0.1.6_Installer-DDJ.zip Size : 75,349 bytes SHA1 : 85a829712d7329d01af5576cc4d66c8582db44b4 If you want to read and write Apple HFS+ partitions, you can install TransMac. Official Website www.acutesystems.com Features Open Mac format disk drives, flash drives, CD/DVD/Blu-ray media, dmg, dmgpart, sparsebundle and sparseimage files Copy files to Mac disks and dmg images.
Format for Mac and restore disks and flash drives.
Create, compress and expand (convert to iso) dmg files.
Built-in burner functionality to burn ISO and dmg files directly to CD/DVD/Blu-ray.
Read Mac multisession and hybrid CDs.

Download TransMac v10.4 installer.zip
Description : TransMac v10.4 Installer
Size : 1,716,654 bytes
SHA1 : d67b2bdf7d231e6820bddf9e8441f0e3d2954af5

Warning

For learning and exchange only, not for commercial use. If you like this software, please support it and buy a genuine copy, and delete the download within 24 hours.

### OS X PAC Global Automatic Proxy Settings

Go to System Preferences → Network → Advanced → Proxies, then fill in the PAC file URL in the "Automatic Proxy Configuration" field.

Wiki

A PAC file is plain text; in fact, it is a JavaScript file. The "Auto Switch Mode" of the Chrome/Chromium extension SwitchySharp essentially just creates and maintains a PAC file for you, but its functionality is relatively weak.

PAC file FindProxyForURL function

A PAC file must define a function: FindProxyForURL(url, host). The url parameter is the URL the user requests, and the host parameter is its host name. For example, for the URL http://www.twitter.com/, the host is www.twitter.com.

A simple PAC file looks like this:

function FindProxyForURL(url, host) {
  return "DIRECT";
}

This PAC file does nothing special to any URL: everything is DIRECT (a direct Internet connection).
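To try a PAC file before pointing the system at it, you can serve it from localhost; the proxy settings above accept any reachable URL. A minimal sketch (the file name and port here are my own choices, not from the original post):

```shell
# Write a minimal PAC file that sends every request DIRECT.
cat > proxy.pac <<'PAC'
function FindProxyForURL(url, host) {
  return "DIRECT";
}
PAC

# Serve the current directory over HTTP; the PAC URL to enter in the
# system proxy settings is then http://127.0.0.1:8000/proxy.pac
# (uncomment to run):
# python3 -m http.server 8000
```

Any static web server works; the PAC file only has to be fetchable by the browser or the OS.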
PAC file return value types

Besides DIRECT, there are two other common return values:

PROXY proxysample.com:8080
An HTTP proxy host and port; the host can also be given as an IP address.

SOCKS5 socks5sample.com:1080
A SOCKS5 proxy host and port; the host can also be given as an IP address.

So a PAC file that routes everything through an HTTP proxy would be written as:

function FindProxyForURL(url, host) {
  return "PROXY 127.0.0.1:8087";
}

You can even specify multiple proxies as a fallback list:

function FindProxyForURL(url, host) {
  return "DIRECT; PROXY 127.0.0.1:8087; SOCKS5 proxysample.com:8080";
}

This statement means: all URLs connect directly; if a direct connection fails, use the HTTP proxy 127.0.0.1:8087; if that still fails, use the SOCKS5 proxy proxysample.com:8080. The connection methods are tried in the order listed, which you can change according to your actual situation.

Maybe you know exactly which sites cannot be connected to directly and must go through PROXY or SOCKS5; you can then specify the proxy configuration per site:

function FindProxyForURL(url, host) {
  if (shExpMatch(url, "*.google.com/*")) {
    return "PROXY 127.0.0.1:8087";
  }
  if (shExpMatch(url, "*.wikipedia.com:*/*")) {
    return "SOCKS5 proxysample.com:8080";
  }
  if (isInNet(host, "10.0.0.0", "255.0.0.0")) {
    return "DIRECT";
  }
  return "DIRECT; PROXY 127.0.0.1:8087; SOCKS5 proxysample.com:8080";
}

This PAC file introduces two new functions, but from their names we can guess what the code does:

When the URL matches *.google.com/*, use the HTTP PROXY;
When the URL matches *.wikipedia.com:*/*, use the SOCKS5 proxy;
When the host is within the 10.0.0.0/255.0.0.0 subnet, connect directly;
If nothing matches, try DIRECT, PROXY and SOCKS5 in that order.

The shExpMatch function matches the URL or host against a pattern, similar to DOS wildcards.
For example, the *.google.com/* pattern used earlier matches any URL containing .google.com/. The SwitchySharp extension also provides a custom function for regular-expression matching, but I personally think complex regular expressions are rarely needed for matching URLs.

The isInNet function returns whether the requested host is within a specified subnet. Note that the second parameter of isInNet must be an IP address, not a host name, so host names need to be resolved to an IP first. For example: isInNet(host, dnsResolve("www.google.com"), "255.255.255.0").

### iSpring Suite - Convert PPT to Flash

Wiki

iSpring Suite is a rapid e-learning authoring toolkit for Microsoft Windows that allows creating Flash-based e-learning courses with audio and video narrations, branched and randomized quizzes and surveys, and interactions. It also offers the facility to add third-party Flash movies and protection settings. The software generates content compatible with SCORM/AICC and has options to publish courses specifically for the BlackBoard LMS. It can save a PowerPoint presentation as a single SWF file, EXE file, HTML page or ZIP archive.

Official Website www.ispringsolutions.com

Download iSpring Suite

iSpring_Suite_6.2.0_3432_x64_x86.zip ( Powered by Baidu )
Download Key: 3pva
Size : 200.3MB
CRC32 : 0B2F6DB9
MD5 : a3dfb2cb69766c4160b5e640020eb133
SHA : 474784c3c1ece7702ade79b69c156b1f08ef4624
SHA1 : 7ab55be314a1dc15b86135f3f8e2304d4c91cb72
Update : 5 May 2014

Warning

For learning and exchange only, not for commercial use. If you like this software, please support it and buy a genuine copy, and delete the download within 24 hours.

### Print2Flash - Print & Publish on the Web

Wiki

Print2Flash lets you easily convert any printable document (e.g. a Microsoft Word, Excel, or PowerPoint document) to an Adobe® Flash® file (a file with the SWF extension).
This file can be shared with users who don't have the software that created the original document (for example, Microsoft Word). You can view such files with Adobe® Flash® Player, which is available across a number of operating systems. These files can also be easily published on the Web for fast and easy access from anywhere across the globe.

Official Website print2flash.com

Download Print2Flash.zip ( Powered by Baidu )
Download Key: 8wsv
Size : 7.2MB
CRC32 : D544D699
MD5 : 96b635709fd2e973492ce756bdabdf44
SHA : d18ea522bdf61b5127116f279ce070ad060fdb33
SHA1 : f400ee1b43d269cd313bd1a8a189fb3b43382f63
Update : 5 May 2014

Warning

For learning and exchange only, not for commercial use. If you like this software, please support it and buy a genuine copy, and delete the download within 24 hours.

### TISSOT T064.210.22.016.00

TISSOT CERA T064.210.22.016.00

Cera is a girls' name meaning "fiery ones". That makes it the perfect tag for a watch made from ceramic, a material fired at very high temperatures to ensure its finish and robustness. The Tissot Cera is a black and white line reminiscent of ebony-and-ivory piano keys, with sultry black and innocent white elements. Ceramic bezels and bracelet links surround a choice of square or round dial with an optical-illusion effect. The watches are very comfortable to wear and make a confident statement on their wearers' behalf – in this case, the choice really is black and white.
TECHNICAL SPECIFICATIONS

Size : Lady
Case material : Stainless Steel, Ceramic
Water resistance : 3 bar (30 m / 100 ft)
Glass material : Sapphire Crystal
Dial color : White
Dial type : Index Diamonds
Bracelet material : Stainless Steel, Ceramic
Bracelet color : Grey, white
Movement caliber : F03.111
Movement type : Quartz
Battery model : Renata 364
Weight : 84g
Width : 28mm
Thickness : 6.4mm
Height : 28mm

Diamond
Color : Top Wesselton
Clarity : VS/SI
Cut : 8/8 single cut
Carat : 0.036 carat
Quantity : 8 diamonds

TISSOT Official Website www.tissot.ch

The Brand

Tissot, with its signature "Innovators by Tradition", has been pioneering craftsmanship and innovation since its foundation in 1853. Today Tissot is a member of the Swatch Group, the world's largest watch producer and distributor. The company has always had its home in the Swiss watchmaking town of Le Locle in the Jura Mountains but now also has a presence in 160 countries. Tissot's innovation leadership is enabled by the development of high-tech products, special materials and advanced functionality. With a broader, more versatile range of high-quality timepieces at an attractive price than any other Swiss watch brand, Tissot also expresses its commitment to making excellence accessible. As official timekeeper and partner of FIBA, AFL, MotoGP™, FIM Superbike and the World Championships of cycling, fencing and ice hockey, Tissot is committed to respecting tradition, underlining its core values of performance, precision and setting new standards.
### Common Social Sharing URL Link Address

Facebook
<a href="https://www.facebook.com/sharer.php?u=" title="Share on Facebook" target="_blank"></a>

Twitter
<a href="http://twitter.com/share?url=" title="Share on Twitter" target="_blank"></a>

Google+
<a href="https://plus.google.com/share?url=" title="Share on Google+" target="_blank"></a>

Digg
<a href="http://www.digg.com/submit?url=" title="Share on Digg" target="_blank"></a>

Reddit
<a href="http://reddit.com/submit?url=" title="Share on Reddit" target="_blank"></a>

LinkedIn
<a href="http://www.linkedin.com/shareArticle?mini=true&url=" title="Share on LinkedIn" target="_blank"></a>

Pinterest
<a href="javascript:void((function(){var e=document.createElement('script');e.setAttribute('type','text/javascript');e.setAttribute('charset','UTF-8');e.setAttribute('src','http://assets.pinterest.com/js/pinmarklet.js?r='+Math.random()*99999999);document.body.appendChild(e)})());" title="Share on Pinterest" target="_blank"></a>

StumbleUpon
<a href="http://www.stumbleupon.com/submit?url=" title="Share on StumbleUpon" target="_blank"></a>

Flattr
<a href="https://flattr.com/submit/auto?user_id=" title="Share on Flattr" target="_blank"></a>

Buffer
<a href="https://bufferapp.com/add?url=" title="Share on Buffer" target="_blank"></a>

Tumblr
<a href="http://www.tumblr.com/share/link?url=" title="Share on Tumblr" target="_blank"></a>

Sina Weibo
<a href="http://service.weibo.com/share/share.php?appkey=&url=" title="Share on Sina Weibo" target="_blank"></a>

Tencent Microblog
<a href="http://v.t.qq.com/share/share.php?url=" title="Share on Tencent Microblog" target="_blank"></a>

QZone
<a href="http://sns.qzone.qq.com/cgi-bin/qzshare/cgi_qzshare_onekey?url=" title="Share on QZone" target="_blank"></a>

China Kaixin
<a href="http://www.kaixin001.com/repaste/share.php?rurl=" title="Share on China Kaixin" target="_blank"></a>

China Renren
<a href="http://share.renren.com/share/buttonshare.do?link=" title="Share on China Renren"
target="_blank"></a>

### Clean up OS X Hard Disk Space Command Line

Disable Safe Sleep sleep mode - can save 4GB-16GB of space

The following command turns off OS X's native Safe Sleep function. In this sleep mode the Mac stores the contents of memory in the sleepimage file on the hard disk before sleeping. The sleepimage file is as large as the Mac's memory, which means that if your memory is 4GB the file will be 4GB, and if it is 16GB the file will be 16GB. Disabling Safe Sleep stops the system from creating this file automatically; the disadvantage is that when the Mac's battery runs out, it cannot restore the previous session. However, OS X's auto-save feature can still save your work when the battery is about to be depleted.

To disable the Safe Sleep function, enter the following command in a terminal:

$ sudo pmset -a hibernatemode 0

Then navigate to /private/var/vm/ to delete the sleepimage file that already exists:

$ cd /private/var/vm/

Use the following command to delete the file:

$ sudo rm sleepimage

Finally, to prevent OS X from recreating the file, generate an empty file that cannot be replaced:

$ sudo touch sleepimage
$ sudo chmod 000 /private/var/vm/sleepimage

Of course, if you want to re-enable the Safe Sleep feature, just run the commands below:

$ sudo pmset -a hibernatemode 3
$ sudo rm /private/var/vm/sleepimage

Remove the system voice files - can save 500MB-3GB+ of hard disk space

If you do not use the text-to-speech feature, you certainly won't need the voice files built into OS X. You can delete these files to regain disk space. In the Terminal application, first use the following command to locate the folder containing the files:

$ cd /System/Library/Speech/

Then delete all the voice files:

$ sudo rm -rf Voices/*

After executing this command you will no longer be able to use the system's text-to-speech function.
Delete all system logs - can save 100MB-2GB of hard disk space

The longer you use your Mac, the more system log files accumulate; depending on how many errors and services the computer produces, these files keep growing. The system log files are used for debugging and troubleshooting; if you feel they are of no use, you can remove them with the following command:

$ sudo rm -rf /private/var/log/*

The system will continue to generate log files, so you can run this command regularly.

Delete cache files generated by Quick Look previews - can save 100MB-300MB of hard disk space

Quick Look is the built-in OS X file preview function: after selecting any file in the Finder you can press the space bar to see the details of the file. But the Quick Look feature relies on caching to stay smooth, and these cache files keep growing; remove the cache with the following command:

$ sudo rm -rf /private/var/folders/

Delete temporary files - can save 500MB-5GB of hard disk space

/private/var/tmp/ is a system cache folder that is normally cleared when the system reboots, but sometimes it is not. And if you do not shut down or reboot your Mac for a long time, the cache files there keep accumulating. Use the following commands to clear these temporary files:

$ cd /private/var/tmp/
$ rm -rf TM*

Clear the cache files - can save 1GB-10GB of hard disk space

There are many kinds of cache files, such as web browsing history, application metadata and so on. The size of the cache files depends on how heavily applications are used, and also on how often the Mac is restarted. In addition, many online music player apps generate a lot of cache files. We can delete these cache files with the following commands:

$ cd ~/Library/Caches/
$ rm -rf ~/Library/Caches/*

Note that some of these methods carry some risk, so please be careful when using them.
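Before deleting any of these directories, it is worth checking how much space they actually occupy, so you know whether the cleanup is worthwhile. A small sketch (the example paths are just illustrations; substitute the directories from this post):

```shell
# Print the size, in kilobytes, of each existing directory given as an
# argument, so you can see what a cleanup would actually reclaim.
dir_kb() {
  for d in "$@"; do
    [ -d "$d" ] || continue             # skip paths that do not exist
    du -sk "$d" 2>/dev/null | awk '{print $2 ": " $1 " KB"}'
  done
  return 0
}

# Example: the log and cache directories mentioned in this post.
dir_kb /private/var/log ~/Library/Caches
```

Running it before and after one of the rm commands shows exactly how much space was freed.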
### Resolve Homebrew Error: SHA256 mismatch

Today I got Error: SHA256 mismatch when updating php:

==> Upgrading php55
==> Downloading http://www.php.net/get/php-5.5.11.tar.bz2/from/this/mirror
######################################################################## 100.0%
Error: SHA256 mismatch
Expected: 60e14c255f2a461a7a26639b84a2fc448cc2f91c8dead0e9fd00cd8ba27a2e96
Actual: be28ddc5b7276c3b39d645d2df09ab44f07ec9103c77c5a9b710be30bfc3c9fc
Archive: /Library/Caches/Homebrew/php55-5.5.11
To retry an incomplete download, remove the file above.

The download link appears to be broken. Edit the download URL to use http://uk1.php.net instead of http://www.php.net in this file (around line 60):

/usr/local/Library/Taps/josegonzalez/homebrew-php/Formula/abstract-php-version.rb

Modify it like this:

module Php55Defs
  PHP_SRC_TARBALL = 'http://uk1.php.net/get/php-5.5.11.tar.bz2/from/this/mirror'
  PHP_GITHUB_URL = 'https://github.com/php/php-src.git'
  PHP_VERSION = '5.5.11'
  PHP_BRANCH = 'PHP-5.5'
  PHP_CHECKSUM = {
    :md5 => 'c8c9ad960bae0265054b5879423f7a75',
    :sha1 => '58d660e5455b3f2abdfe3205182ffe3a58b5d1cd',
    :sha256 => '60e14c255f2a461a7a26639b84a2fc448cc2f91c8dead0e9fd00cd8ba27a2e96',
  }
end

### Change MySQL root Password in OS X or Linux

If you have forgotten the MySQL root password, can't remember it or want to break in... you can reset the MySQL database password from the command line in either OS X or Linux, as long as you know the root user password of the system you are on.

Stop MySQL

$ sudo /usr/local/mysql/support-files/mysql.server stop

Start it in safe mode

$ sudo mysqld_safe --skip-grant-tables

This command keeps running until the process is finished, so open another shell/terminal window and log in without a password:

$ mysql -u root

UPDATE mysql.user SET authentication_string=PASSWORD('password') WHERE User='root';

Change the lowercase password to the password you want – with single quotes.
FLUSH PRIVILEGES;
\q

Start MySQL

$ sudo /usr/local/mysql/support-files/mysql.server start

### Removing MySQL From OS X

$ sudo rm /usr/local/mysql
$ sudo rm -rf /usr/local/mysql*
$ sudo rm -rf /Library/StartupItems/MySQLCOM
$ sudo rm -rf /Library/PreferencePanes/My*
$ rm -rf ~/Library/PreferencePanes/My*
$ sudo rm -rf /Library/Receipts/mysql*
$ sudo rm -rf /Library/Receipts/MySQL*
$ sudo rm -rf /var/db/receipts/com.mysql.*

# Edit the following file, removing the line `MYSQLCOM=-YES-`.
# You may need sudo for write privileges to edit the file.
# TIP: when using vim, use `dd` to delete the line and then `:wq` to save
# the file.
$ sudo vim /etc/hostconfig # remove the line MYSQLCOM=-YES-

### Gentoo Linux Quick Install Guide

Wiki

Gentoo Linux is a computer operating system based on the Linux kernel and built using the Portage package management system. It is distributed as free and open source software. Unlike a binary software distribution, the source code is compiled locally according to the user's preferences and is often optimized for the specific type of computer. Precompiled binaries are available for some very large packages and for packages whose source code has not been released. The "Gentoo" name comes from the fast-swimming Gentoo penguin. It was chosen to reflect the potential speed improvements of machine-specific optimization. Gentoo package management is designed to be modular, portable, easy to maintain, and flexible. Gentoo is sometimes described as a meta-distribution, "because of its near-unlimited adaptability", in that the majority of users have configurations and sets of installed programs which are unique to themselves.

Gentoo Linux Official Website www.gentoo.org

Configuring the system parameters, the kernel, and the GRUB boot loader by hand, in plain text, enables personal customization and gave me a deeper understanding of how Linux is installed.
The built-in emerge installation tool is very similar to apt-get, yum and other automated installation tools of other Linux families: it automatically finds sources and their dependencies online and compiles them fully automatically, eliminating the various problems novices run into when compiling and installing by hand. All files in the system root directory and all system software can be downloaded online, which makes customization more convenient.

Environment

VMware Fusion Professional Version 6.0.3 (1747349)

Part 1 Preparation

Download the desired Gentoo boot image and system file archives:

install-amd64-minimal-20140403.iso
stage3-amd64-20140403.tar.bz2
portage-latest.tar.bz2

Download From Sohu (China) Mirror

http://mirror.sohu.com/gentoo/releases/amd64/current-iso/
http://mirror.sohu.com/gentoo/releases/snapshots/current/
http://mirror.sohu.com/gentoo/releases/amd64/current-iso/

Configure VMware Virtual Machine Boot System Image

Select Other Linux 3.x kernel

Part 2 Installation System

Enter the system boot interface and press Enter:

gentoo dopcmcia

Into the Livecd Temporary Boot

In this temporary system we can do the basic configuration for the system we want to install, such as setting up a simple network connection, partitioning the disk, compiling the kernel and drivers, producing the system boot loader, etc.

Configure Root Account

livecd ~ # passwd root

Open SSH Service

livecd ~ # /etc/init.d/sshd start

Check Network Configuration

livecd ~ # ifconfig

Because I am installing Gentoo in a VMware virtual machine, the network interface name differs from the common eth0; please take note.

SSH Login Shell

Last login: Fri Apr 25 18:07:24 on ttys000
xuris-MacBook-Pro:~ xuri$ ssh -l root 172.16.184.146
Password:

Welcome to the Gentoo Linux Minimal Installation CD!

The root password on this system has been auto-scrambled for security.

If any ethernet adapters were detected at boot, they should be auto-configured if DHCP is available on your network.
Type "net-setup eth0" to specify eth0 IP address settings by hand.

Check /etc/kernels/kernel-config-* for kernel configuration(s).

The latest version of the Handbook is always available from the Gentoo web site by typing "links http://www.gentoo.org/doc/en/handbook/handbook.xml".

To start an ssh server on this system, type "/etc/init.d/sshd start". If you need to log in remotely as root, type "passwd root" to reset root's password to a known value.

Please report any bugs you find to http://bugs.gentoo.org. Be sure to include detailed information about how to reproduce the bug you are reporting.

Thank you for using Gentoo Linux!

livecd ~ #

Disk Partition

Partition layout of the 20GB virtual machine hard disk:

/dev/sda1 /boot 200M
/dev/sda2 swap 1024M
/dev/sda3 / remaining space

Use fdisk to Partition

livecd ~ # fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

livecd ~ # fdisk /dev/sda

Welcome to fdisk (util-linux 2.22.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x0673cff2.
Command (m for help): p

Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0673cff2

Device Boot Start End Blocks Id System

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): +200M
Partition 1 of type Linux and of size 200 MiB is set

Command (m for help): n
Partition type:
p primary (1 primary, 0 extended, 3 free)
e extended
Select (default p): p
Partition number (1-4, default 2): 2
First sector (411648-41943039, default 411648):
Using default value 411648
Last sector, +sectors or +size{K,M,G} (411648-41943039, default 41943039): +1024M
Partition 2 of type Linux and of size 1 GiB is set

Command (m for help): n
Partition type:
p primary (2 primary, 0 extended, 2 free)
e extended
Select (default p): p
Partition number (1-4, default 3): 3
First sector (2508800-41943039, default 2508800):
Using default value 2508800
Last sector, +sectors or +size{K,M,G} (2508800-41943039, default 41943039):
Using default value 41943039
Partition 3 of type Linux and of size 18.8 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
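For reference, the interactive fdisk dialogue above can also be replayed non-interactively with sfdisk (part of util-linux, like fdisk). This is a sketch of the equivalent layout, not part of the original guide; the device path is an assumption:

```shell
# The 200M boot / 1024M swap / rest-of-disk layout from the fdisk
# session above, expressed as an sfdisk script. DISK is an assumption:
# point it at /dev/sda on the real machine. It defaults to a scratch
# image file here so a dry run does not touch any real disk.
DISK=${DISK:-./scratch.img}
# Create a ~21G sparse image file if DISK is not a real block device.
[ -b "$DISK" ] || dd if=/dev/zero of="$DISK" bs=1 count=0 seek=21G 2>/dev/null

sfdisk "$DISK" <<'EOF'
,200M,L
,1024M,L
,,L
EOF
```

Writing a partition table is destructive, so double-check the target device before pointing DISK at real hardware.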
Reload Partition Table

livecd ~ # partx -a /dev/sda

Format as ext4 File System

livecd ~ # mkfs.ext4 /dev/sda1
mke2fs 1.42.7 (21-Jan-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
51200 inodes, 204800 blocks
10240 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
25 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

livecd ~ # mkfs.ext4 /dev/sda3
mke2fs 1.42.7 (21-Jan-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1234576 inodes, 4929280 blocks
246464 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
151 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Format & Activate the Swap Partition

livecd ~ # mkswap /dev/sda2
Setting up swapspace version 1, size = 1048572 KiB
no label, UUID=a0f1720d-ffdc-4ed9-aaf2-19fa6601a692
livecd ~ # swapon /dev/sda2
livecd ~ # swapon -s
Filename Type Size Used Priority
/dev/sda2 partition 1048572 0 -1

Create System Temporary Mount Points

livecd ~ # mount /dev/sda3 /mnt/gentoo
livecd ~ # mkdir /mnt/gentoo/boot
livecd ~ # mount /dev/sda1 /mnt/gentoo/boot

Set System Time

livecd ~ # date 042518482014
Fri Apr 25 18:48:00 UTC 2014

Download the System Compiler Packages to the /mnt/gentoo Directory

Here I upload them from the host with SCP.
xuris-MacBook-Pro:Downloads xuri$ scp /Volumes/Western\ Digital/Downloads/portage-latest.tar.bz2 root@172.16.184.146:/mnt/gentoo
Password:
portage-latest.tar.bz2 100% 66MB 22.2MB/s 00:03
xuris-MacBook-Pro:Downloads xuri$ scp /Volumes/Western\ Digital/Downloads/stage3-amd64-20140403.tar.bz2 root@172.16.184.146:/mnt/gentoo
Password:
stage3-amd64-20140403.tar.bz2 100% 177MB 19.6MB/s 00:09

After Download

livecd ~ # cd /mnt/gentoo
livecd gentoo # ls
boot lost+found portage-latest.tar.bz2 stage3-amd64-20140403.tar.bz2

Unzip the Files

livecd ~ # tar jxvf stage3-amd64-20140403.tar.bz2
livecd ~ # tar jxvf portage-latest.tar.bz2 -C /mnt/gentoo/usr

Switch to the /dev/sda3 Root Partition & Update System Environment Variables

livecd gentoo # mount -t proc none /mnt/gentoo/proc
livecd gentoo # mount -o bind /dev /mnt/gentoo/dev
livecd gentoo # chroot /mnt/gentoo /bin/bash
livecd / # env-update
>>> Regenerating /etc/ld.so.cache...
livecd / # source /etc/profile
livecd / # export PS1="(chroot) $PS1"

Set Timezone

(chroot) livecd / # cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Set Hostname

(chroot) livecd / # sed -i -e's/hostname.*/hostname="gentoo.example.com"/' /etc/conf.d/hostname
(chroot) livecd / # echo "127.0.0.1 gentoo.example.com localhost" > /etc/hosts

Set DNS Server

(chroot) livecd / # echo "nameserver 172.16.184.2" >> /etc/resolv.conf

Kernel Configuration

Install the Kernel Sources

(chroot) livecd / # emerge gentoo-sources

Install the Automatic Kernel Compile Tool

(chroot) livecd / # emerge genkernel

Copy the installation disc's kernel configuration file to the default location where genkernel searches for configuration files:

(chroot) livecd / # zcat /proc/config.gz > /usr/share/genkernel/arch/x86_64/kernel-config

Run genkernel

(chroot) livecd / # cd /usr/src
(chroot) livecd src # mv linux linux.bak
(chroot) livecd src # ln -s linux-3.12.13-gentoo linux
(chroot) livecd src # genkernel all

If it compiles successfully, you will see info like this ...
* >> Appending base_layout cpio data...
* >> Appending auxilary cpio data...
* >> Copying keymaps
* >> Appending busybox cpio data...
* >> Appending modules cpio data...
* >> Appending blkid cpio data...
* >> Appending modprobed cpio data...
* >> Appending linker cpio data...
* >> Finalizing cpio...
* >> Compressing cpio data (.xz)...
*
* Kernel compiled successfully!
*
* Required Kernel Parameters:
* root=/dev/$ROOT
*
* Where $ROOT is the device node for your root partition as the
* one specified in /etc/fstab
*
* tell your bootloader to use the provided INITRAMFS file.
* WARNING... WARNING... WARNING...
* Additional kernel cmdline arguments that *may* be required to boot properly...
* With support for several ext* filesystems available, it may be needed to
* add "rootfstype=ext3" or "rootfstype=ext4" to the list of boot parameters.
* Do NOT report kernel bugs as genkernel bugs unless your bug
* is about the default genkernel configuration...
*
* Make sure you have the latest ~arch genkernel before reporting bugs.
(chroot) livecd src #

Check Kernel & initrd Version

(chroot) livecd src # ls /boot/kernel* /boot/initramfs*
/boot/initramfs-genkernel-x86_64-3.12.13-gentoo
/boot/kernel-genkernel-x86_64-3.12.13-gentoo

Install the Vim Editor

(chroot) livecd src # emerge vim

Modify the fstab File

(chroot) livecd src # vim /etc/fstab

Add the following:

/dev/sda1 /boot ext4 defaults 1 2
/dev/sda2 swap swap defaults 0 0
/dev/sda3 / ext4 defaults 1 1

Configure Network

Because I am installing Gentoo in a VMware virtual machine, the network interface name differs from the common eth0; please take note.
(chroot) livecd src # vi /etc/conf.d/net

config_eno16777736="172.16.184.146"
routes_eno16777736="default via 172.16.184.1"

(chroot) livecd src # ln -s /etc/init.d/net.lo /etc/init.d/net.eno16777736

Load the Network Port at Boot

(chroot) livecd src # rc-update add net.eno16777736 default
* service net.eno16777736 added to runlevel default

Configure DNS

(chroot) livecd src # echo "nameserver 172.16.184.2" >> /etc/resolv.conf

Load the sshd Service at Boot

(chroot) livecd src # rc-update add sshd default
* service sshd added to runlevel default

Set the Root Password of the Newly Installed Gentoo System

(chroot) livecd src # passwd root
New password:
BAD PASSWORD: it is based on a dictionary word
Retype new password:
passwd: password updated successfully

Configure Timezone

(chroot) livecd src # vi /etc/conf.d/clock

Add the following:

TIMEZONE = "Asia/Shanghai"

Install Common System Tools

System log management:

(chroot) livecd src # emerge syslog-ng

Log rotation tool:

(chroot) livecd src # emerge logrotate

Scheduled task service:

(chroot) livecd src # emerge vixie-cron

DHCP:

(chroot) livecd src # emerge dhcpcd

Quick file index:

(chroot) livecd src # emerge mlocate

Start Services

(chroot) livecd src # rc-update add syslog-ng default
* service syslog-ng added to runlevel default
(chroot) livecd src # rc-update add vixie-cron default
* service vixie-cron added to runlevel default

Set the crontab Default Editor

(chroot) livecd src # echo "export EDITOR='/usr/bin/vim'" >> /etc/profile

Configure the GRUB Boot Loader

Install GRUB

(chroot) livecd src # emerge grub

Configure GRUB

(chroot) livecd src # grub2-install --no-floppy /dev/sda
Installation finished. No error reported.
(chroot) livecd src # grub2-mkconfig -o /boot/grub/grub.cfg
Generating grub.cfg ...
Found linux image: /boot/kernel-genkernel-x86_64-3.12.13-gentoo
Found initrd image: /boot/initramfs-genkernel-x86_64-3.12.13-gentoo
done
(chroot) livecd src # reboot

After Install

Remove the stage3 tarball:

gentoo ~ # rm /stage3-*.tar.bz2*

Remove the Portage snapshot files:

gentoo ~ # rm /portage-latest.tar.bz2*

### OS X Localhost Mail Server Settings

Wiki

Postfix is a free and open-source mail transfer agent (MTA) that routes and delivers electronic mail, intended as an alternative to the widely used Sendmail MTA. Postfix is released under the IBM Public License 1.0, which is a free software licence. Originally written in 1997 by Wietse Venema at the IBM Thomas J. Watson Research Center and first released in December 1998, Postfix continues as of 2014 to be actively developed by its creator and other contributors. The software is also known by its former names VMailer and IBM Secure Mailer.

Postfix Official Website www.postfix.org

Modify the main PHP configuration file php.ini to enable the PHP mail() function for sending mail on Unix-like systems:

$ sudo vim /usr/local/etc/php/5.5/php.ini

Modify:

; For Unix only. You may supply arguments as well (default: "sendmail -t -i").
; http://php.net/sendmail-path
;sendmail_path =

... to:

; For Unix only. You may supply arguments as well (default: "sendmail -t -i").
; http://php.net/sendmail-path
sendmail_path = "env -i /usr/sbin/sendmail -t -i"

Then restart the Apache or Nginx server.

Open the Mail Service

Sending e-mail is mainly done by a mail transfer agent (MTA); common MTAs include Postfix, Sendmail, etc. OS X comes with Postfix pre-installed, so we do not need to install it; we just start it:

$ sudo postfix start
postfix/postfix-script: starting the Postfix mail system

Check Postfix Running Status (whether the process was created)

$ ps -ef | grep postfix
0 3131 1 0 1:15PM ??
0:00.02 /usr/libexec/postfix/master
501 3268 1828 0 1:24PM ttys000 0:00.00 grep postfix

Check Postfix Install Path

$ which postfix
/usr/sbin/postfix

Test Sending Mail

$ date | mail -s testing name[at]example.com

View the Send Queue

$ mailq
Mail queue is empty

Open your inbox to check whether there is a message with the title "testing".

PHP mail() Function Test

<?php
$send = mail('name[at]example.com', 'My Subject', 'The test mail');
if ($send) {
    echo 'true';
} else {
    echo 'false';
}

### Install Apache Tomcat on Ubuntu Server

Apache Tomcat

Apache Tomcat (or simply Tomcat, formerly also Jakarta Tomcat) is an open source web server and servlet container developed by the Apache Software Foundation (ASF). Tomcat implements the Java Servlet and the JavaServer Pages (JSP) specifications from Sun Microsystems, and provides a "pure Java" HTTP web server environment for Java code to run in. In the simplest configuration Tomcat runs in a single operating system process. The process runs a Java virtual machine (JVM). Every single HTTP request from a browser to Tomcat is processed in the Tomcat process in a separate thread. Apache Tomcat includes tools for configuration and management, but can also be configured by editing XML configuration files.

OpenJDK

OpenJDK (Open Java Development Kit) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). It is the result of an effort Sun Microsystems began in 2006. The implementation is licensed under the GNU General Public License (GNU GPL) with a linking exception. Were it not for the GPL linking exception, components that linked to the Java class library would be subject to the terms of the GPL license. OpenJDK is the official Java SE 7 reference implementation.
Apache Tomcat Official Website tomcat.apache.org OpenJDK Official Website openjdk.java.net Install OpenJDK $ sudo apt-get update $ sudo apt-get install openjdk-7-jre Verify Installation $ java -version java version "1.7.0_51" OpenJDK Runtime Environment (IcedTea 2.4.4) (7u51-2.4.4-0ubuntu0.12.04.2) OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode) Set JAVA_HOME Environment Variable Add the following line to ~/.bashrc export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/ Download & Install Apache Tomcat $ wget http://apache.tradebit.com/pub/tomcat/tomcat-8/v8.0.5/bin/apache-tomcat-8.0.5.tar.gz $ sudo tar xvzf apache-tomcat-8.0.5.tar.gz $ sudo mv apache-tomcat-8.0.5 /usr/local/tomcat $ sudo chmod -R 755 /usr/local/tomcat Set Apache Tomcat to Run Automatically at System Startup $ sudo vim /etc/init.d/tomcat Content # Tomcat server auto-start script export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64/ case $1 in start) sh /usr/local/tomcat/bin/startup.sh ;; stop) sh /usr/local/tomcat/bin/shutdown.sh ;; restart) sh /usr/local/tomcat/bin/shutdown.sh sh /usr/local/tomcat/bin/startup.sh ;; esac exit 0 Set Executable Permissions $ sudo chmod 755 /etc/init.d/tomcat $ sudo ln -s /etc/init.d/tomcat /etc/rc1.d/K99tomcat $ sudo ln -s /etc/init.d/tomcat /etc/rc2.d/S99tomcat Start Apache Tomcat $ sudo sh /usr/local/tomcat/bin/startup.sh Using CATALINA_BASE: /usr/local/tomcat Using CATALINA_HOME: /usr/local/tomcat Using CATALINA_TMPDIR: /usr/local/tomcat/temp Using JRE_HOME: /usr Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar Tomcat started. Apache Tomcat Administrator Account Settings $ sudo vim /usr/local/tomcat/conf/tomcat-users.xml Add the following code before the closing </tomcat-users> tag <user username="username" password="password" roles="manager-gui"/> Restart the Apache Tomcat Service $ sudo service tomcat stop $ sudo service tomcat start Once that runs, Tomcat is up and ready on port 8080.
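Once Tomcat is started, you can check it from the command line as well as from a browser. A minimal sketch, assuming Tomcat is listening on the default port 8080 on localhost (the helper function and its messages are illustrative, not part of Tomcat):

```shell
# interpret_status: map curl's HTTP status code to a readable health result.
interpret_status() {
  case "$1" in
    200) echo "Tomcat is up" ;;
    000) echo "Tomcat not reachable" ;;
    *)   echo "unexpected status: $1" ;;
  esac
}

# Ask curl for just the HTTP status code of the default ROOT application.
status=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8080/ 2>/dev/null || true)
interpret_status "${status:-000}"
```

A 200 here means the default welcome page is being served; anything else means the service did not come up or another process holds the port.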
You can visually verify that Tomcat is working by accessing your server page at your_ip_address:8080. It should look like this. ### Benchmarking MySQL with JMeter Wiki Apache JMeter may be used to test performance both on static and dynamic resources (Files, Web dynamic languages - PHP, Java, ASP.NET, etc. -, Java Objects, Data Bases and Queries, FTP Servers and more). It can be used to simulate a heavy load on a server, group of servers, network or object to test its strength or to analyze overall performance under different load types. You can use it to make a graphical analysis of performance or to test your server/script/object behavior under heavy concurrent load. Official Website jmeter.apache.org Download Last Releases MySQL Connectors JDBC Driver for MySQL (Connector/J) www.mysql.com/products/connector Environment Operating System : OS X 10.9.2 Mavericks Apache JMeter Version : 2.11 MySQL Version : 5.7.3 JDBC Driver for MySQL : mysql-connector-java-5.1.30 Java Version : 1.8.0_20-ea Java(TM) SE Runtime Environment (build 1.8.0_20-ea-b05) Java HotSpot(TM) 64-Bit Server VM (build 25.20-b05, mixed mode) Install Apache JMeter Download and unzip apache-jmeter-2.11_src.zip. Install the JDBC Driver for MySQL In order for JMeter to access a MySQL database as a client, it needs a JDBC database driver. For MySQL this is called MySQL Connector/J and is available at http://www.mysql.com/products/connector/j/. Download Connector/J and unzip the file. This will produce a directory containing the Connector/J files. In this directory you will find the connector jar file (mysql-connector-java-version-ga-bin.jar). Copy this file to the JMeter lib directory. Open apache-jmeter-2.11/bin/ApacheJMeter.jar, File → Templates... → Select Template [JDBC Load Test] → Create. Configure the JDBC Sampler Start JMeter and under the test plan node create a Thread Group, and under that create a JDBC Request Sampler. For MySQL the Driver Class should be com.mysql.jdbc.Driver. 
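The "copy the connector jar into the JMeter lib directory" step above can be sketched as follows; the directory and jar names are the versions used in this post (substitute your own), and the `touch` stands in for the real downloaded jar:

```shell
# Paths from this post; adjust to your unpacked locations.
JMETER_HOME=apache-jmeter-2.11
CONNECTOR_JAR=mysql-connector-java-5.1.30-bin.jar

mkdir -p "$JMETER_HOME/lib"
touch "$CONNECTOR_JAR"          # stand-in for the real Connector/J jar
cp "$CONNECTOR_JAR" "$JMETER_HOME/lib/"

# The driver must be in lib/ before JMeter starts, or the JDBC
# sampler cannot load com.mysql.jdbc.Driver.
ls "$JMETER_HOME/lib"
```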
The JDBC URL is of this form: jdbc:mysql://server/databaseName?user=username&password=password The SQL Query String is the SQL query you want to benchmark. Start with something simple: create a table in the test database and execute a desc query against it. Add JDBC Request Open Thread Group → JDBC Request → SQL Query select * from rl_user where username like '%xuri%' Add Assertion Right-click on the Thread Group → Assertions → Response Assertion → Apply to [Main sample and sub-samples] → Patterns to Test [Add] xuri Add Listener Right-click on the Thread Group → Listener → Graph Results Right-click on the Thread Group → Listener → Summary Report Set the Number of Threads (users), Ramp-Up Period (in seconds) and Loop Count in the Thread Group. Benchmark Click on the menu bar Run → Start Here are the results ### Hexo - Node.js Blog Framework Wiki hexo - A fast, simple & powerful blog framework, powered by Node.js. Official Website hexo.io Prepare the environment Install Node Visit the Node.js official website, download the latest version for your platform, and run the installer. I use node-v0.10.26.pkg. Git Configuration & GitHub Account Registration First, configure Git with the user name and e-mail address of the account you created on GitHub. If you don't have an account yet, register on GitHub first. Then create a repository tied to your username; the repository name must be your_user_name.github.com. Add your SSH public key under "Account settings → SSH Keys → Add SSH Key". $ git config --global user.name "Your Name Here" # Sets the default name for git to use when you commit $ git config --global user.email "your_email@youremail.com" # Sets the default email for git to use when you commit Then check whether the system already has an SSH key $ cd ~/.ssh # Checks to see if there is a directory named ".ssh" in your user directory If you see "No such file or directory", there is no existing key, so you can generate a new one directly.
If so, you can back up your key and then remove the old one. $ ls # Lists all the subdirectories in the current directory # config id_rsa id_rsa.pub known_hosts $ mkdir key_backup # Makes a subdirectory called "key_backup" in the current directory $ cp id_rsa* key_backup # Copies the id_rsa keypair into key_backup $ rm id_rsa* # Deletes the id_rsa keypair Then generate a new SSH key. When prompted to "Enter a file in which to save the key", you can just press Enter to keep the default, namely a path under your own home directory (in the paths below, replace "you" with your account name, and likewise further on). Use the same e-mail address as your GitHub account: $ ssh-keygen -t rsa -C "your_email@youremail.com" # Creates a new ssh key using the provided email # Generating public/private rsa key pair. # Enter file in which to save the key (/home/you/.ssh/id_rsa): After pressing Enter, you will be prompted to enter a password and confirm it: Enter passphrase (empty for no passphrase): [Type a passphrase] # Enter same passphrase again: [Type passphrase again] You will then see a message similar to the following, showing that key generation succeeded Your identification has been saved in /home/you/.ssh/id_rsa. # Your public key has been saved in /home/you/.ssh/id_rsa.pub. # The key fingerprint is: # 01:0f:f4:3b:ca:85:d6:17:a1:7d:f0:68:9d:f0:a2:db your_email@youremail.com Add the SSH key on GitHub and check that the server accepts the connection. Log in to your GitHub account, go to "Account settings / SSH Keys", and paste in the key you just generated. Then you can test whether you can connect to GitHub's SSH server. $ ssh -T git@github.com # Attempts to ssh to github You might see a warning like the following; it's nothing to worry about, just enter yes and press Enter The authenticity of host 'github.com (207.97.227.239)' can't be established. # RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48. # Are you sure you want to continue connecting (yes/no)?
Finally, you should see a successful connection message Hi username! You've successfully authenticated, but GitHub does not # provide shell access. Installation After Node and Git are installed, run the following command to install hexo $ sudo npm install -g hexo Initialization Then run the init command to initialize hexo in the directory you specify hexo init You can also cd into the target directory and execute hexo init. At this point, all of the installation work is complete! Generate Static Pages cd into the directory you initialized and execute the following command to generate static pages into the hexo\public\ directory. hexo generate The command must be executed inside the initialized directory, otherwise it does nothing, although it does not report an error. If you modify an article's tags and the content is not regenerated properly, you can delete hexo\db.json and retry; if that is not enough, delete the corresponding files in the public directory and regenerate. Local Launch Run the following command to start a local server for previewing and debugging articles $ hexo server Visit http://localhost:4000 in a browser to see the result. Writing articles Execute the new command with the name of the article to generate hexo\source\_posts\postName.md. hexo new [layout] "postName" # New post Here layout is an optional parameter with the default value post. To see what layouts exist, look in the scaffolds directory; the file names there are the layout names. Of course, you can add your own layout by adding a file there, and you can also edit an existing layout, such as the default post layout hexo\scaffolds\post.md. title: { { title } } date: { { date } } tags: --- Please note that I added an extra space between the braces; otherwise they would be interpreted as a template tag and not display properly here.
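The scaffold shown above can be reproduced as follows. Note that in a real Hexo install the placeholders are written `{{ title }}` with no inner space (the space in this post is only there so the blog engine doesn't render them), and the file path here is a scratch copy, not the real scaffolds/post.md:

```shell
# Write a local copy of the default post scaffold for inspection.
cat > post-scaffold.md <<'EOF'
title: {{ title }}
date: {{ date }}
tags:
---
EOF

# Every front-matter key must be followed by a colon (and a space before
# any value); counting the colon lines is a quick sanity check.
grep -c ':' post-scaffold.md   # prints 3, one per front-matter key
```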
I want to add categories so they don't have to be entered manually each time; you only need to add one line to this file, as title: { { title } } date: { { date } } categories: tags: --- postName is the name of the .md file and also appears in the URL of your article. If postName contains spaces, you must surround it with quotes; postName can be Chinese. Note that every colon in the front matter must be followed by a space, or you will get an error. Take a look at the generated file hexo\source\_posts\postName.md, which reads as follows title: postName # Display name on the article page; can be modified, does not appear in the URL date: 2014-04-07 10:03:16 # Article generation time; generally left unchanged, though it can be modified categories: # Article categories; can be empty; note the space after the colon tags: # Article tags; can be empty; for multiple tags use the format [tag1, tag2, tag3]; note the space after the colon --- The body is written in Markdown. Next, you can enjoy writing your article in your favorite editor. For Markdown syntax, you can refer to Markdown Example. ### Install Oracle Database 12c on Windows Wiki The Oracle Database (commonly referred to as Oracle RDBMS or simply as Oracle) is an object-relational database management system produced and marketed by Oracle Corporation.
Official Website www.oracle.com Download Link www.oracle.com/technetwork/database/enterprise-edition/downloads Operating System Version : Microsoft Windows 8.1 Pro 64-bit Build 9600 Oracle Database Version : Database 12c Release 1 Standard Edition, Standard Edition One, and Enterprise Edition (12.1.0.1.0) Install Download the install files and extract both zip files to the same path. Locate the executable installer and double-click setup.exe. This is just for testing, so skip the updates. Finally, it finishes. Verify the Installation Start → SQL Plus; if you get ERROR : ORA-01033: ORACLE initialization or shutdown in progress it may be that the database did not shut down correctly. Try the following in a DOS command prompt: c:> sqlplus /nolog sql> connect sys/manager as sysdba sql> shutdown immediate sql> startup sql> exit During startup it may show an error message saying some parameter is missing; note down that parameter and update it in the initSID.ora file. If you get the error: ORA-01017 invalid username/password; logon denied try running the following commands in SQL Plus: SQL>conn / as sysdba # Operating system authentication login SQL>alter user system identified by system; # Changes the "system" user's password to "system" If the new password is a string of digits, it needs to be quoted, like this: SQL>alter user system identified by "12345678"; When opening Oracle SQL Developer, if you get a message like Enter the full pathname for java.exe enter C:\app\Administrator\product\12.1.0\dbhome_1\jdk\bin\java.exe Visit https://127.0.0.1:5500/em/login in a browser ### Install Oracle Database 12c on Ubuntu Desktop 12.04 LTS Wiki The Oracle Database (commonly referred to as Oracle RDBMS or simply as Oracle) is an object-relational database management system produced and marketed by Oracle Corporation.
Official Website www.oracle.com Download Link www.oracle.com/technetwork/database/enterprise-edition/downloads Operating System Version : Ubuntu Desktop 12.04.4 64-bit Oracle Database Version : Database 12c Release 1 Standard Edition, Standard Edition One, and Enterprise Edition (12.1.0.1.0) Original author of this article Installing Oracle 12c on Ubuntu 12.04 (64 bit)..a hard journey, but it's worth it! This post will document the steps needed to overcome the installation issues one can encounter when installing the latest release of the Oracle Database 12c. Some of the errors encountered look like the ones on 11g, but since there are quite a few more now, I decided to document these for others as a reference. Disclaimer: When I set out to tackle the installation errors, my goal was to "make it work, one way or the other". I'm not running this installation on a production environment, not least since Ubuntu 12.04 is not supported, but I needed a local database to fiddle with on my laptop. The solution provided may make a Linux admin cringe, and if it doesn't land one in a grave, at least make one turn over in his/her grave. I won't delve into the specific system configuration packages and required settings, since these are documented in abundance either on the OTN or on other sites. Basically, these kinds of issues are easy to resolve by just installing the missing package and re-running the installer. The issues discussed below are the errors one will encounter when the linker starts doing its thing. However, for ease of getting started, I'll give a quick listing of the needed packages and settings, starting from a "freshly" installed Ubuntu 12.04 installation.
As root root@ubuntu:~# apt-get install elfutils libaio1 libaio-dev libstdc++6-4.4-dev numactl pdksh sysstat unixODBC-dev unixODBC build-essential libaio1 gawk ksh libmotif3 alien libtool Create and prepare Oracle user root@ubuntu:~# addgroup oinstall root@ubuntu:~# addgroup dba root@ubuntu:~# useradd -g oinstall -G dba -p password -d /home/oracle -s /bin/bash oracle root@ubuntu:~# mkdir /home/oracle root@ubuntu:~# chown -R oracle:dba /home/oracle Create and prepare location to install the oracle binaries root@ubuntu:~# mkdir -p /oracle root@ubuntu:~# chown -R oracle:dba /oracle Edit and add to the sysctl.conf root@ubuntu:~# gedit /etc/sysctl.conf # Oracle 12c entries fs.aio-max-nr=1048576 fs.file-max=6815744 kernel.shmall=2097152 kernel.shmmni=4096 kernel.sem=250 32000 100 128 net.ipv4.ip_local_port_range=9000 65500 net.core.rmem_default=262144 net.core.rmem_max=4194304 net.core.wmem_default=262144 net.core.wmem_max=1048586 kernel.shmmax=1073741824 Save the file. Note: kernel.shmmax = max possible value, e.g. size of physical memory in bytes. This machine has 8GB, but I want to keep the memory allocation low, so we are specifying 1GB. Adjust the parameter as needed. Load new kernel parameters, by running following as root user root@ubuntu:~# sysctl -p Edit and adjust the /etc/security/limits.conf file root@ubuntu:~# gedit /etc/security/limits.conf #Oracle 12c oracle soft nproc 2048 oracle hard nproc 16384 oracle soft nofile 1024 oracle hard nofile 65536 Edit /etc/sudoers file, and modify # User privilege specification root ALL=(ALL:ALL) ALL to # User privilege specification root ALL=(ALL:ALL) ALL oracle ALL=(ALL:ALL) ALL Prerequisites Prepare the system settings. Install all the packages needed for the Oracle software as usual. Download and install the Oracle Database software, make sure you have the correct bit version (in my case: 64 bit). Unpack the software in a location with ample room to hold the unpacked software. 
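Before loading the kernel settings with `sysctl -p`, the fragment added to /etc/sysctl.conf can be sanity-checked by parsing it. A sketch run against a local copy of part of the fragment (the file name here is a scratch copy, not the real /etc/sysctl.conf):

```shell
# Local copy of a few of the Oracle 12c sysctl entries from above.
cat > sysctl-oracle.conf <<'EOF'
fs.file-max=6815744
kernel.shmmax=1073741824
kernel.shmall=2097152
EOF

# Extract one parameter's value by key; compare it against what you
# intended (here, the 1 GB shmmax chosen above).
awk -F= '$1 == "kernel.shmmax" { print $2 }' sysctl-oracle.conf   # prints 1073741824
```

After `sysctl -p`, the same value should be reported by `sysctl -n kernel.shmmax`.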
For the sake of documentation, let's assume I've unpacked this in the /tmp/Oracle12c directory. Navigate to /location_of_unpacked_software/database and run the command oracle@ubuntu:~$ cd /tmp/Oracle12c/database oracle@ubuntu:/tmp/Oracle12c/database$ ./runInstaller Starting Oracle Universal Installer... Checking Temp space: must be greater than 500 MB. Actual 115797 MB Passed Checking swap space: must be greater than 150 MB. Actual 3813 MB Passed Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-04-04_07-51-39PM. Please wait ... A pop-up appears while the installer loads the files. Choose whether you would like to be spammed by OTN. I choose not to. Press Next. I'm pretty sure about this, so press Yes. This is not a test run for patching, so for now I choose to skip software updates (this seems to be the default, anyway). Press Next. This pop-up can appear, and is to be expected since we are not running a supported environment. I choose to ignore this, since I have a running installation of Oracle 11g on this system. One can take the prerequisites of this Oracle version as a guideline for which packages to install. Choose to install the database software only; if this succeeds, we can install a database later to test whether the installation is successful. Press Next. A single installation will do for now. Press Next. I prefer English, so I press Next. Select the option most applicable here; I choose Enterprise so I get to test all the components to my heart's delight! Press Next. Enter the location where the database software (aka the oracle binaries) needs to be installed. I deviate from any form of standard and place it, for ease of reference, in /oracle and below. Be sure to grant the oracle user ownership and access permissions on this location. Press Next. Enter the location of the Oracle Inventory directory.
In this case it is in a new location, but a previous run has shown me you can perfectly use an existing 11g inventory location. Press Next. Select the preferred groups, I keep everything on the "dba" group, since the oracle user is taking care of all the Oracle stuff. Press Next. Check the configuration parameters, edit where necessary. When done, press Install. The PREPARE phase should not give much issues..not at all actually. The COPY phase should also pass, since this is just extracting and well..copying..If this fails, check the bit version of the download, of the extracted files..or disk space ... The linking phase...now here is where the fun starts! Expect to see a couple of these pop-ups, but no need to fret, we are going to take them on one at a time! As the pop-up suggest, open the log file on the console. Note: in the console where the ./runinstaller.sh was started, you can find the location of the log file also. In this case the log file is located at /oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log Open a separate window and put a 'tail' on this log file, since we need to keep on top of this now, and use this a lot. oracle@ubuntu:~$ tail -f /oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log What we are looking for is this INFO: Start output from spawned process: INFO: ---------------------------------- INFO: INFO: /oracle/base/dbhome_1/bin/genclntsh INFO: /usr/bin/ld: cannot find /usr/lib64/libpthread_nonshared.a inside INFO: collect2: ld returned 1 exit status INFO: genclntsh: Failed to link libclntshcore.so.12.1 INFO: make: *** [client_sharedlib] Error 1 INFO: End output from spawned process. INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'client_sharedlib' of makefile '/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. 
Exception Severity: 1 This line looks interesting INFO: /usr/bin/ld: cannot find /usr/lib64/libpthread_nonshared.a inside We seem to have missed a lib..let's fix this root@ubuntu:~# locate libpthread_nonshared /usr/lib/x86_64-linux-gnu/libpthread_nonshared.a Ah, the lib is on the system, but not on the location the installer expects it. root@ubuntu:~# mkdir /usr/lib64 root@ubuntu:~# ln -s /usr/lib /usr/lib/lib64 root@ubuntu:~# ln /usr/lib/x86_64-linux-gnu/libpthread_nonshared.a /usr/lib64/libpthread_nonshared.a Note: The system used is pure 64 bit, but oracle seems not to be aware of this. Ergo: the symlink. Now: hit the "Retry" button of the installer. Again: an error. As expected. Look in the tail of the log file INFO: /oracle/base/dbhome_1/bin/genclntsh INFO: /usr/bin/ld: cannot find /usr/lib64/libc_nonshared.a inside collect2: INFO: ld returned 1 exit status INFO: genclntsh: Failed to link libclntshcore.so.12.1 make: *** [client_sharedlib] Error 1 INFO: End output from spawned process. INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'client_sharedlib' of makefile '/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. Exception Severity: 1 Another lib we seem to miss. Let's repeat our previous trick root@ubuntu:~# locate libc_nonshared.a /usr/lib/x86_64-linux-gnu/libc_nonshared.a root@ubuntu:~# ln -s /usr/lib/x86_64-linux-gnu/libc_nonshared.a /usr/lib64 root@ubuntu:~# ls -l /usr/lib64/libc_nonshared.a lrwxrwxrwx 1 root root 42 Apr 4 20:11 /usr/lib64/libc_nonshared.a -> /usr/lib/x86_64-linux-gnu/libc_nonshared.a Now: hit the "retry" button of the installer. The installer should continue for a bit with the linking part..until... 
Let's check the log file again INFO: - Linking password utility (orapwd) INFO: rm -f /oracle/base/dbhome_1/rdbms/lib/orapwd INFO: /oracle/base/dbhome_1/bin/orald -o /oracle/base/dbhome_1/rdbms/lib/orapwd -m64 -z noexecstack -Wl,--disable-new-dtags -L/oracle/base/dbhome_1/rdbms/lib/ -L/oracle/base/dbhome_1/lib/ -L/oracle/base/dbhome_1/lib/stubs/ /oracle/base/dbhome_1/rdbms/lib/s0kuzr.o /oracle/base/dbhome_1/rdbms/lib/kuzrclsr.o -lclntsh -lclntshcore `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnnz12 INFO: -lzt12 -lztkg12 -lztkg12 -lclient12 -lnnetd12 -lvsn12 -lcommon12 -lgeneric12 -lmm -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lclient12 -lnnetd12 -lvsn12 -lcommon12 -lgeneric12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -ln INFO: ls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lclient12 -lnnetd12 -lvsn12 -lcommon12 -lgeneric12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `cat /oracle/base/dbhome_1/lib/sysliblist` -Wl,-rpath,/oracle/base/dbhome_1/lib -lm `cat /oracle/base/dbhome_1/lib/sysliblist` -ldl -lm -L/oracle/base/dbhome_1/lib -lasmclntsh12 -lcell12 -lskgxp12 -lskgxn2 -lhasgen12 -locr12 -lclsra12 -ldbcfg12 INFO: /usr/bin/ld: /oracle/base/dbhome_1/lib//libclntsh.so: undefined reference to symbol 'ztucxtb' /usr/bin/ld: note: 'ztucxtb' is defined in DSO /oracle/base/dbhome_1/lib//libnnz12.so so try adding it to the linker command line /oracle/base/dbhome_1/lib//libnnz12.so: could not read symbols: Invalid operation INFO: collect2: ld returned 1 exit status INFO: 
make: *** [/oracle/base/dbhome_1/rdbms/lib/orapwd] Error 1 INFO: End output from spawned process. INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'all_no_orcl' of makefile '/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. Exception Severity: 1 Ah, we seem to have problems with building ORAPWD. This line looks promising /oracle/base/dbhome_1/lib//libnnz12.so: could not read symbols: Invalid operation Let's investigate this lib oracle@ubuntu:~$ cd /oracle/base/dbhome_1/lib/ oracle@ubuntu:/oracle/base/dbhome_1/lib$ ldd libnnz12.so linux-vdso.so.1 => (0x00007fff5dbc5000) libclntshcore.so.12.1 => not found libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f38629f5000) /lib64/ld-linux-x86-64.so.2 (0x00007f38634dd000) Weird, this lib is an oracle lib..and the installer is looking for it in the wrong location. I'll link this lib in the general lib location where it always can be found: /usr/lib. root@ubuntu:~# ln -s /oracle/base/dbhome_1/lib/libclntshcore.so.12.1 /usr/lib/ root@ubuntu:~# ls -l /usr/lib/libclntshcore.so.12.1 lrwxrwxrwx 1 root root 47 Apr 4 20:13 /usr/lib/libclntshcore.so.12.1 -> /oracle/base/dbhome_1/lib/libclntshcore.so.12.1 Check if the linker can resolve the library oracle@ubuntu:~$ ldd /oracle/base/dbhome_1/lib/libnnz12.so linux-vdso.so.1 => (0x00007fffaf748000) libclntshcore.so.12.1 => /usr/lib/libclntshcore.so.12.1 (0x00007fe059479000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe0590b9000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fe058eb4000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fe058bb8000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fe05899b000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fe058792000) /lib64/ld-linux-x86-64.so.2 (0x00007fe05a0f1000) Ok, that is one. But there is more! 
Due to how the GCC is configured in Ubuntu, we need to do some extra alterations.. Open the make file: /oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk oracle@ubuntu:~$ gedit /oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk Search for the lines $(ORAPWD) : $(ALWAYS) $(ORAPWD_DEPS) $(SILENT)$(ECHO) $(SILENT)$(ECHO) " - Linking password utility (orapwd)" $(RMF) $@ $(ORAPWD_LINKLINE) Add the following $(ORAPWD) : $(ALWAYS) $(ORAPWD_DEPS) $(SILENT)$(ECHO) $(SILENT)$(ECHO) " - Linking password utility (orapwd)" $(RMF) $@ $(ORAPWD_LINKLINE) -lnnz12 Don't forget to save the file! Now: hit the "Retry" button of the installer. Again an error...the same? No! check the log: INFO: - Linking HS OTS agent INFO: rm -f /oracle/base/dbhome_1/rdbms/lib/hsots INFO: /oracle/base/dbhome_1/bin/orald -o /oracle/base/dbhome_1/rdbms/lib/hsots -m64 -z noexecstack -Wl,--disable-new-dtags -L/oracle/base/dbhome_1/rdbms/lib/ -L/oracle/base/dbhome_1/lib/ -L/oracle/base/dbhome_1/lib/stubs/ /oracle/base/dbhome_1/rdbms/lib/hormt.o /oracle/base/dbhome_1/rdbms/lib/homts.o /oracle/base/dbhome_1/rdbms/lib/hoat.o /oracle/base/dbhome_1/rdbms/lib/hsxaora.o /oracle/base/dbhome_1/rdbms/lib/xaondy.o -lagtsh -lpthread -lclntsh -lclntshcore `cat /oracle/base/dbhome_1/lib/sysliblist` - INFO: Wl,-rpath,/oracle/base/dbhome_1/lib -lm `cat /oracle/base/dbhome_1/lib/sysliblist` -ldl -lm -L/oracle/base/dbhome_1/lib -lnsgr12 INFO: /usr/bin/ld: /oracle/base/dbhome_1/lib//libagtsh.so: undefined reference to symbol 'ssMemRealloc' /usr/bin/ld: note: 'ssMemRealloc' is defined in DSO /oracle/base/dbhome_1/lib//libclntshcore.so so try adding it to the linker command line /oracle/base/dbhome_1/lib//libclntshcore.so: could not read symbols: Invalid operation INFO: collect2: ld returned 1 exit status INFO: make: *** [/oracle/base/dbhome_1/rdbms/lib/hsots] Error 1 INFO: End output from spawned process. 
INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'all_no_orcl' of makefile '/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. Exception Severity: 1 Let's see INFO: /usr/bin/ld: /oracle/base/dbhome_1/lib//libagtsh.so: undefined A different library..Let's investigate and fix oracle@ubuntu:~$ ldd /oracle/base/dbhome_1/lib//libagtsh.so linux-vdso.so.1 => (0x00007fffd9151000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f50778cd000) libclntsh.so.12.1 => not found libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f507750d000) /lib64/ld-linux-x86-64.so.2 (0x00007f5077eaf000) ah, an missing lib again..Also an Oracle lib.. root@ubuntu:~# ln -s /oracle/base/dbhome_1/lib/libclntsh.so.12.1 /usr/lib root@ubuntu:~# ls -l /usr/lib/libclntsh.so.12.1 lrwxrwxrwx 1 root root 47 Apr 4 20:13 /usr/lib/libclntsh.so.12.1 -> /oracle/base/dbhome_1/lib/libclntsh.so.12.1 Check if this did the trick oracle@ubutu:~$ ldd /oracle/base/dbhome_1/lib//libagtsh.so linux-vdso.so.1 => (0x00007fff12137000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007faa87fca000) libclntsh.so.12.1 => /usr/lib/libclntsh.so.12.1 (0x00007faa852db000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007faa84f1b000) /lib64/ld-linux-x86-64.so.2 (0x00007faa885ac000) libnnz12.so => /oracle/base/dbhome_1/lib/libnnz12.so (0x00007faa84805000) libons.so => /oracle/base/dbhome_1/lib/libons.so (0x00007faa845c1000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007faa842c4000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007faa840a7000) libaio.so.1 => /lib/x86_64-linux-gnu/libaio.so.1 (0x00007faa83ea5000) libclntshcore.so.12.1 => /oracle/base/dbhome_1/lib/libclntshcore.so.12.1 (0x00007faa83954000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007faa8374c000) Perfect! 
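The library fixes so far all follow the same recurring pattern: locate where the library actually lives, then symlink it into the directory the linker is searching. That pattern can be captured in a small helper; the demo paths below are throwaway files created just to exercise it (real usage targeted e.g. /usr/lib64 and /usr/lib):

```shell
# link_missing_lib: symlink a library from where it really is to where
# the linker expects to find it.
link_missing_lib() {
  lib_path=$1      # actual location of the library
  expected_dir=$2  # directory ld is searching
  mkdir -p "$expected_dir"
  ln -sf "$lib_path" "$expected_dir/$(basename "$lib_path")"
}

# Demo with throwaway paths; a real invocation in this walkthrough was
# effectively:
#   link_missing_lib /usr/lib/x86_64-linux-gnu/libc_nonshared.a /usr/lib64
mkdir -p demo/real
touch demo/real/libdemo.a
link_missing_lib "$PWD/demo/real/libdemo.a" demo/expected
ls demo/expected   # prints libdemo.a
```

After each such link, rerunning ldd on the failing Oracle library (as above) confirms whether the dependency now resolves.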
Adjust the makefile again: /oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk oracle@ubuntu:~$ gedit /oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk Search for the lines $(HSOTS) : $(ALWAYS) $(HSOTS_DEPS) $(SILENT)$(ECHO) $(SILENT)$(ECHO) " - Linking HS OTS agent" $(RMF) $@ $(HSOTS_LINKLINE) Add the following $(HSOTS) : $(ALWAYS) $(HSOTS_DEPS) $(SILENT)$(ECHO) $(SILENT)$(ECHO) " - Linking HS OTS agent" $(RMF) $@ $(HSOTS_LINKLINE) -lagtsh Save the file. For this error, the above fix doesn't seem to be enough, so let's put in a extra parameter somewhere to pacify the compiler: Open the environment file: /oracle/base/dbhome_1/rdbms/lib/env_rdbms.mk oracle@ubuntu:~$ gedit /oracle/base/dbhome_1/rdbms/lib/env_rdbms.mk Search for the lines HSOTS_LINKLINE=$(LINK) $(OPT) $(HSOTSMAI) $(HSALIB_SUBSET1) \ $(HS_OTS_DRIVER) $(HS_XASWITCH_ORA) $(RDBMSLIB)$(XAD_OFF) \ $(HSALIB_SUBSET2) Add the following HSOTS_LINKLINE=$(LINK) $(OPT) -Wl,--no-as-needed $(HSOTSMAI) $(HSALIB_SUBSET1) \ $(HS_OTS_DRIVER) $(HS_XASWITCH_ORA) $(RDBMSLIB)$(XAD_OFF) \ $(HSALIB_SUBSET2) Save the file. Now: hit the "Retry" button of the installer. Again!? Yep. Again. Same error? No! 
Check the log file INFO: - Linking HS Distributed External Procedure agent INFO: rm -f /oracle/base/dbhome_1/rdbms/lib/hsdepxa INFO: /oracle/base/dbhome_1/bin/orald -o /oracle/base/dbhome_1/rdbms/lib/hsdepxa -m64 -z noexecstack -Wl,--disable-new-dtags -L/oracle/base/dbhome_1/rdbms/lib/ -L/oracle/base/dbhome_1/lib/ -L/oracle/base/dbhome_1/lib/stubs/ /oracle/base/dbhome_1/rdbms/lib/hormd.o /oracle/base/dbhome_1/rdbms/lib/homts.o /oracle/base/dbhome_1/rdbms/lib/hoax.o /oracle/base/dbhome_1/rdbms/lib/xaondy.o -lagtsh -lpthread -lclntsh -lclntshcore `cat /oracle/base/dbhome_1/lib/sysliblist` -Wl,-rpath,/oracle/base/dbhome_1/lib -lm INFO: `cat /oracle/base/dbhome_1/lib/sysliblist` -ldl -lm -L/oracle/base/dbhome_1/lib -lnsgr12 INFO: /usr/bin/ld: /oracle/base/dbhome_1/lib//libagtsh.so: undefined reference to symbol 'ssMemRealloc' /usr/bin/ld: note: 'ssMemRealloc' is defined in DSO /oracle/base/dbhome_1/lib//libclntshcore.so so try adding it to the linker command line /oracle/base/dbhome_1/lib//libclntshcore.so: could not read symbols: Invalid operation INFO: collect2: ld returned 1 exit status INFO: make: *** [/oracle/base/dbhome_1/rdbms/lib/hsdepxa] Error 1 INFO: End output from spawned process. INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'all_no_orcl' of makefile '/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. 
Exception Severity: 1 We already fixed the dynamic library dependency, so we just adjust the following: Open the environment file: /oracle/base/dbhome_1/rdbms/lib/env_rdbms.mk oracle@ubuntu:~$ gedit /oracle/base/dbhome_1/rdbms/lib/env_rdbms.mk Search for the lines HSDEPXA_LINKLINE=$(LINK) $(OPT) $(HSDEPMAI) $(HSALIB_SUBSET1) \ $(HS_DEP_DRIVER) $(RDBMSLIB)$(XAD_OFF) \ $(HSALIB_SUBSET2) Change this to HSDEPXA_LINKLINE=$(LINK) $(OPT) -Wl,--no-as-needed $(HSDEPMAI) $(HSALIB_SUBSET1) \ $(HS_DEP_DRIVER) $(RDBMSLIB)$(XAD_OFF) \ $(HSALIB_SUBSET2) Save the file. Now: hit the "Retry" button of the installer. And again… an error. Let's see what the problem is: check the log file INFO: ly+0x3296): undefined reference to `oss_cell_discovery_close' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoOpenFile' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoCreateCtxExt' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoIO' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoFnameMax' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoCommit2P' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoCheckHdl' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoRenameFile' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoDeleteFile' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoCloseFile' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoErrorMessage' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoGetSize' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoCreateFile' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoOpenDirty' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoCheckMount' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoDestroyCtx' /oracle/base/dbhome_1/lib//libocrb12.so:
undefined reference to `kgfoShrinkFile' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoControl' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoCreate2P' INFO: collect2: ld returned 1 exit status INFO: make: *** [/oracle/base/dbhome_1/rdbms/lib/kfod] Error 1 INFO: End output from spawned process. INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'all_no_orcl' of makefile '/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. Exception Severity: 1 Ow. Ok.. So.. This is a completely different one.. This line though.. INFO: make: *** [/oracle/base/dbhome_1/rdbms/lib/kfod] Error 1 This is not a dynamic executable, so we just open the env file again.. oracle@ubuntu:~$ gedit /oracle/base/dbhome_1/rdbms/lib/env_rdbms.mk Search for the lines KFOD_LINKLINE=$(LINK) $(S0MAIN) $(SSKFODED) $(SKFODPT) $(KFODOBJ) \ $(LIBGENERIC) $(LLIBDBTOOLS) \ $(LIBGENERIC) $(LLIBSAGE) $(LLIBSKGXP) $(LIBCORE) \ $(CSSCLNTLIBS_SERVER) $(LLIBASMCLNT) $(LINKTTLIBS) Change this to KFOD_LINKLINE=$(LINK) $(S0MAIN) -Wl,--no-as-needed $(SSKFODED) $(SKFODPT) $(KFODOBJ) \ $(LIBGENERIC) $(LLIBDBTOOLS) \ $(LIBGENERIC) $(LLIBSAGE) $(LLIBSKGXP) $(LIBCORE) \ $(CSSCLNTLIBS_SERVER) $(LLIBASMCLNT) $(LINKTTLIBS) Save the file. Now: hit the "Retry" button of the installer. Again!
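Several of these errors name a symbol without saying which library provides it (the kgfo* ones above, for instance); other times the linker helpfully adds "defined in DSO …", as it did earlier for ssMemRealloc. When the hint is missing, you can hunt for the provider yourself. A rough sketch of my own, assuming this post's paths: a plain grep over the .so files works as a first pass because exported symbol names are stored as literal strings inside the binaries, and `nm -D` (binutils) then confirms which file actually defines the symbol rather than merely referencing it.

```shell
# First pass: which libraries mention the symbol at all?
# (This can match users of the symbol as well as its definer.)
SYM=ssMemRealloc
LIBDIR=/oracle/base/dbhome_1/lib
grep -l "$SYM" "$LIBDIR"/*.so 2>/dev/null
# Precise check (needs binutils): a "T" entry marks the definition.
for so in $(grep -l "$SYM" "$LIBDIR"/*.so 2>/dev/null); do
    nm -D "$so" 2>/dev/null | grep " T $SYM$" > /dev/null && echo "defined in: $so"
done
```

For ssMemRealloc this should point at libclntshcore.so, matching the linker's own hint in the log above.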
let's check the log INFO: ellScanInit': kfkl.c:(text.unlikely+0x3081): undefined reference to `oss_initialize' kfkl.c:(text.unlikely+0x30e9): undefined reference to `oss_cell_discovery_open' kfkl.c INFO: :(text.unlikely+0x3172): undefined reference to `oss_cell_discovery_fetch' /oracle/base/dbhome_1/lib//libasmclnt12.a(kfkl.o): In function `kfklCellScanNext': kfkl.c:(text.unlikely+0x3231): undefined reference to `oss_cell_discovery_fetch' /oracle/base/dbhome_1/lib//libasmclnt12.a(kfkl.o): In function `kfklCellScanTerm': kfkl.c:(text.unlikely+0x3296): undefined reference to `oss_cell_discovery_close' collect2: ld returned 1 exit status INFO: make: *** [/oracle/base/dbhome_1/rdbms/lib/amdu] Error 1 INFO: End output from spawned process. INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'all_no_orcl' of makefile '/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. Exception Severity: 1 Ah, the AMDU is the issue..again: we adjust the gcc parameter.. oracle@ubuntu:~$ gedit /oracle/base/dbhome_1/rdbms/lib/env_rdbms.mk Search for the lines AMDU_LINKLINE=$(LINK) $(S0MAIN) $(SSKFMUED) $(SKFMUPT) \ $(LLIBDBTOOLS) $(LLIBCORE) $(LLIBGENERIC) $(LLIBUNLSRTL) \ $(LLIBNLSRTL) $(LLIBCORE) $(LLIBSAGE) $(LLIBSKGXP) \ $(LLIBNLSRTL) $(CSSCLNTLIBS_SERVER) $(LLIBASMCLNT) $(LINKTTLIBS) Change these to AMDU_LINKLINE=$(LINK) $(S0MAIN) -Wl,--no-as-needed $(SSKFMUED) $(SKFMUPT) \ $(LLIBDBTOOLS) $(LLIBCORE) $(LLIBGENERIC) $(LLIBUNLSRTL) \ $(LLIBNLSRTL) $(LLIBCORE) $(LLIBSAGE) $(LLIBSKGXP) \ $(LLIBNLSRTL) $(CSSCLNTLIBS_SERVER) $(LLIBASMCLNT) $(LINKTTLIBS) Save the file. Now: hit the "Retry" button of the installer. Check, another one! Check the log. INFO: make: *** [/oracle/base/dbhome_1/rdbms/lib/kfed] Error 1 INFO: End output from spawned process. 
INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'all_no_orcl' of makefile '/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. Exception Severity: 1 ah, KFED..Also not dynamic. oracle@ubuntu~$ gedit /oracle/base/dbhome_1/rdbms/lib/env_rdbms.mk Search for the lines KFED_LINKLINE=$(LINK) $(S0MAIN) $(SSKFEDED) $(SKFEDPT) \ $(LLIBDBTOOLS) $(LLIBSAGE) \ $(LLIBSKGXP) $(CSSCLNTLIBS_SERVER) $(LLIBASMCLNT) $(LINKTTLIBS) Add the line below KFED_LINKLINE=$(LINK) $(S0MAIN) -Wl,--no-as-needed $(SSKFEDED) $(SKFEDPT) \ $(LLIBDBTOOLS) $(LLIBSAGE) \ $(LLIBSKGXP) $(CSSCLNTLIBS_SERVER) $(LLIBASMCLNT) $(LINKTTLIBS) Save the file. Now: hit the "Retry" button of the installer. Yep...It's for real..Another one..Check the log INFO: rm -f /oracle/base/dbhome_1/rdbms/lib/plshprof INFO: /oracle/base/dbhome_1/bin/orald -o /oracle/base/dbhome_1/rdbms/lib/plshprof -m64 -z noexecstack -Wl,--disable-new-dtags -L/oracle/base/dbhome_1/rdbms/lib/ -L/oracle/base/dbhome_1/lib/ -L/oracle/base/dbhome_1/lib/stubs/ /oracle/base/dbhome_1/rdbms/lib/s0khpf.o -lclient12 -lpls12 -lrt -lplp12 -lslax12 -lpls12 -lrt -lplp12 -lclntsh -lclntshcore `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs INFO: 12 -ln12 -lnl12 -lnnz12 -lzt12 -lztkg12 -lztkg12 -lclient12 -lnnetd12 -lvsn12 -lcommon12 -lgeneric12 -lmm -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lclient12 -lnnetd12 -lvsn12 -lcommon12 -lgeneric12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls1 INFO: 2 -lcore12 -lsnls12 
-lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lclient12 -lnnetd12 -lvsn12 -lcommon12 -lgeneric12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `cat /oracle/base/dbhome_1/lib/sysliblist` -Wl,-rpath,/oracle/base/dbhome_1/lib -lm `cat /oracle/base/dbhome_1/lib/sysliblist` -ldl -lm -L/oracle/base/dbhome_1/lib INFO: /usr/bin/ld: /oracle/base/dbhome_1/lib//libclient12.a(kpue.o): undefined reference to symbol 'ons_subscriber_close' /usr/bin/ld: note: 'ons_subscriber_close' is defined in DSO /oracle/base/dbhome_1/lib/libons.so so try adding it to the linker command line /oracle/base/dbhome_1/lib/libons.so: could not read symbols: Invalid operation INFO: collect2: ld returned 1 exit status INFO: make: *** [/oracle/base/dbhome_1/rdbms/lib/plshprof] Error 1 INFO: End output from spawned process. INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'utilities' of makefile '/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. Exception Severity: 1 The fix is known by now, open the make file oracle@ubuntu:~$ gedit /oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk Search for the lines $(PLSHPROF) : $(ALWAYS) $(PLSHPROF_DEPS) $(SILENT)$(ECHO) $(SILENT)$(ECHO) " - Linking hierarchical profiler utility (plshprof)" $(RMF) $@ $(PLSHPROF_LINKLINE) Explicitly add the libons.so file, to satisfy the compiler again $(PLSHPROF) : $(ALWAYS) $(PLSHPROF_DEPS) $(SILENT)$(ECHO) $(SILENT)$(ECHO) " - Linking hierarchical profiler utility (plshprof)" $(RMF) $@ $(PLSHPROF_LINKLINE) -lons Save the file. Now: hit the "Retry" button of the installer. Yes, indeed..The next one is popping up..Again: hit the log file.. 
INFO: - Linking recovery manager (rman) INFO: rm -f /oracle/base/dbhome_1/rdbms/lib/rman INFO: /oracle/base/dbhome_1/bin/orald -o /oracle/base/dbhome_1/rdbms/lib/rman -m64 -z noexecstack -Wl,--disable-new-dtags -L/oracle/base/dbhome_1/rdbms/lib/ -L/oracle/base/dbhome_1/lib/ -L/oracle/base/dbhome_1/lib/stubs/ /oracle/base/dbhome_1/lib/s0main.o /oracle/base/dbhome_1/rdbms/lib/sskrmed.o /oracle/base/dbhome_1/rdbms/lib/skrmpt.o -ldbtools12 -lclient12 -lsql12 -lpls12 -lrt -lplp12 -lsnls12 -lunls12 -lnls12 -lslax12 -lpls12 -lrt -lplp12 /oracle/base/dbhome_1/lib/libplc12.a -lclntsh -lclntshcore `cat /o INFO: racle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnnz12 -lzt12 -lztkg12 -lztkg12 -lclient12 -lnnetd12 -lvsn12 -lcommon12 -lgeneric12 -lmm -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /oracle/base/dbhome_1/ INFO: lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lclient12 -lnnetd12 -lvsn12 -lcommon12 -lgeneric12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lclient12 -lnnetd12 -lvsn12 -lcommon12 -lgeneric12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `cat /oracle/base/dbhome_1/lib/sysliblist` -Wl,-rpath,/oracle/base/dbhome_1/lib -lm INFO: `cat /oracle/base/dbhome_1/lib/sysliblist` -ldl -lm -L/oracle/base/dbhome_1/lib INFO: /usr/bin/ld: /oracle/base/dbhome_1/lib//libclient12.a(kpue.o): undefined reference to symbol 'ons_subscriber_close' /usr/bin/ld: note: 'ons_subscriber_close' is defined in DSO /oracle/base/dbhome_1/lib/libons.so so try adding it to the linker command line 
/oracle/base/dbhome_1/lib/libons.so: could not read symbols: Invalid operation INFO: collect2: ld returned 1 exit status INFO: make: *** [/oracle/base/dbhome_1/rdbms/lib/rman] Error 1 INFO: End output from spawned process. INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'utilities' of makefile '/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. Exception Severity: 1 Ah, not an un-important one..Let's fix this quickly oracle@ubuntu:~$ gedit /oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk Search for the lines $(RMAN) : $(ALWAYS) $(RMAN_DEPS) $(SILENT)$(ECHO) $(SILENT)$(ECHO) " - Linking recovery manager (rman)" $(RMF) $@ $(RMAN_LINKLINE) Change to $(RMAN) : $(ALWAYS) $(RMAN_DEPS) $(SILENT)$(ECHO) $(SILENT)$(ECHO) " - Linking recovery manager (rman)" $(RMF) $@ $(RMAN_LINKLINE) -lons Save the file. Now: hit the "Retry" button of the installer. Yes, again...But the end is nigh..Check the log files again.. 
INFO: - Linking dg4pwd utility INFO: rm -f /oracle/base/dbhome_1/rdbms/lib/dg4pwd INFO: /oracle/base/dbhome_1/bin/orald -o /oracle/base/dbhome_1/rdbms/lib/dg4pwd -m64 -z noexecstack -Wl,--disable-new-dtags -L/oracle/base/dbhome_1/rdbms/lib/ -L/oracle/base/dbhome_1/lib/ -L/oracle/base/dbhome_1/lib/stubs/ /oracle/base/dbhome_1/rdbms/lib/dg4pwd.o /oracle/base/dbhome_1/rdbms/lib/houzi.o -lpthread -lclntsh -lclntshcore `cat /oracle/base/dbhome_1/lib/sysliblist` -Wl,-rpath,/oracle/base/dbhome_1/lib -lm `cat /oracle/base/dbhome_1/lib/sysliblist` -ldl -lm -L/oracle/base/dbhome_1/lib INFO: /usr/bin/ld: /oracle/base/dbhome_1/rdbms/lib/houzi.o: undefined reference to symbol 'ztcsh' /usr/bin/ld: note: 'ztcsh' is defined in DSO /oracle/base/dbhome_1/lib/libnnz12.so so try adding it to the linker command line /oracle/base/dbhome_1/lib/libnnz12.so: could not read symbols: Invalid operation INFO: collect2: ld returned 1 exit status INFO: make: *** [/oracle/base/dbhome_1/rdbms/lib/dg4pwd] Error 1 INFO: End output from spawned process. INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'utilities' of makefile '/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. Exception Severity: 1 Ah, GCC is in a twitch again /oracle/base/dbhome_1/lib/libnnz12.so: could not read symbols: Invalid operation Edit the make file again oracle@ubuntu:~$ gedit /oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk Search for the lines $(TG4PWD) : $(ALWAYS) $(TG4PWD_DEPS) $(SILENT)$(ECHO) $(SILENT)$(ECHO) " - Linking $(TG4DG4)pwd utility" $(RMF) $@ $(TG4PWD_LINKLINE) And add the following $(TG4PWD) : $(ALWAYS) $(TG4PWD_DEPS) $(SILENT)$(ECHO) $(SILENT)$(ECHO) " - Linking $(TG4DG4)pwd utility" $(RMF) $@ $(TG4PWD_LINKLINE) -lnnz12 Save the file. Now: hit the "Retry" button of the installer. 
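Appending a library to a link rule in ins_rdbms.mk, as we just did for the dg4pwd utility (and will do again shortly), can also be done non-interactively. This sed one-liner is my own sketch, not part of the original fix; it assumes the makefile matches the lines quoted in this post, so double-check the result and keep the .bak backup.

```shell
# Hypothetical one-liner: append -lnnz12 to the dg4pwd link rule
# in ins_rdbms.mk ("&" in the replacement re-inserts the matched text).
# sed -i.bak keeps a backup copy of the makefile.
MK=/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk
sed -i.bak 's/\$(RMF) \$@ \$(TG4PWD_LINKLINE)$/& -lnnz12/' "$MK"
# Verify:
grep -n 'TG4PWD_LINKLINE) -lnnz12' "$MK"
```

Swap the `*_LINKLINE` name and the library (`-lons`, `-lagtsh`, …) for the other rules patched in this post.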
And again: Check the log file INFO: Linking external procedure agent (/oracle/base/dbhome_1/rdbms/lib/extproc) INFO: rm -f /oracle/base/dbhome_1/rdbms/lib/extproc INFO: /oracle/base/dbhome_1/bin/orald -o /oracle/base/dbhome_1/rdbms/lib/extproc -m64 -z noexecstack -Wl,--disable-new-dtags -L/oracle/base/dbhome_1/rdbms/lib/ -L/oracle/base/dbhome_1/lib/ -L/oracle/base/dbhome_1/lib/stubs/ /oracle/base/dbhome_1/rdbms/lib/hormc.o /oracle/base/dbhome_1/rdbms/lib/homts.o -lagtsh -lpthread -lclntsh -lclntshcore `cat /oracle/base/dbhome_1/lib/sysliblist` -Wl,-rpath,/oracle/base/dbhome_1/lib -lm `cat /oracle/base/dbhome_1/lib/sysliblist` -ldl -lm -L/oracle/base/dbhome_1/lib -lnsgr12 INFO: /usr/bin/ld: /oracle/base/dbhome_1/lib//libagtsh.so: undefined reference to symbol 'nsdisc' /usr/bin/ld: note: 'nsdisc' is defined in DSO /oracle/base/dbhome_1/lib//libclntsh.so so try adding it to the linker command line /oracle/base/dbhome_1/lib//libclntsh.so: could not read symbols: Invalid operation collect2: ld returned 1 exit status INFO: make: *** [/oracle/base/dbhome_1/rdbms/lib/extproc] Error 1 INFO: End output from spawned process. INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'utilities' of makefile '/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. Exception Severity: 1 And edit the makefile to satisfy the linker oracle@ubuntu:~$ gedit /oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk Search for the lines $(EXTPROC): $(ALWAYS) $(EXTPROC_DEPS) $(SILENT)$(ECHO) $(SILENT)$(ECHO) "Linking external procedure agent ($@)" $(RMF) $@ $(EXTPROC_LINKLINE) And change to $(EXTPROC): $(ALWAYS) $(EXTPROC_DEPS) $(SILENT)$(ECHO) $(SILENT)$(ECHO) "Linking external procedure agent ($@)" $(RMF) $@ $(EXTPROC_LINKLINE) -lagtsh Save the file.
Now edit the env file oracle@ubuntu:~$ gedit /oracle/base/dbhome_1/rdbms/lib/env_rdbms.mk Search for the lines EXTPROC_LINKLINE=$(LINK) $(OPT) $(EXTPMAI) $(PROD_EXTPROC_OPT) \ $(HSALIB_SUBSET1) $(HSALIB_SUBSET2) And add EXTPROC_LINKLINE=$(LINK) $(OPT) -Wl,--no-as-needed $(EXTPMAI) $(PROD_EXTPROC_OPT) \ $(HSALIB_SUBSET1) $(HSALIB_SUBSET2) Save the file. Now: hit the "Retry" button of the installer. Almost there! Check the log again INFO: - Linking tnslsnr INFO: rm -f tnslsnr INFO: /oracle/base/dbhome_1/bin/orald -o tnslsnr -m64 -z noexecstack -Wl,--disable-new-dtags -L/oracle/base/dbhome_1/network/lib/ -L/oracle/base/dbhome_1/lib/ -L/oracle/base/dbhome_1/lib/stubs/ /oracle/base/dbhome_1/network/lib/s0nsgl.o /oracle/base/dbhome_1/network/lib/snsglp.o -lclntsh -lclntshcore `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnnz12 -lzt12 -lztkg12 -lztkg12 -lc INFO: lient12 -lnnetd12 -lvsn12 -lcommon12 -lgeneric12 -lmm -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /oracle/base/dbhome_1/lib/ldflags` -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lclient12 -lnnetd12 -lvsn12 -lcommon12 -lgeneric12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls1 INFO: 2 -lsnls12 -lnls12 -lcore12 -lnls12 -lclient12 -lnnetd12 -lvsn12 -lcommon12 -lgeneric12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `cat /oracle/base/dbhome_1/lib/sysliblist` -Wl,-rpath,/oracle/base/dbhome_1/lib -lm `cat /oracle/base/dbhome_1/lib/sysliblist` -ldl -lm -L/oracle/base/dbhome_1/lib -lons -lnl12 -ln12 -lnlsnr12 -lnlsnrc12 -lnsgr12 -lncm12 -lnro12 -ln12 -lnl12 -lnlsnr12 -lnlsnrc12 -lncm12 -ln12 -l 
INFO: m INFO: /usr/bin/ld: /oracle/base/dbhome_1/lib//libclntsh.so: undefined reference to symbol 'ztcr2rnd' /usr/bin/ld: note: 'ztcr2rnd' is defined in DSO /oracle/base/dbhome_1/lib//libnnz12.so so try adding it to the linker command line /oracle/base/dbhome_1/lib//libnnz12.so: could not read symbols: Invalid operation INFO: collect2: ld returned 1 exit status INFO: make: *** [tnslsnr] Error 1 INFO: End output from spawned process. INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'install' of makefile '/oracle/base/dbhome_1/network/lib/ins_net_server.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. Exception Severity: 1 Edit the make file (THIS IS NOT THE SAME AS BEFORE!) oracle@ubuntu:~$ gedit /oracle/base/dbhome_1/network/lib/ins_net_server.mk look for tnslsnr: $(S0NSGL) $(SNSGLP) $(NSGLPNP) $(SILENT)$(ECHO) " - Linking $(TNSLSNR)" $(RMF) $@ $(TNSLSNR_LINKLINE) And add TWO libs tnslsnr: $(S0NSGL) $(SNSGLP) $(NSGLPNP) $(SILENT)$(ECHO) " - Linking $(TNSLSNR)" $(RMF) $@ $(TNSLSNR_LINKLINE) -lnnz12 -lons Save the file. Now: hit the "Retry" button of the installer. Yes, again..This is still expected..Well..You know where to look now, don't you? 
INFO: /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoCloseFile' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoShrinkFile' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoCommit2P' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoErrorMessage' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoGetSize' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoOpenDirty' /oracle/base/dbhome_1/lib//libo INFO: crb12.so: undefined reference to `kgfoIO' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoCreateCtxExt' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoCreate2P' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoCheckHdl' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoDestroyCtx' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoRenameFile' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kg INFO: foCreateFile' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoDeleteFile' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoFnameMax' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoCheckMount' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoControl' /oracle/base/dbhome_1/lib//libocrb12.so: undefined reference to `kgfoOpenFile' /oracle/base/dbhome_1/lib//libocrutl12.so: undefined reference to `lfieno' collect2: ld returned 1 e INFO: xit status INFO: make: *** [/oracle/base/dbhome_1/rdbms/lib/oracle] Error 1 INFO: End output from spawned process. INFO: ---------------------------------- INFO: Exception thrown from action: make Exception Name: MakefileException Exception String: Error in invoking target 'irman ioracle' of makefile '/oracle/base/dbhome_1/rdbms/lib/ins_rdbms.mk'. See '/oracle/oraInventory/logs/installActions2014-04-05_08-51-51PM.log' for details. 
Exception Severity: 1 Ah, we have seen this before! A library is not feeling satisfied.. Fix it like this oracle@ubuntu:~$ ldd /oracle/base/dbhome_1/lib//libocrb12.so linux-vdso.so.1 => (0x00007fffa83e3000) libocrutl12.so => not found libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe62fe75000) /lib64/ld-linux-x86-64.so.2 (0x00007fe630520000) root@ubuntu:~# ln -s /oracle/base/dbhome_1/lib//libocrutl12.so /usr/lib root@ubuntu:~# ls -l /usr/lib/libocrutl12.so lrwxrwxrwx 1 root root 41 Sep 19 13:44 /usr/lib/libocrutl12.so -> /oracle/base/dbhome_1/lib//libocrutl12.so oracle$ ldd /oracle/base/dbhome_1/lib//libocrb12.so linux-vdso.so.1 => (0x00007fff3d7ff000) libocrutl12.so => /usr/lib/libocrutl12.so (0x00007fec9674c000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fec9638d000) /lib64/ld-linux-x86-64.so.2 (0x00007fec96c48000) Ok, the library is happy.. Then edit the file : /oracle/base/dbhome_1/rdbms/lib/env_rdbms.mk oracle@ubuntu:~$ gedit /oracle/base/dbhome_1/rdbms/lib/env_rdbms.mk Find out ORACLE_LINKLINE=$(ORACLE_LINKER) $(PL_FLAGS) $(ORAMAI) $(SSORED) $(TTCSOI) \ $(ORACLE_KERNEL_LIBS) $(LINKLDLIBS) Change to ORACLE_LINKLINE=$(ORACLE_LINKER) -Wl,--no-as-needed $(PL_FLAGS) $(ORAMAI) $(SSORED) $(TTCSOI) \ $(ORACLE_KERNEL_LIBS) $(LINKLDLIBS) Save the file. Now: hit the "Retry" button of the installer. Wait,WAIT!! ... Is it for real!? YES! We are PAST THE LINKER!! Now let the installer humm along.. AH! This looks familiar...well..let's just do this.. Execute the scripts AS ROOT root@ubuntu:~# /oracle/oraInventory/orainstRoot.sh Changing permissions of /oracle/oraInventory. Adding read,write permissions for group. Removing read,write,execute permissions for world. Changing groupname of /oracle/oraInventory to oinstall. The execution of the script is complete. 
root@ubuntu:~# /oracle/base/dbhome_1/root.sh Performing root user operation for Oracle 12c The following environment variables are set as: ORACLE_OWNER= oracle ORACLE_HOME= /oracle/base/dbhome_1 Enter the full pathname of the local bin directory: [/usr/local/bin]: Copying dbhome to /usr/local/bin ... Copying oraenv to /usr/local/bin ... Copying coraenv to /usr/local/bin ... Creating /etc/oratab file... Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created Finished running generic part of root script. Now product-specific root actions will be performed. And continue by pressing OK on the install screen. Indeed...Successful indeed…The base software is installed! Great job, well done! Now grab yourself a good cup of coffee and relish in this accomplishment! No mean feat at all! Of course, this is only the basis, we need to reap the benefits of all this work by installing a database (and at the same time doing a sanity check to see if we really did our work correctly).. So: let's fire up the DBCA! Quickly set up the environment oracle@ubuntu:~$ export ORACLE_HOME=/oracle/base/dbhome_1 oracle@ubuntu:~$ export PATH=$PATH:$ORACLE_HOME/bin oracle@ubuntu:~$ dbca Press Next. Select advanced mode (we like control!) and press Next. A General purpose db with samples: perfect! Press Next. Of course we go for the new features, but let's play it cool, just create a container with just one PDB first.. Make the adjustments, and press Next. Due to lack of OEM Cloud…we stick to the EM Database express (also VERY nice to play with!) Press Next. One ring to rule them all…erhm..password that is..will do..Press Next after making the adjustments. One listener will suffice for now..Press Next. Just a simple setup for now WITH FRA and ARCHIVING enabled..we are going to back up this database before we "rough treat" it..Make the adjustments, and press Next.
Yes, we love the examples…and we do need some data to play and display with..Press Next. I can live with these settings, and we change them later anyway..so press Next. Hmm, yes, please create a template for future reference, and yes, please create the database. Click the options and press Next. Check the settings…and press Finish (the moment of truth!).. Almost then...Click the pop-up..Press OK. The DBCA gets going.. And finishes! Note down this information and press OK. Press Close. And we are done. A new fresh database ready to use and abuse! A small test is, of course, mandatory oracle@ubuntu:~$ sqlplus / as sysdba SQL*Plus: Release 12.1.0.1.0 Production on Fri Apr 4 21:18:00 2014 Copyright (c) 1982, 2013, Oracle. All rights reserved. Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options oracle$ export ORACLE_SID=oraclepdb oracle$ sqlplus / as sysdba SQL*Plus: Release 12.1.0.1.0 Production on Fri Apr 4 21:18:00 2014 Copyright (c) 1982, 2013, Oracle. All rights reserved. Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options SQL> select instance_name from gv$instance; INSTANCE_NAME ---------------- oracledb SQL>exit Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options And RMAN $ rman target / Recovery Manager: Release 12.1.0.1.0 - Production on Fri Apr 4 21:18:00 2014 Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORACLEDB (DBID=415606655) RMAN> show all; using target database control file instead of recovery catalog RMAN configuration parameters for database with db_unique_name PACKARDDB are: CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default CONFIGURE BACKUP OPTIMIZATION OFF; # default CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default CONFIGURE CONTROLFILE AUTOBACKUP ON; # default CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default CONFIGURE MAXSETSIZE TO UNLIMITED; # default CONFIGURE ENCRYPTION FOR DATABASE OFF; # default CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/base/dbhome_1/dbs/snapcf_packarddb.f'; # default RMAN> It seems we are ready to roll! I hope this post was of use to you, and of course, I'm open to suggestions. Please correct me if I made wrong assumptions, or you have improvements. Happy Database'ing! ### PHP Merge Sort Wiki Merge sort was invented by John von Neumann, a big name in computer science history. It is a divide and conquer algorithm. It sorts arrays by dividing them recursively into halves (divide) and then sorting and merging them back together (conquer). The good thing about this algorithm is that it is stable: it preserves the order of equal elements (if you have two 25s in an array, merge sort will place the one that came earlier in the original array on the left side).
Example (consisting of two functions: divide and conquer) <?php   $arr = array(); for ($i = 0; $i < 100; ++$i) {     $arr[] = $i; } shuffle($arr); $sortedArr = divide($arr); var_dump($sortedArr);   function divide(array $arr) {     if (1 === count($arr)) {         return $arr;     }     $left = $right = array();     $middle = round(count($arr)/2);     for ($i = 0; $i < $middle; ++$i) {         $left[] = $arr[$i];     }     for ($i = $middle; $i < count($arr); ++$i) {         $right[] = $arr[$i];     }     $left = divide($left);     $right = divide($right);     return conquer($left, $right); }   function conquer(array $left, array $right) {     $result = array();     while (count($left) > 0 || count($right) > 0) {         if (count($left) > 0 && count($right) > 0) {             $firstLeft = current($left);             $firstRight = current($right);             if ($firstLeft <= $firstRight) {                 $result[] = array_shift($left);             } else {                 $result[] = array_shift($right);             }         } else if (count($left) > 0) {             $result[] = array_shift($left);         } else if (count($right) > 0) {             $result[] = array_shift($right);         }     }     return $result; } Which will output array(100) { [0]=> int(0) [1]=> int(1) [2]=> int(2) [3]=> int(3) [4]=> int(4) [5]=> int(5) [6]=> int(6) [7]=> int(7) [8]=> int(8) [9]=> int(9) [10]=> int(10) [11]=> int(11) [12]=> int(12) [13]=> int(13) [14]=> int(14) [15]=> int(15) [16]=> int(16) [17]=> int(17) [18]=> int(18) [19]=> int(19) [20]=> int(20) [21]=> int(21) [22]=> int(22) [23]=> int(23) [24]=> int(24) [25]=> int(25) [26]=> int(26) [27]=> int(27) [28]=> int(28) [29]=> int(29) [30]=> int(30) [31]=> int(31) [32]=> int(32) [33]=> int(33) [34]=> int(34) [35]=> int(35) [36]=> int(36) [37]=> int(37) [38]=> int(38) [39]=> int(39) [40]=> int(40) [41]=> int(41) [42]=> int(42) [43]=> int(43) [44]=> int(44) [45]=> int(45) [46]=> int(46) [47]=> int(47) [48]=> int(48) [49]=> 
int(49) [50]=> int(50) [51]=> int(51) [52]=> int(52) [53]=> int(53) [54]=> int(54) [55]=> int(55) [56]=> int(56) [57]=> int(57) [58]=> int(58) [59]=> int(59) [60]=> int(60) [61]=> int(61) [62]=> int(62) [63]=> int(63) [64]=> int(64) [65]=> int(65) [66]=> int(66) [67]=> int(67) [68]=> int(68) [69]=> int(69) [70]=> int(70) [71]=> int(71) [72]=> int(72) [73]=> int(73) [74]=> int(74) [75]=> int(75) [76]=> int(76) [77]=> int(77) [78]=> int(78) [79]=> int(79) [80]=> int(80) [81]=> int(81) [82]=> int(82) [83]=> int(83) [84]=> int(84) [85]=> int(85) [86]=> int(86) [87]=> int(87) [88]=> int(88) [89]=> int(89) [90]=> int(90) [91]=> int(91) [92]=> int(92) [93]=> int(93) [94]=> int(94) [95]=> int(95) [96]=> int(96) [97]=> int(97) [98]=> int(98) [99]=> int(99) } The best way to understand how it works is to apply it to a smaller array and add a little debugging inside the functions so we can see what’s happening. After a little tweaking: <?php   $arr = array(4, 5, 1, 3, 2); $sortedArr = divide($arr); var_dump($sortedArr);   function divide(array $arr) {     if (1 === count($arr)) {         return $arr;     }     $left = $right = array();     $middle = round(count($arr)/2);     for ($i = 0; $i < $middle; ++$i) {         $left[] = $arr[$i];     }     for ($i = $middle; $i < count($arr); ++$i) {         $right[] = $arr[$i];     }     $left = divide($left);     $right = divide($right);     echo "We are going to conquer these two arrays:\narray(",     implode(", ", $left), ")\narray(", implode(", ", $right), ")\n";     $conquered = conquer($left, $right);     echo "After conquering we get: array(", implode(", ", $conquered), ")\n\n";     return $conquered; }   function conquer(array $left, array $right) {     $result = array();     while (count($left) > 0 || count($right) > 0) {         if (count($left) > 0 && count($right) > 0) {             $firstLeft = current($left);             $firstRight = current($right);             if ($firstLeft <= $firstRight) {                 $result[] = array_shift($left);             } else {                 $result[] = array_shift($right);             }         } else if (count($left) > 0) {             $result[] =
array_shift($left);         } else if (count($right) > 0) {             $result[] = array_shift($right);         }     }     return $result; } We are going to conquer these two arrays: array(4) array(5) After conquering we get: array(4, 5) We are going to conquer these two arrays: array(4, 5) array(1) After conquering we get: array(1, 4, 5) We are going to conquer these two arrays: array(3) array(2) After conquering we get: array(2, 3) We are going to conquer these two arrays: array(1, 4, 5) array(2, 3) After conquering we get: array(1, 2, 3, 4, 5) array(5) { [0]=> int(1) [1]=> int(2) [2]=> int(3) [3]=> int(4) [4]=> int(5) } ### PHP Comb Sort Wiki Just as Shellsort is basically a generalisation of Insertion sort, Comb sort is a generalisation of Bubble sort. In the same way that Shellsort decreases the gap between the elements of the subarrays sent as input to the Insertion sort function, Comb sort also decreases the gap after every iteration, sending increasingly lengthy subarrays to the Bubble sort function. 
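Before looking at the full implementation below, it can help to preview the gap sequence that a shrink factor of 1.3 produces. This is a minimal sketch; the helper name gapSequence is my own and is not part of the example that follows:

```php
<?php

define('SHRINK_FACTOR', 1.3);

// Compute the shrinking gap sequence comb sort will use for a given length:
// gap starts at floor(length / 1.3) and is divided by 1.3 until it hits zero.
function gapSequence($length)
{
    $gaps = array();
    $gap = floor($length / SHRINK_FACTOR);
    while ($gap > 0) {
        $gaps[] = (int)$gap;
        $gap = floor($gap / SHRINK_FACTOR);
    }
    return $gaps;
}

// For 100 elements: 76, 58, 44, 33, 25, 19, 14, 10, 7, 5, 3, 2, 1
echo implode(', ', gapSequence(100)), "\n";
```

Note that the sequence ends at 1, so the last pass is a plain Bubble sort over adjacent elements, which guarantees the array ends up fully sorted.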
Example <?php   define('SHRINK_FACTOR', 1.3);   $arr = array(); for ($i = 0; $i < 100; ++$i) {     $arr[] = $i; } shuffle($arr); $sortedArr = combSort($arr); var_dump($sortedArr);   function combSort(array $arr) {     $gap = floor(count($arr)/SHRINK_FACTOR);     while ($gap > 0) {         for ($i = 0; $i < count($arr)-$gap; ++$i) {             $arrWithGapsKeys = array();             $arrWithGaps = array();             $loop = true;             $j = $i;             while ($loop) {                 if (isset($arr[$j])) {                     $arrWithGapsKeys[] = (int)$j;                     $arrWithGaps[] = $arr[$j];                     $j += $gap;                 } else {                     $loop = false;                 }             }             $arrWithGapsOrdered = bubbleSort($arrWithGaps);             foreach ($arrWithGapsKeys as $key) {                 $arr[$key] = current($arrWithGapsOrdered);                 next($arrWithGapsOrdered);             }         }         $gap = floor($gap/SHRINK_FACTOR);     }     return $arr; }   function bubbleSort(array $arr) {     $sorted = false;     while (false === $sorted) {         $sorted = true;         for ($i = 0; $i < count($arr)-1; ++$i) {             $current = $arr[$i];             $next = $arr[$i+1];             if ($next < $current) {                 $arr[$i] = $next;                 $arr[$i+1] = $current;                 $sorted = false;             }         }     }     return $arr; } Will result in array(100) { [0]=> int(0) [1]=> int(1) [2]=> int(2) [3]=> int(3) [4]=> int(4) [5]=> int(5) [6]=> int(6) [7]=> int(7) [8]=> int(8) [9]=> int(9) [10]=> int(10) [11]=> int(11) [12]=> int(12) [13]=> int(13) [14]=> int(14) [15]=> int(15) [16]=> int(16) [17]=> int(17) [18]=> int(18) [19]=> int(19) [20]=> int(20) [21]=> int(21) [22]=> int(22) [23]=> int(23) [24]=> int(24) [25]=> int(25) [26]=> int(26) [27]=> int(27) [28]=> int(28) [29]=> int(29) [30]=> int(30) [31]=> int(31) [32]=> int(32) [33]=> int(33) [34]=> int(34) 
[35]=> int(35) [36]=> int(36) [37]=> int(37) [38]=> int(38) [39]=> int(39) [40]=> int(40) [41]=> int(41) [42]=> int(42) [43]=> int(43) [44]=> int(44) [45]=> int(45) [46]=> int(46) [47]=> int(47) [48]=> int(48) [49]=> int(49) [50]=> int(50) [51]=> int(51) [52]=> int(52) [53]=> int(53) [54]=> int(54) [55]=> int(55) [56]=> int(56) [57]=> int(57) [58]=> int(58) [59]=> int(59) [60]=> int(60) [61]=> int(61) [62]=> int(62) [63]=> int(63) [64]=> int(64) [65]=> int(65) [66]=> int(66) [67]=> int(67) [68]=> int(68) [69]=> int(69) [70]=> int(70) [71]=> int(71) [72]=> int(72) [73]=> int(73) [74]=> int(74) [75]=> int(75) [76]=> int(76) [77]=> int(77) [78]=> int(78) [79]=> int(79) [80]=> int(80) [81]=> int(81) [82]=> int(82) [83]=> int(83) [84]=> int(84) [85]=> int(85) [86]=> int(86) [87]=> int(87) [88]=> int(88) [89]=> int(89) [90]=> int(90) [91]=> int(91) [92]=> int(92) [93]=> int(93) [94]=> int(94) [95]=> int(95) [96]=> int(96) [97]=> int(97) [98]=> int(98) [99]=> int(99) } ### PHP Insertion Sort Algorithm Wiki Insertion sort is another common sorting algorithm. To explain how it works, try to imagine this. There’s a couple of cards on the table and you want to sort them in a correct order. You will pick a card from the table (based on any criteria, it doesn’t really matter, you can just pick the cards up randomly) and put it in your left hand at a correct position. To determine the correct position, you will compare it with every other card in your left hand starting from the right. This is a similar algorithm you might unconsciously use when playing cards with your friends. 
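The example below follows the card analogy literally and rebuilds the left hand on every insert. For contrast, here is a sketch of the textbook in-place variant, which shifts the larger cards to the right instead of rebuilding the array; the function name insertionSortInPlace is my own:

```php
<?php

function insertionSortInPlace(array $arr)
{
    for ($i = 1; $i < count($arr); ++$i) {
        $card = $arr[$i];
        $j = $i - 1;
        // Shift every card greater than the picked one a slot to the right
        while ($j >= 0 && $arr[$j] > $card) {
            $arr[$j+1] = $arr[$j];
            --$j;
        }
        // Drop the picked card into the gap that opened up
        $arr[$j+1] = $card;
    }
    return $arr;
}

var_dump(insertionSortInPlace(array(7, 3, 9, 6, 5, 1, 2, 0, 8, 4)));
```

Both variants do the same number of comparisons; the in-place version simply avoids allocating a new array for every inserted card.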
Example <?php   $table = array(7, 3, 9, 6, 5, 1, 2, 0, 8, 4); $leftHand = insertionSort($table); var_dump($leftHand);   function insertionSort(array $table) {     $leftHand = array();     foreach ($table as $card) {         if (0 === count($leftHand)) {             $leftHand[] = $card;         } else {             $insertedCard = false;             $reindexedLeftHand = array();             for ($i = count($leftHand)-1; $i >= 0; --$i) {                 if ($card >= $leftHand[$i]) {                     for ($j = 0; $j <= $i; ++$j) {                         $reindexedLeftHand[$j] = $leftHand[$j];                     }                     $reindexedLeftHand[] = $card;                     for ($j = $i+1; $j < count($leftHand); ++$j) {                         $reindexedLeftHand[$j+1] = $leftHand[$j];                     }                     $insertedCard = true;                     break;                 }             }             if (false === $insertedCard) {                 $reindexedLeftHand[] = $card;                 foreach ($leftHand as $cardInLeftHand) {                     $reindexedLeftHand[] = $cardInLeftHand;                 }             }             $leftHand = $reindexedLeftHand;         }     }     return $leftHand; } Will result in array(10) { [0]=> int(0) [1]=> int(1) [2]=> int(2) [3]=> int(3) [4]=> int(4) [5]=> int(5) [6]=> int(6) [7]=> int(7) [8]=> int(8) [9]=> int(9) } ### PHP Selection Sort Algorithm Wiki Selection sort is another common sorting algorithm. You iterate n times over your list of numbers where n is the number of items in the list. In every iteration you find the minimum value in the list and switch its position with the first item in the list. In every iteration you will be looking for a minimum value in a list with n – i length where n is the number of items in the list and i is the number of the current iteration. 
In other words, in the first iteration you will use the full list, in the second iteration you will use a list without the first item, in the third iteration you will use a list without the first two items, and so on. It’s the same thing as using array_unshift every time after you find a minimum and placing it at the beginning of the array. Example <?php   $arr = array(7, 3, 9, 6, 5, 1, 2, 0, 8, 4); $sortedArr = selectionSort($arr); var_dump($sortedArr);   function selectionSort(array $arr) {     for ($i = 0; $i < count($arr); ++$i) {         $min = null;         $minKey = null;         for ($j = $i; $j < count($arr); ++$j) {             if (null === $min || $arr[$j] < $min) {                 $minKey = $j;                 $min = $arr[$j];             }         }         $arr[$minKey] = $arr[$i];         $arr[$i] = $min;     }     return $arr; } Will result in array(10) { [0]=> int(0) [1]=> int(1) [2]=> int(2) [3]=> int(3) [4]=> int(4) [5]=> int(5) [6]=> int(6) [7]=> int(7) [8]=> int(8) [9]=> int(9) } ### PHP Shellsort Wiki Shellsort, named after Donald Shell, is basically a generalisation of simpler algorithms like Bubble sort or Insertion sort. It starts with elements far apart. The gap between elements is then decreased after every iteration. That means we start with small subarrays, and then with every iteration the subarrays get bigger as we decrease the gap between elements. I chose to start with a gap equal to half of the array length rounded down. After every iteration I divide the gap by two and round down again until zero is reached. 
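The Shellsort implementation below (like the Comb sort one earlier) walks the array collecting every gap-th element into a subarray before sorting it. Taken in isolation, that inner step looks roughly like this; the helper name gappedSubarray is my own:

```php
<?php

// Collect the elements at indices $start, $start+$gap, $start+2*$gap, ...
// This is the subarray that gets handed to the inner sort.
function gappedSubarray(array $arr, $start, $gap)
{
    $sub = array();
    for ($j = $start; $j < count($arr); $j += $gap) {
        $sub[] = $arr[$j];
    }
    return $sub;
}

// With a gap of 3, starting at index 1, we pick indices 1 and 4
var_dump(gappedSubarray(array(10, 20, 30, 40, 50, 60, 70), 1, 3));
```

The full implementation also remembers the indices it visited, so the sorted subarray can be written back to the same gapped positions.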
Example <?php $arr = array(); for ($i = 0; $i < 100; ++$i) { $arr[] = $i; } shuffle($arr); $sortedArr = shellSort($arr); var_dump($sortedArr); function shellSort(array $arr) { $gap = floor(count($arr)/2); while ($gap > 0) { for ($i = 0; $i < count($arr)-$gap; ++$i) { $arrWithGapsKeys = array(); $arrWithGaps = array(); $loop = true; $j = $i; while ($loop) { if (isset($arr[$j])) { $arrWithGapsKeys[] = (int)$j; $arrWithGaps[] = $arr[$j]; $j += $gap; } else { $loop = false; } } $arrWithGapsOrdered = insertionSort($arrWithGaps); foreach ($arrWithGapsKeys as $key) { $arr[$key] = current($arrWithGapsOrdered); next($arrWithGapsOrdered); } } $gap = floor($gap/2); } return $arr; } function insertionSort(array $table) { $leftHand = array(); foreach ($table as $card) { if (0 === count($leftHand)) { $leftHand[] = $card; } else { $insertedCard = false; $reindexedLeftHand = array(); for ($i = count($leftHand)-1; $i >= 0; --$i) { if ($card >= $leftHand[$i]) { for ($j = 0; $j <= $i; ++$j) { $reindexedLeftHand[$j] = $leftHand[$j]; } $reindexedLeftHand[] = $card; for ($j = $i+1; $j < count($leftHand); ++$j) { $reindexedLeftHand[$j+1] = $leftHand[$j]; } $insertedCard = true; break; } } if (false === $insertedCard) { $reindexedLeftHand[] = $card; foreach ($leftHand as $cardInLeftHand) { $reindexedLeftHand[] = $cardInLeftHand; } } $leftHand = $reindexedLeftHand; } } return $leftHand; } Will result in array(100) { [0]=> int(0) [1]=> int(1) [2]=> int(2) [3]=> int(3) [4]=> int(4) [5]=> int(5) [6]=> int(6) [7]=> int(7) [8]=> int(8) [9]=> int(9) [10]=> int(10) [11]=> int(11) [12]=> int(12) [13]=> int(13) [14]=> int(14) [15]=> int(15) [16]=> int(16) [17]=> int(17) [18]=> int(18) [19]=> int(19) [20]=> int(20) [21]=> int(21) [22]=> int(22) [23]=> int(23) [24]=> int(24) [25]=> int(25) [26]=> int(26) [27]=> int(27) [28]=> int(28) [29]=> int(29) [30]=> int(30) [31]=> int(31) [32]=> int(32) [33]=> int(33) [34]=> int(34) [35]=> int(35) [36]=> int(36) [37]=> int(37) [38]=> int(38) [39]=> int(39) 
[40]=> int(40) [41]=> int(41) [42]=> int(42) [43]=> int(43) [44]=> int(44) [45]=> int(45) [46]=> int(46) [47]=> int(47) [48]=> int(48) [49]=> int(49) [50]=> int(50) [51]=> int(51) [52]=> int(52) [53]=> int(53) [54]=> int(54) [55]=> int(55) [56]=> int(56) [57]=> int(57) [58]=> int(58) [59]=> int(59) [60]=> int(60) [61]=> int(61) [62]=> int(62) [63]=> int(63) [64]=> int(64) [65]=> int(65) [66]=> int(66) [67]=> int(67) [68]=> int(68) [69]=> int(69) [70]=> int(70) [71]=> int(71) [72]=> int(72) [73]=> int(73) [74]=> int(74) [75]=> int(75) [76]=> int(76) [77]=> int(77) [78]=> int(78) [79]=> int(79) [80]=> int(80) [81]=> int(81) [82]=> int(82) [83]=> int(83) [84]=> int(84) [85]=> int(85) [86]=> int(86) [87]=> int(87) [88]=> int(88) [89]=> int(89) [90]=> int(90) [91]=> int(91) [92]=> int(92) [93]=> int(93) [94]=> int(94) [95]=> int(95) [96]=> int(96) [97]=> int(97) [98]=> int(98) [99]=> int(99) } ### PHP Bubble Sort Algorithm Wiki Bubble sort is a basic algorithm for sorting sets of numbers. It is the one you will probably be confronted with at college. There are probably better sorting algorithms, but since this is the one you will most likely encounter, I have decided to write a simple implementation of it in PHP. The idea is simple. You iterate over an array (from the first number to the last but one) using a while loop until it's sorted. In every iteration you compare the current and the next number. If the current number is greater than the next number, switch them. That's the case when you want to sort the array in ascending order. Descending order is very similar: just change < to >. 
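Since the two orders differ only in the comparison, both can also be driven by a single function that takes a comparator callback, in the spirit of PHP's built-in usort. This is a sketch; bubbleSortBy and the callbacks are my own names, not part of the examples below:

```php
<?php

// Bubble sort driven by a comparator: a positive result from $cmp
// means the pair is out of order and gets swapped.
function bubbleSortBy(array $arr, $cmp)
{
    $sorted = false;
    while (false === $sorted) {
        $sorted = true;
        for ($i = 0; $i < count($arr)-1; ++$i) {
            if ($cmp($arr[$i], $arr[$i+1]) > 0) {
                $tmp = $arr[$i];
                $arr[$i] = $arr[$i+1];
                $arr[$i+1] = $tmp;
                $sorted = false;
            }
        }
    }
    return $arr;
}

$asc  = bubbleSortBy(array(7, 3, 9), function ($a, $b) { return $a - $b; });
$desc = bubbleSortBy(array(7, 3, 9), function ($a, $b) { return $b - $a; });
var_dump($asc, $desc);
```

Swapping the comparator flips the order without duplicating the sorting loop, which is exactly the trade the dedicated ascending and descending functions below avoid making for the sake of simplicity.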
Ascending order <?php $arr = array(7, 3, 9, 6, 5, 1, 2, 0, 8, 4); $sortedArr = bubbleSort($arr); var_dump($sortedArr); function bubbleSort(array $arr) { $sorted = false; while (false === $sorted) { $sorted = true; for ($i = 0; $i < count($arr)-1; ++$i) { $current = $arr[$i]; $next = $arr[$i+1]; if ($next < $current) { $arr[$i] = $next; $arr[$i+1] = $current; $sorted = false; } } } return $arr; } Will result in array(10) { [0]=> int(0) [1]=> int(1) [2]=> int(2) [3]=> int(3) [4]=> int(4) [5]=> int(5) [6]=> int(6) [7]=> int(7) [8]=> int(8) [9]=> int(9) } Descending order <?php $arr = array(7, 3, 9, 6, 5, 1, 2, 0, 8, 4); $sortedArr = bubbleSortDesc($arr); var_dump($sortedArr); function bubbleSortDesc(array $arr) { $sorted = false; while (false === $sorted) { $sorted = true; for ($i = 0; $i < count($arr)-1; ++$i) { $current = $arr[$i]; $next = $arr[$i+1]; if ($next > $current) { $arr[$i] = $next; $arr[$i+1] = $current; $sorted = false; } } } return $arr; } Will result in array(10) { [0]=> int(9) [1]=> int(8) [2]=> int(7) [3]=> int(6) [4]=> int(5) [5]=> int(4) [6]=> int(3) [7]=> int(2) [8]=> int(1) [9]=> int(0) } ### Disable Time Machine Local Backups and Sleep Image in OS X Mavericks Disable Time Machine Local Backups Time Machine in Mac OS X includes a feature that is sometimes useful and sometimes not: local backups. Called snapshots, these seem to be kicked off when your primary Mac is a laptop and the Time Machine backup is an external drive; OS X Lion compensates for the potentially unavailable external disk by keeping an additional backup locally on the Mac's primary hard drive. This has its obvious advantages, since you can restore from past Time Machine backups right away from anywhere, but if you're trying to conserve disk space this can be a real pain. Note: Time Machine local backups are only stored if you have Time Machine enabled in general. 
Disable Time Machine Local Backup Storage Here's how to turn off local backups: Launch Terminal from /Applications/Utilities and enter the following command: $ sudo tmutil disablelocal Enter the admin password when requested to disable local backups. Re-Enable Local Time Machine Backups Of course this tip would only be half useful if we didn't show you how to turn this back on. The steps are the same as above, except with the following command entered into the Terminal instead: $ sudo tmutil enablelocal Remember, if you disable this feature you will not have local backups anymore, so if something goes wrong you will be out of luck. It's important to have regular backups of your data, so if you're going to disable this be sure to still use Time Machine, perhaps even initiating a manual backup before you disconnect the drive, in order to preserve a recent copy of your data. Disable Sleep Image $ sudo pmset -a hibernatemode 0 Re-Enable Sleep Image $ sudo pmset -a hibernatemode 1 ### Welding & Metallurgy Courses Site ### Start Learning Laravel Wiki Laravel is a free, open source PHP web application framework, designed for the development of MVC web applications. Laravel is released under the MIT license, with its source code hosted on GitHub. Official Website laravel.com Chinese Website www.golaravel.com Video Tutorial TutsPlus - Laravel Essentials ( Powered by Baidu ) Download Key: 6v5h Zexo - Laravel Video Tutorial ( Powered by Baidu ) Download Key: mex2 Got a blank white screen after installing Laravel via composer install? Since upgrading to Laravel 4.1, I've had white screen "errors" (WSOD) when the application could not write to the log location. I've always solved this by making the app/storage directory writable by Apache (either group-writable by www-data or world-writable - that depends on your server setup). 
# Group Writable (Group, User Writable) $ sudo chmod -R gu+w app/storage # World-writable (Group, User, Other Writable) $ sudo chmod -R guo+w app/storage Global Installation of Composer $ curl -sS https://getcomposer.org/installer | php $ sudo mv composer.phar /usr/local/bin/composer And if you're even lazier, like me, you can create an alias instead of renaming the file: open the .bashrc file located in your home directory, go to the end of the file, and add the alias (this assumes the file is still named composer.phar and sits in /usr/local/bin): alias composer='/usr/local/bin/composer.phar' Save and close the file, then run source ~/.bashrc. This way you can invoke Composer with just composer. ### Resolve OS X composer install returned with an error - Mcrypt PHP extension required. When I ran composer install in Laravel, I got this error message: Mcrypt PHP extension required. Script php artisan clear-compiled handling the post-install-cmd event returned with an error [RuntimeException] Error Output: install [--prefer-source] [--prefer-dist] [--dry-run] [--dev] [--no-dev] [--no-plugins] [--no-custom-installers] [--no-scripts] [--no-progress] [-v|vv|vvv|--verbose] [-o|--optimize-autoloader] Upgrade PHP on OS X Use Homebrew $ brew update $ brew upgrade $ brew tap homebrew/dupes $ brew tap josegonzalez/homebrew-php $ brew install php55 or visit php-osx.liip.ch to use PHP 5.5/5.4/5.3 for OS X 10.6/10.7/10.8/10.9 as a binary package Install the Mcrypt PHP extension Use Homebrew $ brew install php55-mcrypt Restart the Apache Service $ sudo apachectl restart Okay, run it again. $ composer install Tips Edit the php.ini file in OS X $ sudo cp /private/etc/php.ini.default /private/etc/php.ini ### TimeFragment TimeFragment is a creative community. It includes five parts: Creative Club, Travel Club, E-commerce Section, Cool Jobs, and Timeline. This is one of my personal projects, developed using the Laravel framework, a MySQL database, WebSockets web IM, and a Beanstalk queue. 
### PHP - Website Online Compress Tool Download PHP - Website Online Packaging Tool Download and unzip it to get package.php, then use FTP to upload it into the same directory as the files to be compressed (typically the root of the site). Open the page in a browser, enter the password (the page prompts for the authentication password), select the files or directories (including subdirectories) to compress, and click the "Compress" button to start compression. After compression is complete, a download link for the automatically generated package file appears; click it to download. Screenshot When using this script, you must have write permission to the web root, otherwise an error occurs during compression. After use, I recommend removing the PHP script promptly to avoid malicious use. ### Sublime Text Remove Whitespace Characters at the End of the Line Open Preferences → Settings-User to open the user configuration and add the following line "trim_trailing_white_space_on_save": true, Trailing whitespace characters will then be removed automatically every time the file is saved. ### Remove the index.php in ThinkPHP For example, your original path is http://localhost/test/index.php/index/add, and you want it to look like this: http://localhost/test/index/add. Configure Apache Make sure the httpd.conf configuration file loads the mod_rewrite.so module by uncommenting: # LoadModule rewrite_module modules/mod_rewrite.so Modify AllowOverride None to AllowOverride All Configure ThinkPHP Make sure URL_MODEL is set to 2 in the project's configuration file: return Array( 'URL_MODEL' => '2', ); Modify .htaccess Add to the .htaccess file: RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ index.php/$1 [QSA,PT,L] ### MainStage - By Apple Description MainStage 3 lets you take your Mac to the stage with a full-screen interface optimized for live performance, flexible hardware control, and a massive collection of plug-ins and sounds that are fully compatible with Logic Pro X. 
The Ultimate Live Rig Perform live with more than 80 instrument, effect, and MIDI plug-ins or work with Audio Units plug-ins Add your sounds from Logic Pro X and GarageBand to bring your studio sound to the stage Combine instruments and live audio, such as keyboards and vocals, in a single Patch Switch between Patches without stopping sound output or cutting off held notes Design rich keyboard Patches using splits and layers, the Arpeggiator, and other MIDI plug-ins Perform with stereo or multitrack backing tracks using Playback 64-bit architecture uses all the RAM in your system for larger sampled instruments Hardware Control Connect your favorite hardware and use it to play and control your plug-ins Use USB and MIDI-equipped controllers like keyboards, foot pedals, and drum pads Instantly set up a large variety of hardware using automatic device recognition Quickly assign hardware knobs, buttons, and faders to onscreen controls Manage sophisticated rigs using the Assignments & Mapping overview Compatible with Logic Remote v1.0.3 Designed for the Stage See only what you need during your performance using a customizable, full-screen live interface Smart Controls offer dynamic performance controls that change with each Patch Easily view your Mac screen from a distance using a high-contrast, full-screen performance view Instrument Plug-ins Play deeply sampled, professionally mixed, and fully customizable drum kits provided by Drum Kit Designer Perform electronic beats with the Ultrabeat drum machine Play classic '70s- and '80s-style synthesizer sounds using Retro Synth Play faithful models of vintage keyboards with Vintage B3, Vintage Electric Piano, and Vintage Clav Perform inspired sounds using a collection of synths that provide analog, wavetable, FM, and physical modeling synthesis Play or create a wide variety of rich sampled instruments with the EXS24 Sampler MIDI Plug-ins Transform simple ideas into elaborate performances using nine MIDI plug-ins Instantly 
turn a simple chord into a rich performance with the Arpeggiator Play a single note to trigger predefined chords with Chord Trigger Perform a part and conform notes to a specific scale using Transposer Creative and Production Effects Play your sounds through realistic acoustic spaces using Space Designer convolution reverb Use a range of multi-tap, vintage tape, and stereo delays Add movement to your tracks with a variety of modulation effects Get the perfect mix with a collection of EQs, dynamics, and other mixing tools Guitar and Bass Gear Build your own guitar or bass rig with Amp Designer using vintage and modern amps, cabinets, and mics Design a custom Pedalboard from a collection of delay, distortion, and modulation stompboxes Access a full-screen Tuner to quickly get in tune Sound Library Over 1800 instrument and effect Patches More than 750 meticulously sampled instruments 4600 Apple Loops in modern urban and electronic genres Minimum System Requirements 4GB of RAM Display with 1280-by-768 resolution or higher OS X v10.9.5 or later Requires 64-bit Audio Units plug-ins 6GB available disk space for minimum install / 45GB hard disk for full Sound Library installation View on App Store Download Updated : Apr 07, 2016 Version : 3.2.4 Torrent : magnet:?xt=urn:btih:d0a2296139409af748236c8aca64dfb8a8c75527&dn=MainStage3.2.4&tr=http109.235.50.1662710announce Warning Just for learning exchanges, not for commercial, if you like this support and buy genuine, and delete download within 24 hours. ### Logic Pro X - By Apple Description Logic Pro X is the most advanced version of Logic ever. Sophisticated new tools for professional songwriting, editing, and mixing are built around a modern interface that’s designed to get creative results quickly and also deliver more power whenever it’s needed. Logic Pro X includes a massive collection of instruments, effects, and loops, providing a complete toolkit to create amazing-sounding music. 
Powerful New Interface Consolidate and control multiple tracks or create rich, layered instruments with Track Stacks Smart Controls allow you to manipulate multiple plug-ins and parameters with a single move An enhanced Mixer lets you move, copy, and bypass channel inserts more efficiently Quickly re-order song sections and try new ideas using Arrangement Markers Autosave keeps your work safe and sound See and edit more of your performances in the Score editor with a new wrapped view Professional Music Production Fix out-of-tune vocals and change the melodies of recorded audio with Flex Pitch Manipulate the timing and tempo of any recording with ease using Flex Time Record and seamlessly punch in and out of one or multiple tracks Keep takes organized with take folders and build comps quickly with Quick Swipe Comping Record automation for any channel strip or plug-in parameter on the fly Create and mix music from anywhere in the room using Logic Remote on iPad 64-bit architecture supports large projects containing hundreds of tracks and sampled instruments Drum Production Create drum tracks that play with your song using Drummer, a virtual session player Choose among 15 distinct drummers that can take your direction and perform millions of unique grooves Build your own kit with Drum Kit Designer using a diverse collection of deeply sampled, professionally mixed snares, toms, kicks, hi-hats, and cymbals Produce electronic drum sequences with Ultrabeat Keyboards and Synths Instantly turn a simple chord into a rich performance with the Arpeggiator Transform simple ideas into elaborate performances using nine MIDI plug-ins Create classic ’70s- and ’80s-style synthesizer tracks using Retro Synth Play faithful models of vintage keyboards with Vintage B3, Vintage Electric Piano, and Vintage Clav Get inspired by a collection of synths that provide analog, wavetable, FM, and physical modeling synthesis Play or create a wide variety of rich sampled instruments with the EXS24 
Sampler Guitar and Bass Gear Build your own guitar or bass rig with Amp Designer using vintage and modern amps, cabinets, and mics Design a custom Pedalboard from a collection of delay, distortion, and modulation stompboxes Access the Tuner with just a click to quickly get in tune Creative and Production Effects Play your sounds through realistic acoustic spaces using Space Designer convolution reverb Use a range of multi-tap, vintage tape, and stereo delays Add movement to your tracks with a variety of modulation effects Get the perfect mix with a variety of EQs, dynamics, and other mixing tools Put the finishing touch on your project using a collection of mastering plug-ins Sound Library Over 1500 instrument and effect Patches More than 800 sampled instruments 30 urban and electronic drum machine Patches 3600 Apple Loops in modern urban and electronic genres Compatibility Expand your instrument and effects library with third-party Audio Units-compatible plug-ins Import and export XML to support Final Cut Pro X workflows Export and share your songs directly to SoundCloud Open projects from Logic 5 or later Minimum System Requirements 4GB of RAM Display with 1280-by-768 resolution or higher OS X v10.10 or later Requires 64-bit Audio Units plug-ins 6GB available disk space for minimum install / 45GB hard disk for full Sound Library installation View on App Store Download Updated : Jun 30, 2016 Version : 10.2.4 Torrent : 
magnet:?xt=urn:btih:a3440cd71a6e485b64348ee95abd16e68409f994&dn=LogicProX10.2.4OSXdada&tr=udpinferno.demonoid.ooo3392announce&tr=udptracker.opentrackr.org1337announce&tr=udptracker.coppersurfer.tk80announce&tr=udptracker.leechers-paradise.org6969&tr=udptorrent.gresille.org80announce&tr=udpwww.eddie4.nl6969announce&tr=udpthetracker.org80announce&tr=httptracker.flashtorrents.org6969announce&tr=udp9.rarbg.com2710announce&tr=httpmgtracker.org2710announce&tr=httptracker1.wasabii.com.tw6969announce&tr=httptracker.tfile.meannounce&tr=httppow7.comannounce&tr=httptracker.aletorrenty.pl2710announce&tr=httpexplodie.org6969announce&tr=udptracker.blackunicorn.xyz6969announce&tr=httpretracker.2kom.ruannounce&tr=httpretracker.krs-ix.ruannounce&tr=httptracker4.infohash.organnounce&tr=httptracker.free-torrents.plannounce&tr=udptracker.piratepublic.com1337announce&tr=udp9.rarbg.to2710announce&tr=httptracker.tricitytorrents.com2710announce&tr=httptracker.pubt.net2710announce&tr=udp11.rarbg.comannounce&tr=udptracker.ccc.de80announce&tr=udpopen.demonii.com1337announce Warning Just for learning exchanges, not for commercial, if you like this support and buy genuine, and delete download within 24 hours. ### Compressor - By Apple Description Compressor adds power and flexibility to Final Cut Pro X export. Customize output settings, work faster with distributed encoding, and tap into a comprehensive set of delivery features. 
Powerful Encoding for Final Cut Pro Use Compressor to customize encoding settings available in Final Cut Pro Choose from a wide array of codecs, sizes, frame rates, and other parameters Save your custom settings in Compressor; they automatically appear in Final Cut Pro Share settings with other editors, even if Compressor isn't installed on their workstations Time-Saving Encoding Workflow Work fast using a single-window interface and preset Destinations for common encoding tasks Experiment freely with encoding options, using real-time feedback in the Preview window Find exactly the setting you need in just a few clicks in the updated settings library Set up batch processes to streamline the encoding of large numbers of files Build custom Destinations to combine encoding with tasks such as moving or copying files Create self-contained Droplets to encode on the desktop with drag-and-drop ease Industry-Standard Encoding Support Use one-step settings for Apple devices and websites such as Vimeo, YouTube, and Facebook Work with a choice of themed menus to encode and burn a DVD or Blu-ray disc Add chapter markers for discs or podcasts distributed on iPad, iPhone, iPod, or Apple TV Encode to a broad range of industry-standard formats, such as MPEG-2, H.264, and ProRes Generate files for HTTP live streaming in a single step Import Targa, DPX, TIFF, PSD, or PNG image sequences and encode them to any setting Tap into advanced encoding features for adding closed captioning, metadata, and more Pristine Format Conversions Convert any file from its source format to another format, such as NTSC to PAL or SD to HD Clean up and customize your content using image filters, a timecode overlay, and watermarks Speed up video, slow it down, or adjust the frame rate to make the duration match a runtime Distributed Encoding Save time by distributing encoding work among multiple cores and workstations Install Compressor on any Mac to activate it as a node for distributed encoding Speed up 
encoding in Final Cut Pro by choosing a group of shared computers for exporting System Requirements: OS X 10.11.4 or later, 4GB of RAM (8GB recommended for 4K and 3D titles), OpenCL-capable graphics card or Intel HD Graphics 3000 or later, 256MB of VRAM (1GB recommended for 4K and 3D titles), 931MB of disk space. Some features require Internet access; fees may apply. Blu-ray recorder required for burning Blu-ray discs. View on App Store Download Updated : Feb 20, 2016 Version : 4.2.2 Torrent : magnet:?xt=urn:btih:1f9325bc7aa9c046e65366ad37e362e8b81c48e1&dn=FinalCutPro10.2.3_Motion5.2.3_Compressor4.2.2&tr=udpinferno.demonoid.ooo3392announce&tr=udptracker.openbittorrent.com80announce&tr=udpcoppersurfer.tk6969announce&tr=udpglotorrents.pw6969announce&tr=udptracker.openbittorrent.com80announce&tr=httptracker.aletorrenty.pl2710announce&tr=udptracker.leechers-paradise.org6969&tr=udp9.rarbg.me2710announce&tr=httpretracker.localannounce&tr=udpopen.demonii.com1337announce Warning Just for learning exchanges, not for commercial, if you like this support and buy genuine, and delete download within 24 hours. ### Motion - By Apple Description Designed for video editors, Motion 5 lets you customize Final Cut Pro titles, transitions, and effects. Or create your own dazzling animations in 2D or 3D space, with real-time feedback as you work. 
Breakthrough Speed and Quality Dual-GPU optimizations for superior real-time playback performance and faster rendering on the new Mac Pro 64-bit architecture supports deeper, multilayered effects for use in Final Cut Pro ColorSync-managed color pipeline produces accurate, consistent color across applications Shared Render Engine provides consistent quality with Final Cut Pro and Compressor Rendering uses floating-point, linear-light color space for exceptionally realistic results Editor-Friendly Design Tools Work fast using a single-window interface with a familiar look and feel Customize Final Cut Pro titles, effects, and transitions by opening them in Motion Create Smart Motion Templates with simple controls for making changes as you edit Build stunning 2D and 3D compositions with more than 1900 Apple-designed elements Make adjustments to any parameter during playback and instantly see results Easy Animated Text and Titles Design dramatic text effects with complete control over position, opacity, rotation, and more Animate text instantly by choosing from more than 200 text behaviors such as Type On Build a credit roll by importing a text file; the Scroll behavior optimizes the speed for you Move, scale, or rotate a single character with the Adjust Glyph tool Ripple text one character, word, or line at a time with the Sequence Text behavior Stunning Effects Choose from more than 230 behaviors for natural-looking motion without programming Enhance your 2D or 3D animations with more than 130 filters using the FxPlug standard Track a moving object in a video clip with the point tracking and match move feature Create an accurate chroma key in a single step with the new Keying filter Apply the Linking behavior to make parameters for one object animate related objects Use SmoothCam to eliminate camera jitters, and image stabilization to smooth bumpy shots Use third-party FxPlug plug-ins with custom interfaces and dual-GPU support on the new Mac Pro Effortless 3D 
- Transition from 2D to 3D space by adding a camera or cameras to any 2D project
- Set up realistic shadows that animate dynamically with the movement of cameras and lights
- Turn any shape, video plane, or paint stroke into a reflective surface
- Highlight or de-emphasize various objects by defining a range of focus
- Track objects or groups of objects with the Camera Framing behavior

Quick, High-Quality Output
- Export projects to Apple devices and popular websites such as Vimeo and YouTube
- Choose to output the ProRes 4444 format for uncompressed quality at small file sizes

System Requirements: OS X 10.11.4 or later, 4GB of RAM (8GB recommended for 4K editing and 3D titles), OpenCL-capable graphics card or Intel HD Graphics 3000 or later, 256MB of VRAM (1GB recommended for 4K editing and 3D titles), 4.8GB of disk space (3.6GB app, 1.2GB supplemental content). Some features require Internet access; fees may apply. Blu-ray recorder required for burning Blu-ray discs.

View on App Store

Download
Updated: Feb 20, 2016
Version: 5.2.3
Torrent: magnet:?xt=urn:btih:1f9325bc7aa9c046e65366ad37e362e8b81c48e1&dn=FinalCutPro10.2.3_Motion5.2.3_Compressor4.2.2&tr=udpinferno.demonoid.ooo3392announce&tr=udptracker.openbittorrent.com80announce&tr=udpcoppersurfer.tk6969announce&tr=udpglotorrents.pw6969announce&tr=udptracker.openbittorrent.com80announce&tr=httptracker.aletorrenty.pl2710announce&tr=udptracker.leechers-paradise.org6969&tr=udp9.rarbg.me2710announce&tr=httpretracker.localannounce&tr=udpopen.demonii.com1337announce

Warning: for learning and exchange purposes only, not for commercial use. If you like this software, please support the developer and buy a genuine copy, and delete the download within 24 hours.

### Final Cut Pro X - By Apple

Description

Completely redesigned from the ground up, Final Cut Pro adds extraordinary speed, quality, and flexibility to every part of the post-production workflow.
Revolutionary Video Editing
- Assemble clips in the Magnetic Timeline without clip collisions or sync problems
- Use Clip Connections to attach B-roll, sound effects, and music to the timeline
- Reduce clutter by grouping clips into a Compound Clip
- Perfect your pacing right in the timeline with the Inline Precision Editor
- Cycle through different shots, graphics, or effects at one place in the timeline with Auditions
- Edit multi-camera projects with automatic sync and support for up to 64 camera angles

Powerful Media Organization
- Organize your work within libraries for efficient media management and collaboration
- Work natively with a broad range of formats including RED, AVCHD, H.264 from DSLRs, and more
- Content Auto-Analysis captures camera metadata and analyzes shots in the background
- Create and apply custom keywords on the fly as you select ranges in clips
- Smart Collections let you dynamically organize content and find any shot in a few clicks

Incredible Performance
- Final Cut Pro uses all the GPUs in your Mac for superior real-time playback performance and fast background rendering
- 64-bit architecture uses all the RAM in your system for larger projects and richer effects
- Background processing lets you keep working without interruption
- External monitoring lets you route video and audio through HDMI or third-party Thunderbolt and PCIe devices

Compelling, Customizable Effects
- Preview effects to see how they look with your footage before applying them
- Change the look of titles, transitions, and effects using intuitive controls
- Control effects with precision using a keyframe editor that appears directly in the timeline
- Use third-party FxPlug plug-ins with custom interfaces and dual-GPU support

Integrated Audio Editing
- Expand and edit multichannel audio files directly in the timeline
- Let Final Cut Pro repair significant audio problems such as hum, excessive noise, and more
- Sync DSLR video with separate audio in a single step, with instant audio waveform matching

Intuitive
Color Grading
- Improve the look of any clip with the single-click Balance Color feature
- Apply the Match Color feature to match the looks of two clips shot under different conditions
- Manipulate color, saturation, and exposure with the Color Board
- Fine-tune color for a specific color range or area of the screen using keying and masks

One-Step, Optimized Output
- Deliver projects for playback on Apple devices and websites such as Vimeo, YouTube, and Facebook
- Use themed menus to quickly author and burn a DVD or Blu-ray disc
- Export customized audio stems and multiple versions of a finished video using Roles metadata
- Import and export XML to support third-party workflows

System Requirements: OS X v10.11.4 or later, 4GB of RAM (8GB recommended for 4K editing and 3D titles), OpenCL-capable graphics card or Intel HD Graphics 3000 or later, 256MB of VRAM (1GB recommended for 4K and 3D titles), 4.15GB of disk space.

View on App Store

Download
Updated: Feb 20, 2016
Version: 10.2.3
Torrent: magnet:?xt=urn:btih:1f9325bc7aa9c046e65366ad37e362e8b81c48e1&dn=FinalCutPro10.2.3_Motion5.2.3_Compressor4.2.2&tr=udpinferno.demonoid.ooo3392announce&tr=udptracker.openbittorrent.com80announce&tr=udpcoppersurfer.tk6969announce&tr=udpglotorrents.pw6969announce&tr=udptracker.openbittorrent.com80announce&tr=httptracker.aletorrenty.pl2710announce&tr=udptracker.leechers-paradise.org6969&tr=udp9.rarbg.me2710announce&tr=httpretracker.localannounce&tr=udpopen.demonii.com1337announce

Warning: for learning and exchange purposes only, not for commercial use. If you like this software, please support the developer and buy a genuine copy, and delete the download within 24 hours.

### Add Social Links Buttons for Octopress

My Octopress theme is Slash, a clean one-column theme designed for Octopress that supports all the features of the classic theme and has Fancybox built in.
Slash
Author Website zespia.tw/Octopress-Theme-Slash

Install

$ cd octopress
$ git clone git://github.com/tommy351/Octopress-Theme-Slash.git .themes/slash
$ rake install['slash']
$ rake generate

Have problems installing with zsh? Try rake install\['slash'\] instead.

Adding Social Links Buttons

For example, to add a Sina Weibo link button, in octopress/_config.yml:

# Sina Weibo
weibo_user: your-ID
weibo_button: true

Add a Sina Weibo icon in octopress/source/images/social/, named weibo.png.

Add CSS code in octopress/source/stylesheets/screen.css:

#sub-nav .social a.weibo { background:url('/images/social/weibo.png?1392973048') center no-repeat #c83d20; border:1px solid #c83d20; }
#sub-nav .social a.weibo:hover { border:1px solid #c83d20 }

Add to octopress/source/includes/header.html:

{% if site.weibo_user %}
<a class="weibo" title="Weibo" href="http://weibo.com/{{ site.weibo_user }}" target="_blank"> Weibo </a>
{% endif %}

### MacBook Pro SSD Upgrade

Recently I purchased a Samsung Electronics 840 EVO-Series 120GB 2.5-Inch SATA III Internal Solid State Drive (MZ-7TE120BW) for about €68.08. The plan in this article is to install the new SSD in the bay where the original optical drive sits. Why install the SSD in the optical-drive bay instead of in place of the original hard drive? The reason is simple. The MacBook Pro's hard-drive bay has a Sudden Motion Sensor ("SMS") whose job is to park the drive heads in an instant if the notebook is accidentally dropped, preventing the heads from scratching the platters in a fall and effectively protecting your data. An SSD simply has no platters, so it is not afraid of a fall; the traditional mechanical drive is therefore best left in its original location, since the data on it matters most. This setup does have one drawback: with the SSD in the optical-drive bay, OS X does not support deep sleep for the system.
So-called deep sleep works like this: when the battery is completely depleted and the system shuts down automatically, the contents of memory are written to the hard drive; when power is reconnected, the data can be read from the disk back into memory and the machine wakes, returning the computer to the state it was in before the automatic shutdown. With the SSD in the optical-drive bay you will not be able to use this feature. I think this situation is rare, though, so losing the feature should not have much impact. If you use this setup, remember to run two commands in a terminal. The first disables the system's deep sleep mode:

$ sudo pmset -a hibernatemode 0

The second deletes the image file the system reserves for the memory snapshot; its size equals your memory capacity. My machine has 4GB of RAM, so deleting this file means 4GB more free space on the SSD:

$ sudo rm /var/vm/sleepimage

In fact, some manufacturers ship hard drives with a built-in drop sensor. The Seagate Momentus laptop series, for example, has a feature called G-Force Protection, which can likewise protect the data on your computer when it is accidentally dropped. If your original mechanical drive is one of these, you can install it directly in the optical-drive bay, and that is the recommended arrangement, because a drive's built-in drop protection is said to conflict with Apple's SMS feature. In that case, with the SSD installed in the hard-drive bay, you can still use deep sleep. If you have not upgraded to such a drive, however, I recommend keeping the stock MacBook Pro hard drive in the hard-drive bay (I have not heard that the stock MacBook Pro hard disk has built-in drop protection). Installing an SSD in the optical-drive bay also requires a special bracket; I used a Leicke Adapter Converter for MacBook/MacBook Pro/MacBook Air, about €8.24.

Hardware Installation

First, you need to prepare a small Phillips screwdriver, plus one even smaller.
I recommend purchasing the iFixit 54 Bit Driver Kit; with this kit, any MacBook Pro disassembly should pose no problem. After shutting down the MacBook Pro and disconnecting power, remove all the screws from the back cover and open it, so the inside of the computer is in full view. Then carefully remove the MacBook Pro's optical drive. First unplug the cables from their connectors on the motherboard (one for the hard disk, one for the optical drive; I recommend unplugging both), and then remove the screws fixing the optical drive. Note that you also need to remove some of the screws holding the speaker and take the speaker off in order to get the drive out successfully. Pull out the two data cables here. Remove the screws fixing the speaker, and one more on the right. Now the optical drive comes out easily. Remove the two small screws that attach the bracket. Next, put the SSD into the optical-drive bracket, install the whole assembly in the optical-drive bay, and connect the data cable. SSD Information. Benchmark. The hardware installation is basically done here. For software optimization, please refer to the article Optimize the SSD Installed Mac.

### Find the Beautiful Answer

search.xuri.me

Find the Beautiful Answer - few features, just simple and practical.

Basic Functions
- Combines the Baidu and Google search engines; you can switch which one searches your keyword.
- Google search uses Google's original server in Beijing and does not jump to Google Hong Kong, so searching is fast.
- Provides five commonly used search services: web pages, images, translation, Wikipedia, and maps.
- The page changes color at different times of day: five colors, five feelings.
- A small clock and friendly tips on the page.
- Supports two languages, Simplified Chinese and English.

Tips
- First click the option you want to search, then choose Baidu's or Google's search service.
- Press Enter for a Baidu search; press Ctrl+Enter for a Google search.
- Click the cross at the right of the input box to clear the search keyword.
- The page turns pink from 6:00 to 8:00, yellow from 8:00 to 12:00, blue from 12:00 to 17:00, orange from 17:00 to 7:00, and purple the rest of the time; the colors change with the day and the sky.
- Move the mouse over the logo and the :) changes into a smile.

### Markdown Example

Wiki

Markdown is probably the easiest markup format to get started using. The specific flavor of Markdown that Gouda uses is Pandoc-Markdown. Here's a quick example of some pandoc-markdown-formatted text: first as the source you'd put into your file, then rendered as HTML.

Title

Markdown provides two ways (Setext and Atx) to mark up titles. Setext mode underlines the title text: a line of = signs under the text gives Title 1, and a line of - signs gives Title 2. Atx mode prefixes the title with # signs: # Title 1, ## Title 2, and so on down to ###### Title 6.

Wrap

Use two or more spaces at the end of a line of text to indicate a line break.

Quote

A line beginning with > followed by a space marks a quoted paragraph; quotes can be nested multiple levels deep.

List

Unordered list items use *, + or - followed by a space, e.g. * Item 1. Ordered list items use a number followed by a period and a space, e.g. 1. Item 1.

Code Area

Inline code is wrapped in backticks `. A code block is indicated by indenting each line of the text with four spaces.

Emphasis

Markdown uses \* or \_ for emphasis: single asterisk = *italic*, single underscore = _italic_, double asterisk = **bold**, double underscore = __bold__.

Link

Markdown supports two styles of links: inline and reference.
Inline: the link text goes in square brackets, immediately followed by the URL in parentheses. If the link has a title attribute, put it in quotes after the URL inside the parentheses.

Reference: generally used when the same link appears in many different places. It has two parts: the calling part, [link text][ref], and the definition part, which may appear elsewhere in the text, in the format [ref]: http://some/link/address (optional title). Note that ref is not case sensitive.

This is an inline [Example](https://xuri.me/ "Optional Title").
This is a reference [Example][ref].
[ref]: https://xuri.me/

Result

This is an inline Example. This is a reference Example.

Picture

Images work essentially like links, with an exclamation mark added in front of the brackets. Note that Markdown cannot set the image size; if you need to, use the HTML <img> tag instead.

Others

Automatic links: wrap a URL or email address in angle brackets to create a link automatically, for example https://xuri.me.

Divider: a row of three or more *, -, or _ adds a horizontal rule; the characters may be separated by spaces, but the line cannot contain any other characters.

The escape character: Markdown's escape character is \. The following can be escaped: \\ backslash, \` backtick, \* asterisk, \_ underscore, \{\} braces, \[\] brackets, \(\) parentheses, \# pound sign, \+ plus, \- minus, \. period, \! exclamation point.

Markdown syntax considerably reduces the cost of editing, but while writing this article I also found that its support for some Chinese punctuation marks does not seem perfect, although these defects can be worked around by inserting HTML code directly (which is not pretty at all). Overall, being able to write offline from the command line is quite interesting; compared with online writing, it lets you focus much better.

### Optimize the SSD Installed Mac

Install Trim Enabler for Mac

Trim Enabler is the first and safest utility for enabling Trim in Mac OS X.
With the flip of a switch you can improve the speed and longevity of your Solid State Drive. Download it from the Cindori Software official website: cindori.org/trimenabler

OS X 10.10.4 Supports TRIM for Third-Party SSD Hard Drives

Called trimforce, the utility can be executed from the OS X terminal, and it requires a reboot to start working. TRIM is a system-level command that allows the operating system and the drive to communicate about which areas of the drive are considered unused and thus ready to be erased and rewritten to. In the absence of TRIM, users can see significantly slower drive writes as the drive begins to fill up. Most modern operating systems support TRIM, but Apple's OS X has only included support for its OEM SSDs. This means that Mac users looking to install an after-market SSD in a machine originally intended for spinning-disc hard drives would run into trouble without the help of third-party tools. To enable TRIM, a user just has to type sudo trimforce enable into the Terminal window. Ars Technica points out that running trimforce prompts a "scary" message from the system, but notes it's largely because each SSD implements TRIM in a different way, with older disks sometimes acting in a way OS X would not expect.

Turn off virtual memory on Mac

It is advisable to turn off the system's virtual memory (swap) to avoid frequent writes; turning off swap can also improve performance. View the virtual memory currently in use:

$ sysctl vm.swapusage
vm.swapusage: total = 512.00M used = 0.00M free = 512.00M (encrypted)

After turning off virtual memory, the swap files the system kept for it can be deleted.
$ sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist
$ sudo rm /private/var/vm/swapfile*

If the system turns out to be unstable, re-enable virtual memory:

$ sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.dynamic_pager.plist

Mac notebooks: About the Sudden Motion Sensor

If you install the SSD in the HDD bay, you can turn off the Sudden Motion Sensor. The sensor exists for the sake of mechanical hard drives: its purpose is to shut the drive down when the notebook detects vibration, protecting the data. For an SSD, this is not required.

To disable the Sudden Motion Sensor in Mac OS X v10.4 and later:

$ sudo pmset -a sms 0

To re-enable the Sudden Motion Sensor in Mac OS X 10.4 and later:

$ sudo pmset -a sms 1

### Reset OS X Launchpad Icon Cache

OS X applications appear as icons in Launchpad, which is very intuitive and easy to use. In very rare cases, however, these icons can end up in a confused state that is difficult to fix. Launchpad's data is stored in a SQLite database in the directory ~/Library/Application Support/Dock/. We can use Terminal.app (no root privileges needed) to execute commands that change Launchpad. Run the following command in the terminal:

Reset Icon

$ rm -f ~/Library/Application\ Support/Dock/*.db && killall Dock

Fix OS X Launchpad Pink Icon

To fix pink (some say red) Launchpad icons on AMD CPUs:

Download the attached app to Applications
Go to System Preferences → Accounts → Login Items
Add Launchpad Refresh.app there
Reboot & done

Download Launchpad Refresh.app

### Make a DSDT and SSDT

Wiki

The DSDT (Differentiated System Description Table) is the main table in the ACPI part of a computer's BIOS. The Advanced Configuration and Power Interface (ACPI) defines a large number of tables that provide the interface between an ACPI-compliant operating system and system firmware.
These allow description of system hardware in a platform-independent manner in ACPI Machine Language (AML). The problem is that OS X has an incomplete ACPI implementation which supports only a subset of DSDT. Modifying the DSDT allows the user to better support their hardware. For example, fixing Time Machine and the UUID 35 error is possible after modifying the DSDT. To patch your DSDT, you must either use a new table file that someone else has provided, or extract and modify your own. Then tell your bootloader to use the new DSDT file instead of the BIOS. On a few motherboards it is also possible to replace the BIOS with an updated BIOS containing a patched DSDT.

Preparations

Setting Up on Windows

Install AIDA64. AIDA64 is a system information, diagnostics, and auditing program that runs on Microsoft Windows operating systems. It displays detailed information on the components of a computer. Information can be saved to file in formats such as HTML, CSV, or XML. Download and install AIDA64 from the official website www.aida64.com

Open AIDA64 → Right Click → ACPI Tool → Save DSDT → save the DSDT as acpi_dsdt.bin
Save Table → save the SSDTs as acpi_ssdt-1.bin, acpi_ssdt-2.bin, ...
Video Debug → Video BIOS dump

Rename the files:
acpi_dsdt.bin → acpi_dsdt.aml
acpi_ssdt.bin → acpi_ssdt.aml
vgabios3_gpu1_10DE0A66.rom → 10DE_0A66.rom

Setting Up on OS X

Install DSDT Editor for Mac and Chameleon Wizard on OS X. Note that DSDT Editor requires a Java runtime. Open acpi_dsdt.aml with DSDT Editor for Mac, press F5 to compile, then click Fix Errors. There must be no Errors; Warnings and Remarks can be ignored for now. Once there are no errors, save the result as DSDT.aml, and create the SSDT the same way. Do the basic setup with Chameleon Wizard, and put DSDT.aml, SSDT.aml, and the graphics ROM into the /Extra folder.

Download Chameleon Wizard v4.3.2.zip DSDTEditor_Mac.zip

### Useful .htaccess Tips & Tricks

Wiki

Apache web servers have a great way to manipulate information using .htaccess files.
.htaccess (hypertext access) is the default name of a directory-level configuration file that allows for decentralized management of web server configuration. The .htaccess file is placed inside the web tree, and is able to override a subset of the server's global configuration; the extent of this subset is defined by the web server administrator. The original purpose of .htaccess was to allow per-directory access control (e.g. requiring a password to access the content), hence the name. Nowadays .htaccess can override many other configuration settings, mostly related to content control, e.g. content type and character set, CGI handlers, etc. Following are a few very useful .htaccess tricks.

Custom Directory Index Files

DirectoryIndex index.html index.php index.htm

You can change the default index file of a directory by using the above snippet in your .htaccess file. If a user requests /foo/, Apache will serve up /foo/index.html, or whatever file you specify.

Custom Error Pages

ErrorDocument 404 errors/404.html

You may want to redirect your users to an error page if any of the HTTP errors, such as 404, occurs. You can use the above snippet in an .htaccess file to map the 404 error to the error page errors/404.html. You may also want to use a common page for all the HTTP errors, as follows:

ErrorDocument 404 /psych/cgi-bin/error/error?404

Control access at file & directory level

.htaccess is most often used to restrict or deny access to individual files and folders. A typical example would be an "includes" folder. Your site's pages can call these included scripts all they like, but you don't want users accessing these files directly over the web. In that case you would drop an .htaccess file in the includes folder with content something like this:

# no one gets in here!
deny from all

which would deny ALL direct access to ANY files in that folder.
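A version note (an addition here, worth checking against your own server): deny/allow directives like the one above are Apache 2.2 syntax. On Apache 2.4, access control moved to mod_authz_core, and the equivalent of the snippet above is:

```apache
# Apache 2.4 syntax: deny all direct access to files in this folder
Require all denied
```

Apache 2.4 only accepts the old deny/allow style when mod_access_compat is loaded, so it is best not to mix the two styles in one configuration.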
You can be more specific with your conditions, for instance limiting access to a particular IP range, here's a handy top-level rule for a local test server. # no nasty crackers in here! order deny,allow deny from all allow from 192.168.0.0/24 # this would do the same thing.. #allow from 192.168.0 Generally these sorts of requests would bounce off your firewall anyway, but on a live server they become useful for filtering out undesirable IP blocks, known risks, lots of things. Sometimes, you will only want to ban one IP, perhaps some persistent robot that doesn't play by the rules. # someone else giving the ruskies a bad name.. order allow,deny deny from 83.222.23.219 allow from all Modifying the Environment Variable Environment variables contain information used by server-side includes and CGI. Set / Unset environment variables using SetEnv and UnSetEnv. SetEnv SITE_WEBMASTER "Administrators" SetEnv SITE_WEBMASTER_URI mailto:admin@example.com UnSetEnv REMOTE_ADDR 301 Redirect using htaccess If you want to redirect from an old document to new: Redirect 301 /old/file.html http://yourdomain.com/new/file.html Use following for redirecting Entire Directory. RedirectMatch 301 /blog(.*) http://yourdomain.com/$1 Implementing a Caching Scheme with .htaccess Cache the static files and improve your website's performance. (read this article: PHP, CSS, JS Compression for full implementation) # year <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|swf|mp3|mp4)$"> Header set Cache-Control "public" Header set Expires "Thu, 15 Apr 2010 20:00:00 GMT" Header unset Last-Modified </FilesMatch> # 2 hours <FilesMatch "\.(html|htm|xml|txt|xsl)$"> Header set Cache-Control "max-age=7200, must-revalidate" </FilesMatch> <FilesMatch "\.(js|css)$"> SetOutputFilter DEFLATE Header set Expires "Thu, 15 Apr 2010 20:00:00 GMT" </FilesMatch> Compress output using GZIP Add following snippet into your .htaccess file and compress all the css, js, html files with GZip compression. 
<IfModule mod_gzip.c>
mod_gzip_on Yes
mod_gzip_dechunk Yes
mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
mod_gzip_item_include handler ^cgi-script$
mod_gzip_item_include mime ^text/.*
mod_gzip_item_include mime ^application/x-javascript.*
mod_gzip_item_exclude mime ^image/.*
mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
</IfModule>

The above code works only if the mod_gzip module is enabled in your webserver. You may want to use the following snippet instead if your webserver provides mod_deflate support (note that <Location> requires a path argument; / is used here to cover the whole site):

<Location />
SetOutputFilter DEFLATE
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|gz2|sit|rar)$ no-gzip dont-vary
</Location>

If your webserver does not support mod_deflate, then you may want to use the following snippet:

<FilesMatch "\.(txt|html|htm|php)">
php_value output_handler ob_gzhandler
</FilesMatch>

Read these articles for more detail: Compressing PHP, CSS, JavaScript (JS).

Redirect browser to https (SSL)

Add the following snippet to your .htaccess to redirect the entire website to https:
RewriteEngine On
RewriteCond %{HTTPS} !on
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

Rewrite URLs using htaccess

Rewriting product.php?id=12 to product-12.html:

RewriteEngine on
RewriteRule ^product-([0-9]+)\.html$ product.php?id=$1

Rewriting product.php?id=12 to product/ipod-nano/12.html:

RewriteEngine on
RewriteRule ^product/([a-zA-Z0-9_-]+)/([0-9]+)\.html$ product.php?id=$2

Redirecting a non-www URL to the www URL:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^viralpatel\.net$
RewriteRule (.*) http://www.viralpatel.net/$1 [R=301,L]

Rewriting yoursite.com/user.php?username=xyz to yoursite.com/xyz:

RewriteEngine On
RewriteRule ^([a-zA-Z0-9_-]+)$ user.php?username=$1
RewriteRule ^([a-zA-Z0-9_-]+)/$ user.php?username=$1

Redirecting the domain to a new subfolder inside public_html:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^test\.com$ [OR]
RewriteCond %{HTTP_HOST} ^www\.test\.com$
RewriteCond %{REQUEST_URI} !^/new/
RewriteRule (.*) /new/$1

Prevent Directory Listing

Add either of the following snippets to avoid directory listing:

Options -Indexes

or

IndexIgnore *

Read this article for more details on denying/allowing directory listing.

Adding new MIME types

The type of a file depends on the filename extension. Unrecognized file extensions are treated as text data and corrupted on download.

AddType application/x-endnote-connection enz
AddType application/x-endnote-filter enf
AddType application/x-spss-savefile sav

Deny access to static file data

Denies any request for static files (images, css, etc) if the referrer is not the local site or empty.
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{REQUEST_URI} !^/(wp-login.php|wp-admin/|wp-content/plugins/|wp-includes/).* [NC]
RewriteCond %{HTTP_REFERER} !^http://www.askapache.com.*$ [NC]
RewriteRule \.(ico|pdf|flv|jpg|jpeg|mp3|mpg|mp4|mov|wav|wmv|png|gif|swf|css|js)$ - [F,NS,L]

Specify an upload file limit for PHP in .htaccess

php_value upload_max_filesize 20M
php_value post_max_size 20M
php_value max_execution_time 200
php_value max_input_time 200

The four parameters in the above .htaccess file raise the upload limits: the first is the maximum file size for uploading; the second is the maximum size of the POST data; the third is the maximum time in seconds a script is allowed to run before it is terminated by the parser; and the last is the maximum time in seconds a script is allowed to spend parsing input data such as file uploads and POST/GET data.

Disallow Script Execution

Options -ExecCGI
AddHandler cgi-script .php .pl .py .jsp .asp .htm .shtml .sh .cgi

Change Charset and Language headers

AddDefaultCharset UTF-8
DefaultLanguage en-US

Set Timezone of the Server (GMT)

SetEnv TZ America/Indianapolis

Force "File Save As" Prompt

AddType application/octet-stream .avi .mpg .mov .pdf .xls .mp4

Protecting a single file

Normally .htaccess applies to the entire directory. With the <Files> directive you can restrict it to specific files:

<Files quiz.html>
order deny,allow
deny from all
AuthType Basic
AuthName "Characterology Student Authcate"
AuthLDAP on
AuthLDAPServer ldap://directory.characterology.com/
AuthLDAPBase "ou=Student, o=Characterology University, c=au"
require valid-user
satisfy any
</Files>

Set Cookie using htaccess

Set a cookie with an environment variable:

Header set Set-Cookie "language=%{lang}e; path=/;" env=lang

Set a cookie based on the request. This code sends the Set-Cookie header to create a cookie on the client with the value of the item matched by the second set of parentheses:
RewriteEngine On
RewriteBase /
RewriteRule ^(.*)(de|es|fr|it|ja|ru|en)/$ - [co=lang:$2:.yourserver.com:7200:/]

Send Custom Headers

Header set P3P "policyref=\"http://www.askapache.com/w3c/p3p.xml\""
Header set X-Pingback "http://www.askapache.com/xmlrpc.php"
Header set Content-Language "en-US"
Header set Vary "Accept-Encoding"

Blocking requests based on the User-Agent header

SetEnvIfNoCase ^User-Agent$ .*(craftbot|download|extract|stripper|sucker|ninja|clshttp|webspider|leacher|collector|grabber|webpictures) HTTP_SAFE_BADBOT
SetEnvIfNoCase ^User-Agent$ .*(libwww-perl|aesop_com_spiderman) HTTP_SAFE_BADBOT
Deny from env=HTTP_SAFE_BADBOT

### Use Beyond Compare in OS X

Wiki

Beyond Compare is a file comparison utility. Aside from comparing files, the program is capable of doing side-by-side comparison of directories, FTP directories, and archives. It is available for Windows and Linux operating systems. Some time ago, Scooter Software released the long-awaited Mac beta version of Beyond Compare, completing its coverage of the Windows, Linux, and Mac platforms. If you are a Mac user, and a heavy git user at that, why not give it a try?

Official Website www.scootersoftware.com

Download Beyond Compare for Mac. Extract Beyond Compare and drag it to the Applications folder to install it. After installing Beyond Compare, the next step is to install its command-line tool. Open Beyond Compare and select "Install Command Line Tools" in the "Beyond Compare" drop-down menu; enter the system password in the pop-up box, and it will automatically install the command-line tool on the system. After installing the command-line tools, you can compare files in a terminal with:

$ bcomp a.txt b.txt

The following steps rely on its command-line tools.

Use in Git CLI

After installing Beyond Compare, the next step is to set it as the Git diff tool and merge tool.
Enter the following few commands in Terminal:

$ git config --global diff.tool bc3
$ git config --global difftool.prompt false
$ git config --global merge.tool bc3

These commands mean that when you run ...

$ git difftool

... or ...

$ git mergetool

... in Git, Git will automatically launch the Beyond Compare command-line tool you just installed to compare or merge files.

Use in Tower

Exit Tower and run the following command in a terminal:

$ cd ~/Library/Application\ Support/Tower

If there is no CompareScripts folder, use ...

$ sudo mkdir CompareScripts

... to create one. Download bcomp.sh and put it into the CompareScripts folder. Use ...

$ sudo chmod +x bcomp.sh

... to give it executable permissions, and download CompareTools.plist into the ~/Library/Application\ Support/Tower folder. Start Tower.

Use in SourceTree

Open SourceTree. Select "Preferences" in the SourceTree menu, then select the "Diff" tab. Under "External Diff / Merge", for "Visual Diff Tool" select "Other", enter /usr/local/bin/bcomp as the diff command, and enter $LOCAL $REMOTE in "arguments"; for "Merge Tool" select "Other", enter /usr/local/bin/bcomp as the merge command, and enter $LOCAL $REMOTE $BASE $MERGED in "arguments".

### Add tree Command in OS X Command Line

OS X does not ship with the tree command by default; the following command line produces an effect similar to tree ...

$ find . -print | sed -e 's;[^/]*/;|____;g;s;____|; |;g'

... like this .
|____extra
| |____httpd-autoindex.conf
| |____httpd-dav.conf
| |____httpd-default.conf
| |____httpd-info.conf
| |____httpd-languages.conf
| |____httpd-manual.conf
| |____httpd-mpm.conf
| |____httpd-multilang-errordoc.conf
| |____httpd-ssl.conf
| |____httpd-userdir.conf
| |____httpd-vhosts.conf
|____httpd.conf
|____magic
|____mime.types
|____original
| |____extra
| | |____httpd-autoindex.conf
| | |____httpd-dav.conf
| | |____httpd-default.conf
| | |____httpd-info.conf
| | |____httpd-languages.conf
| | |____httpd-manual.conf
| | |____httpd-mpm.conf
| | |____httpd-multilang-errordoc.conf
| | |____httpd-ssl.conf
| | |____httpd-userdir.conf
| | |____httpd-vhosts.conf
| |____httpd.conf
|____other
| |____php5.conf
|____users

Add alias to ~/.bash_profile

alias tree="find . -print | sed -e 's;[^/]*/;|____;g;s;____|; |;g'"

Install via Homebrew

To install the tree command with Homebrew, you first need Homebrew itself; see Install Homebrew on OS X.

Install tree:

$ brew install tree

Uninstall tree:

$ brew uninstall tree

### Install Homebrew on OS X

Wiki

Homebrew is a package management system that simplifies the installation of software on the Mac OS X operating system. It is a free/open source software project to simplify installation of other free/open source software. It is similar in aim and function to MacPorts and Fink. It was written by Max Howell and has gained popularity in the Ruby on Rails community and earned praise for its extensibility. In 2012, Homebrew had the largest number of new contributors on GitHub.
Official Website

brew.sh

Install Homebrew

$ sudo su
$ curl -L http://github.com/mxcl/homebrew/tarball/master | tar xz --strip 1 -C /usr/local

Update Homebrew

$ brew update

Uninstall Homebrew

$ cd `brew --prefix`
$ rm -rf Cellar
$ brew prune
$ rm `git ls-files`
$ rm -r Library/Homebrew Library/Aliases Library/Formula Library/Contributions
$ rm -rf .git
$ rm -rf ~/Library/Caches/Homebrew

If you have MacPorts installed and cannot install Homebrew, you must uninstall MacPorts first:

$ sudo port -f uninstall installed
$ sudo rm -rf \
  /opt/local \
  /Applications/DarwinPorts \
  /Applications/MacPorts \
  /Library/LaunchDaemons/org.macports.* \
  /Library/Receipts/DarwinPorts*.pkg \
  /Library/Receipts/MacPorts*.pkg \
  /Library/StartupItems/DarwinPortsStartup \
  /Library/Tcl/darwinports1.0 \
  /Library/Tcl/macports1.0 \
  ~/.macports

Fix Error: Failed to update tap: homebrew/dupes

$ brew untap homebrew/dupes
$ brew untap homebrew/versions
$ brew prune
$ brew tap homebrew/dupes
$ brew tap homebrew/versions

Starting and Stopping Background Services

launchctl loads and unloads services that start at login. In OS X, these services are represented by files ending with .plist (which stands for "property list"). These plists are usually stored in either ~/Library/LaunchAgents or /Library/LaunchAgents. You load them (i.e. tell them to start at login) with launchctl load $PATH_TO_LIST and unload them with launchctl unload $PATH_TO_LIST. Loading a plist tells the program it represents (e.g. redis) to start at login, while unloading it tells the program not to start at login.

This post-install message from Homebrew may look familiar:

To have launchd start mysql at login:
    ln -sfv /usr/local/opt/mysql/*.plist ~/Library/LaunchAgents
Then to load mysql now:
    launchctl load ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
Or, if you don't want/need launchctl, you can just run:
    mysql.server start

Doing all that takes too long, and I can never remember where Homebrew plists are.
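Spelled out, that manual route amounts to the following sketch. It runs against scratch directories so the commands are safe to try anywhere; on a real Mac the paths would be /usr/local/opt/mysql and ~/Library/LaunchAgents, and the final launchctl step exists only on macOS:

```shell
# Scratch stand-ins for the real Homebrew and launchd paths.
OPT="$(mktemp -d)/opt/mysql"
AGENTS="$(mktemp -d)/LaunchAgents"
mkdir -p "$OPT" "$AGENTS"
touch "$OPT/homebrew.mxcl.mysql.plist"   # stand-in for the formula's plist

# Step 1: symlink every plist the formula ships into LaunchAgents.
ln -sfv "$OPT"/*.plist "$AGENTS"/

# Step 2 (macOS only): tell launchd to start it at login.
# launchctl load "$AGENTS/homebrew.mxcl.mysql.plist"

ls -l "$AGENTS"
```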
Fortunately, Homebrew includes a lovely interface for managing this without using ln, launchctl, or knowing where plists are: brew services.

Install brew services by tapping homebrew/services (one time):

$ brew tap homebrew/services

Here's an example usage:

$ brew services start mysql
==> Successfully started `mysql` (label: homebrew.mxcl.mysql)

Behind the scenes, brew services start is doing everything in the post-install message above. First it runs ln -sfv ... for you. Then it runs launchctl load ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist. It Just Works.

We can easily restart a service:

$ brew services restart mysql
Stopping `mysql`... (might take a while)
==> Successfully stopped `mysql` (label: homebrew.mxcl.mysql)
==> Successfully started `mysql` (label: homebrew.mxcl.mysql)

See everything we've loaded:

postgresql started 305 /Users/xuri/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
memcached  started 288 /Users/xuri/Library/LaunchAgents/homebrew.mxcl.memcached.plist
mysql      started 299 /Users/xuri/Library/LaunchAgents/homebrew.mxcl.mysql.plist

Note that the list of services includes services you started with launchctl load, not just services you loaded with brew services.

Let's say we uninstalled MySQL and Homebrew didn't remove the plist for some reason (it usually removes it for you). There's a command for that too:

$ brew services cleanup
Removing unused plist /Users/gabe/Library/LaunchAgents/homebrew.mxcl.mysql.plist

So easy.

### Creating a GitHub Pages Blog With Octopress on OS X

Summary

Install Octopress
Setup GitHub Pages
Add remote repository
Review Git branches
Preview on development
Deploy to GitHub

Prerequisites

Install Ruby, Git
Have a GitHub account
Basic knowledge of Git and GitHub
Confidence with the command line

Install Git

Please reference the GitHub Help guide at http://help.github.com/mac-set-up-git/; it is relatively simple.
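Since Git is a hard prerequisite for everything that follows, it is worth a quick check that it is actually on the PATH before moving on (a trivial sketch):

```shell
# Verify the git prerequisite; print the version if present.
if command -v git >/dev/null 2>&1; then
  git --version
else
  echo "git not found - install it first (see the GitHub guide above)"
fi
```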
Install Ruby

Install RVM (Ruby Version Manager)

Install RVM (development version):

$ curl -sSL https://get.rvm.io | bash

Install RVM stable with Ruby:

$ curl -sSL https://get.rvm.io | bash -s stable --ruby

Next you need to configure the rvm shell function, otherwise you cannot use the rvm command directly in the terminal:

$ echo '[[ -s "$HOME/.rvm/scripts/rvm" ]] && . "$HOME/.rvm/scripts/rvm" ' >> ~/.bash_profile

This appends the RVM loader line to .bash_profile. Run

$ source ~/.bash_profile

or restart Terminal, then verify with:

$ type rvm | head -1
rvm is a function

That proves the rvm command is configured successfully. You can also run

$ echo $PATH

to verify. If you use Sublime Text for editing, you first need to configure the subl shortcut command on your $PATH, as follows:

$ sudo ln -s /Applications/Sublime\ Text.app/Contents/SharedSupport/bin/subl /usr/bin/subl

With that in place, the RVM configuration above can also be done by opening the file with

$ subl ~/.bash_profile

and adding the following line in the editor:

[[ -s "$HOME/.rvm/scripts/rvm" ]] && .
"$HOME/.rvm/scripts/rvm"

Install Ruby

$ rvm install 2.1.0
$ rvm use 2.1.0

or

$ rvm --default 2.1.0

Verify the settings:

$ ruby -v
ruby 2.1.0p0 (2013-12-25 revision 44422) [x86_64-darwin12.0]

Now check RubyGems, Ruby's package manager:

$ gem -v
2.2.1

If it is not the latest version, you can run:

$ gem update --system

Install Octopress

$ git clone git://github.com/imathis/octopress.git octopress
$ cd octopress
$ ruby --version

Install plug-ins:

$ gem install bundler
$ bundle install

Tip: keep Jekyll up to date. Jekyll is updated frequently, so to make sure what you see while debugging locally matches the GitHub Pages site, update it often; the recommended command is:

$ bundle update

Install the Octopress theme:

$ rake install

Modify the Configuration

Edit the configuration file _config.yml: set url, title, subtitle, author, etc., plus integrations such as Disqus comments, Google+, Twitter, and Facebook; they are all in this one file. Create a CNAME file in the source directory.

Create a Repository on GitHub

We're going to use GitHub Pages, and Octopress has some rake tasks to make this easier for you. Your blog will be hosted at http://username.github.io, so you need to create a new GitHub repository called username.github.io; GitHub Pages will use its master branch as the HTML source for your blog.

Configure the GitHub branch locally:

$ rake setup_github_pages

When the command prompts you for the GitHub URL, enter the Git address of the repository you just created.

Writing Articles

$ rake new_post["new article"]

This generates a *.markdown file in octopress/source/_posts; now you can edit the article.

Generate & Preview

$ rake generate
$ rake preview

This will generate the contents of your blog and allow you to preview it at http://localhost:4000.

Deploy

$ rake deploy

Finish. Now you have your site running on GitHub with Octopress.

Solve FAQ

Solving "rake deploy" rejected in Octopress: running

$ rake gen_deploy

I got this error:

!
[rejected] master -> master (non-fast-forward)

Run the following commands:

$ cd octopress/_deploy
$ git pull origin master
$ cd ..
$ rake deploy

That's it.

Reference

Creating a GitHub Pages Blog With Octopress on Ubuntu

### Vim Configuration - vimrc File

Vim (an acronym for Vi IMproved) is a text editor written by Bram Moolenaar and first released publicly in 1991. Based on the vi editor common to Unix-like systems, Vim is designed for use both from a command line interface and as a standalone application in a graphical user interface. Vim is free and open source software and is released under a license that includes some charityware clauses, encouraging users who enjoy the software to consider donating to children in Uganda. The license is compatible with the GNU General Public License. Although Vim was originally released for the Amiga, Vim has since been developed to be cross-platform, supporting many other platforms. In 2006, it was voted the most popular editor amongst Linux Journal readers.

The vimrc file contains optional runtime configuration settings to initialize Vim when it starts. On Unix based systems, the file is named .vimrc, while on Windows systems it is named _vimrc.

The ultimate Vim configuration: vimrc

Vi and Vim Syntax Highlighting on Fedora, CentOS, RedHat (RHEL)

Switch to the root user:

$ sudo -i

Install the Vim enhanced package:

$ yum install vim-enhanced

Point vi at vim (which also enables syntax highlighting when you type vi) with the following command:

$ alias vi=vim

### Img2icns - Create Icons for OS X

Img2icns is an application to create icons from images or images from icons, keeping them organized for future use. Creating an icon is as simple as dropping one or more images into Img2icns and choosing an export format.

Img2icns Official Website

www.img2icnsapp.com

Download Img2icns v1.2.2 & Keygen

### Create a DMG File on a Mac

One way to store or group files on a Mac is to create a Disk Image. A Disk Image is a file which has the properties of a separate hard drive or CD.
It has a size limit and options for encryption to keep your files safe and secure. This how-to will work with Mac OS X.

Step 1: Create a new folder and place the files you would like in your disk image into it. Right-click (or CTRL-click) the folder, select "Get Info", and note the size of its contents.

Step 2: Open Disk Utility (Applications → Utilities → Disk Utility).

Step 3: Click the "New Image" icon to create a new disk image. Enter a name for the image, and select a size adequate for the contents of the folder you created in Step 1. Set the encryption to "none" and the format to "read/write disk image" or "DVD/CD Master". Alternatively, if all the files you want to add to your image are in a single folder, you can also select "New Image → From folder...".

Step 4: Place the contents of the folder from Step 1 into the newly mounted disk image.

Step 5: Unmount the disk image by dragging its icon to the Trash. In the Finder window, you can also click the Eject symbol next to the mounted image.

Tips

You can encrypt your Disk Image with a password if you would like your files to be private. Simply select "AES-128" under the "Encryption" drop-down menu. After you click "Create", it will ask you to create a password for your files. If you add the password to your keychain, you will not have to enter your password to access the .dmg file when logged into your account.

The .dmg file can be a very easy way to send files from one Mac to another. Any Mac user can mount and access the Disk Image on their computer.

To create an image from a folder, drag the folder onto the Disk Utility icon, or choose "New → Disk image from folder" in the File menu in Disk Utility. After you add your files to the image, you can unmount it and then click the "Convert" button in the toolbar. This will let you compress the image, change it to read only, or add encryption.

When you double click a .dmg file, it will "Mount" on the desktop.
This is the only way to access or modify the contents of the image.

Warnings

Be careful with a damaged disk image: sometimes you'll have to reformat the disk. It happens; try FAT32 if you can, and back up your data first, because reformatting will delete everything on the disk.

### FastStone Capture - A Powerful Windows Screen Capture Tool

FastStone Capture is a powerful, flexible and intuitive screen-capture utility. It allows you to capture anything on the screen including windows, objects, full screen, rectangle regions, freehand-selected regions and scrolling windows/web pages. It has innovative features such as a floating Capture Panel, hotkeys, resizing, cropping, text annotation, printing, e-mailing, screen magnifier and many more.

Official Website

www.faststone.org/FSCaptureDetail.htm

Download FastStone Capture v5.3

### OS X Get File Checksum

Open Terminal (Applications → Utilities → Terminal), then run one of the following checks:

$ md5 /path/to/file
$ shasum /path/to/file

If the file path is complex, type shasum followed by a space, then drag and drop the file into the Terminal window:

$ shasum (drag and drop the file here)

The shasum command computes SHA-1 by default; the -a option switches it to 224, 256, 384, or 512.

If you are not comfortable with terminal commands, you can try checkSum+, a simple, efficient, and free hash calculator for Apple Mac OS X. With a single drag and drop it reports CRC32, MD5, MD4, MD2, MDC2, RMD160, SHA, SHA1, SHA-224, SHA-256, SHA-384, SHA-512, and more.

Download checkSum+

### Wiener Philharmoniker Vienna New Year’s Concert 2014

Conductor: Daniel Barenboim
Orchestra: Wiener Philharmoniker

Program

Eduard Strauss: Helena Quadrille, op. 14
Josef Strauss: Olive Branch Waltz, op. 207
Johann Strauss, sen.: Caroline Gallop, op. 21
Johann Strauss, Jr.: Egyptian March, op. 335
Seid umschlungen, Millionen. Walzer, op. 443
Tempestuous in Love and Dance, Fast Polka, op. 393

-- Pause --

Johann Strauss, Jr.
Overture to the Operetta "Waldmeister" Klipp-Klapp, Gallop, op. 466 Tales from the Vienna Woods, Waltz, op. 325 Josef Hellmesberger, Jr. Vielliebchen, Polka française, op. 1 Josef Strauss Bouquet Polka, Fast Polka, op. 188 Richard Strauss Moonlight Music from the Opera "Capriccio" Joseph Lanner The Romantics, Waltz, op. 167 Josef Strauss Teasing, Polka mazur, op. 262 Shenanigans, Fast Polka, op. 98 Leo Delibes Variation dansée (Pizzicati) from the Ballet “Sylvia” Josef Strauss Dynamiden, Waltz, op. 173 Without a Care, Fast Polka, op. 271 Download Daniel Barenboim-Wiener Philharmoniker-Vienna New Year’s Concert 2014.TS[1080P] ******************************************************************************* Vienna Philharmonic Orchestra New Year's Concert 2014 with Daniel Barenboim LIVE ! Satellite UPLINK FULL HD 1.1.2014 1080i 50 MPEG-2 HD 422 41Mbit/s 43.34GB ******************************************************************************* ------------------------------------------------------------------------------- General Information ------------------------------------------------------------------------------- Type................. Live Event (Live Concert) ------------------------------------------------------------------------------- Release Notes ------------------------------------------------------------------------------- General Satellite Information Satellite Sign : Eutelsat 7A (E7A, W3A, Eutelsat W3A) Satellite Position : 7°E Satellite Position 1.1 .2014 : 7.0003°E Satellite Name : Eutelsat 7A Norad : 28187 Cospar number : 2004-008A Operator : Eutelsat Communications S.A. 
Launch date : 15-Mar-2004 Launch site : Baikonur Cosmodrome Launch vehicle : Proton M Launch mass (kg) : 4300 Dry mass (kg) : 2000 Manufacturer : EADS Astrium Model (bus) : Eurostar-3000S Orbit : Geostationary orbit Expected lifetime : 15 Years Frequency : 10976 SymbolRate : 19200 Polarity : Horizontal Video PID : 512 Audio 1 PID : 4112 Audio 2 PID : 4128 DOLBY-E DVB-S2 : YES FEC : 4/3 Modulation : AUTO System Hardware : Spc TriChea™ Modular System Dish 2 meters Low Noise Block : Inverto BLACK ULTRA Quad High Gain Capture Device : TeVii s471 DVB-S2 PCIe, x16 Capture OS : Windows Server 2003 R2 SP2, Datacenter x64 Edition Capture Program : Spc TriChea™ Packet Capture 2.0 Format : MPEG-2 4 2 2 HD (1080i50) Format/Info : Moving Picture Experts Group Stream Type : Program Sector Size : 2048 Packets Per Block : 1 Video Buffer Size : 232 KB bit rate total : 41.40 Mbps Video Format : MPEG-2 4 2 2 Format profile : High Level 4.0 Codec ID : MPEG-2 Codec ID/Hint : MPEG-2 Bit rate : 38,900 Mbps Width : 1920 pixels Height : 1080 pixels Display aspect ratio : 16/9 Mode : Interlaced Frame rate : 25 fps Resolution : 32 bits DC Coefficient : 12 bit I-Frames : 15 B-Frames : 2 Color Primaries : ITU-R Rec. 624-4 System B,G Transfer : ITU-R Rec. 624-4 System B,G Matrix Coefficients : ITU-R Rec. 624-4 System B,G Colorimetry : 4 2 2 Audio 1 Format : MPEG1 Layer2 Format version : MP2 Format profile : MPG Bit rate mode : Constant Bit rate : 384 Kbps Minimum bit rate : 384 Kbps bit rate per channel : 192 kbps Channel(s) : 2 (DUAL CHANNEL) Sampling rate : 48.0 KHz Resolution : 16 bits ------------------------------------------------------------------------------- ### MacBook Pro MC374 LL/A Detailed item info Product Information The MacBook Pro MC374LL/A boasts a 13.3-inch LED-backlit display that offers wide-angle viewing with exceptional clarity and detail. 
Powered by the Intel Core 2 Duo processor, this Apple 13.3-inch laptop delivers high-speed performance, great media-management, and enhanced multi-tasking operations. This Apple notebook comes equipped with a NVIDIA GeForce graphics card that accelerates the graphical performance of videos and games. The high speed RAM built in this Apple 13.3-inch laptop facilitates speedy performance and fast data access. The spacious hard drive in this Apple notebook allows you to store maximum amount of data inside. You can also take advantage of the built-in iSight camera in the MacBook Pro MC374LL/A to engage in video calling. Product Identifiers Manufacturer Apple Product Family MacBook Pro Model ID MacBookPro7,1 Model Number A1278 EAN 885909358724, 885909358878 Processor Processor Type Intel Core 2 Duo Processor Speed 2.40 GHz Number of Cores Dual-Core Motherboard Bus Speed 1066 MHz Display Display Technology LED Display Size 13.3" Display Max. Resolution 1280 x 800 Technical Features Security Features Kensington Lock Slot Expansion Ports 2 x USB 2.0, DVI (Digital Video Interface), FireWire 800, HDMI, RJ-45 (NIC), VGA out Expandability 1 x MMC Card Slot, 1 x SD Card Slot Special Features Integrated Video Camera, MagSafe Power Adapter Port Hard Drive Hard Drive Capacity 250 GB Hard Drive Rotation Speed 5400 RPM Storage Controller Type Serial ATA Memory Installed RAM 4 GB RAM Technology DDR3 SDRAM Max Supported RAM 8 GB RAM Speed 1066 MHz Installed Cache Memory 3072 KB CD / DVD Optical Drive Type DVD±R DL/DVD±RW/CD-RW Optical Drive Read Speed 24x (CD), 6x (DVD+R DL), 6x (DVD-R DL), 8x (DVD+R), 8x (DVD-R) Optical Drive Write Speed 24x (CD), 4x (DVD+R Dual Layer), 4x (DVD-R Dual Layer), 8x (DVD+R), 8x (DVD-R) Optical Drive ReWrite Speed 10x (CD-RW), 4x (DVD+RW), 4x (DVD-RW) Video Graphic Processor NVIDIA GeForce 9400M, NVIDIA GeForce GT 320M Installed Video Memory 256 MB Audio Audio Output Type Headphones, Line Out, Optical Digital, Optical Digital Output, Speaker(s) Audio 
Input Type Integrated Microphone, Line In (3.5mm), Microphone, Optical Digital In
Networking Networking Type Bluetooth, Integrated 10/100/1000 Network Card, Integrated Wireless LAN
Data Link Protocol AirPort Extreme, Bluetooth, Ethernet, Fast Ethernet, Gigabit Ethernet, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n
Battery Battery Run Time Up to 10 Hours
Battery Technology Lithium polymer
Dimensions Depth 8.94 in. Height 0.95 in. Width 12.78 in. Weight 4.5 lb.
Miscellaneous Release Date April, 2010
Input Method Keyboard, Remote Control, Trackpad
Exterior Color Aluminum

### Configure IIS to Support .htaccess

Installation

Download and install ISAPI_Rewrite 3.

Configuration

Open IIS Manager, right-click the site you want to enable URL rewriting for, and choose Properties. On the ISAPI Filters tab, click Add and fill in a filter name such as "Rewrite". Under "Executable", click "Browse", select the rewrite.dll module from the location where it was just installed, and click OK. Grant the Network Service account permissions on the wwwroot directory, then click OK to finish adding the filter. Finally, restart the IIS service.

### Download .NET Framework Offline Installers from Microsoft Server

As many of the new Microsoft Windows applications are developed using .NET Framework, it is becoming mandatory to install .NET Framework on our personal computers. Microsoft provides the .NET Framework installer as a free download available to everyone, but it is an online installer. That means we initially download a small stub installer, and when installation starts it connects to Microsoft's servers, downloads the required components, and completes the installation. The online installer works great as long as you always have internet connectivity, but many of us need an offline installer: to install the software on computers not connected to the internet, to save bandwidth when installing on many computers, and so on.
I gathered a set of links on Microsoft servers from where we can download the offline installers of .NET Framework 4.0, 3.5, .NET Framework 3.5 Service Pack 1, .NET Framework 3.0 and .NET Framework 2.0.

.NET Framework 4.5 Setup: Download 32 bit & 64 bit versions
.NET Framework 4.0 Setup: Download 32 bit & 64 bit versions
.NET Framework 3.5 Setup: Download 32 bit & 64 bit versions
.NET Framework 3.5 Service Pack 1 Setup: Download .NET 3.5 Service Pack 1
.NET Framework 3.0 Setup: Download 32 bit version, Download 64 bit version
.NET Framework 2.0 Setup: Download 32 bit version, Download 64 bit version
.NET Framework Client Profile Offline Installer: Download .NET Client Profile Offline Installer

### Install PHP on IIS 6 for Windows Server 2003

One of the most popular development platforms on the web is PHP, which powers many popular applications and sites such as Facebook, WordPress and Joomla. While most of these systems are "designed" to be used on a Linux system running the Apache Web Server, you can deploy PHP applications via IIS 6 on your Windows Server 2003 system.

Configuring PHP

In order for Windows to run PHP code, the PHP binary files need to be copied to your system. No installation is required; however, some configuration must be done in order for PHP to run properly. The first step is to download the PHP Windows binaries and extract them (i.e. to C:\PHP). For IIS 6, the non-thread-safe binaries should be used. Download PHP from the official PHP website.

Note: a non-thread-safe build of PHP is recommended when using IIS. The non-thread-safe builds are available at » PHP for Windows: Binaries and Sources Releases. If you're using Microsoft Windows Server 2003 x86/x64, download a PHP 5.4.x version.

Copy the php.ini-production file from the extracted files and paste it into the Windows directory. In the Windows directory, rename this file to php.ini. Open the php.ini file in Notepad and configure it as needed.
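To make "configure as needed" concrete, here is roughly what the relevant entries look like once the IIS-specific edits described next are applied. The extension path and time zone are illustrative examples, not prescriptions:

```ini
; FastCGI-related php.ini settings for PHP under IIS 6
cgi.force_redirect = 0
fastcgi.impersonate = 1
extension_dir = "C:\PHP\ext"          ; wherever PHP was extracted
date.timezone = "America/New_York"    ; pick your server's time zone
```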
Out of the box, the production configuration we copied is preconfigured for what the PHP team feels is good for a production server. There are a few changes you will need to make to configure PHP for your IIS 6 system:

Uncomment and set the key cgi.force_redirect = 0.
Uncomment the key fastcgi.impersonate = 1.
Uncomment and set the key extension_dir to the 'ext' folder in the path PHP was extracted to (i.e. C:\PHP\ext).
Set the key date.timezone to the time zone of your server (the URL on the line above this key lists the accepted values).

At this point, your Windows system can run PHP scripts from the command line using the php.exe tool.

Installing the IIS 6 FastCGI Extension

In order for Internet Information Services (IIS) 6 to execute and serve PHP scripts, some additional configuration is required. In this guide, we will install PHP on IIS using the FastCGI interface, which provides a good balance of stability and performance. Since FastCGI is not included with IIS 6 by default, we need to download and install it. The installer needs .NET Framework 2.0 support; see Download .NET Framework Offline Installers from Microsoft Server. If the Microsoft Web Platform Installer doesn't work normally, we can download FastCGI for IIS directly at the bottom of the official Microsoft IIS site.

The FastCGI installer does not require any configuration; however, once the FastCGI extension is installed we need to configure it to run PHP. Open the file C:\WINDOWS\system32\inetsrv\fcgiext.ini in Notepad, find the [Types] section, and enter the configuration information:

[Types]
php=PHP

[PHP]
ExePath=C:\PHP\php-cgi.exe
InstanceMaxRequests=5000
EnvironmentVars=PHP_MAX_REQUESTS:5000

Revise these values as needed according to your environment, but it is important that all of the settings have the correct values. Once you are finished, save this file.

Configuring IIS to Run PHP via FastCGI

With both PHP and FastCGI installed and configured, all that is left is to set up IIS 6.
When you open the IIS Management Console, under Web Service Extensions make sure the FastCGI Handler is set to Allowed. Next, go to the Properties settings for the Web Sites group. On the Home Directory tab, click the Configuration button. Add a mapping for files with the .php file extension with the executable set to the FastCGI extension DLL. Once you have applied all the changes, restart IIS:

iisreset /restart

Testing PHP

At this point, your server is ready to go, but just to be sure we can confirm your PHP setup through IIS pretty easily. Create a text file in the directory C:\Inetpub\wwwroot named phpinfo.php which simply contains the line:

<?php phpinfo(); ?>

Finally, browse to the address http://localhost/phpinfo.php on your server and you should see the PHP information page. If the page loads successfully, PHP is now up and running on your machine.

Conclusion

Once you have PHP up and running on your Windows system, you can take advantage of the multitude of PHP based applications available as well as develop and deploy your own.

Download PHP Windows Binaries (non thread safe)
IIS 6 FastCGI Extension

Reference

PHP: Microsoft IIS 5.1 and IIS 6.0

### Search Engine Optimization Related Books

When Search Meets Web Usability

Publication Date: April 4, 2009 | ISBN-10: 0321605896 | ISBN-13: 978-0321605894 | Edition: 1

This book delivers a proactive approach to building an effective Web site that is search engine friendly and will result in better search rankings. It outlines the steps needed to bridge the gap between a Google search and a Web site, and also improve the users' experience once they get to the site. By understanding the wide variety of information-seeking strategies and the individual behaviors associated with them, this book helps information architects, Web designers/developers, SEOs/SEMs, and usability professionals build better interfaces and functionality into Web sites.
Creating a satisfying user experience is the key to maximizing search effectiveness and getting conversions.

### Server Management Related Books

鸟哥的Linux私房菜 基础学习篇 (Vbird's Linux Private Kitchen: Basics)

Publication Date: July, 2010 | ISBN: 9787115226266 | Edition: 3

About the author: Vbird (鸟哥) holds a PhD from the Department of Environmental Engineering at National Cheng Kung University in Taiwan. He came into contact with the Linux operating system through research needs during his studies, and built the "Vbird's Linux Private Kitchen" website out of the need to hand over his lab's systems. His personal fondness for "rambling" shaped the writing style of the site's articles. He has taught at several well-known Linux training centers in Taiwan, and currently teaches Linux courses in the Department of Information Communication at Kun Shan University. His specialties are the Linux operating system, website planning and maintenance, and network security; his interests are writing and reading, and his research direction is the integration of network applications.

About the book: This is the latest edition of the best-known Linux introductory book, a comprehensive and detailed introduction to the Linux operating system in five parts. Part 1 explains the origins and features of Linux and how to plan and install a Linux host. Part 2 covers the Linux file system and the management of files, directories, and disks. Part 3 introduces the text-mode interface, the shell, shell scripts (the administrator's good helper), and the use of the vi and vim text editors. Part 4 covers the management of Linux accounts, which is vital for system security, along with host and process management: viewing processes, task allocation, and job control. Part 5 covers the system administrator's (root's) duties, such as understanding the system's running state, system services, analyzing log files, backing up the system, and managing the kernel. The content is rich and comprehensive, basic concepts are explained carefully and accessibly, and every feature and command comes with plenty of worked examples and detailed analysis. It is an excellent first book for Linux beginners.

Ubuntu权威指南 (The Definitive Guide to Ubuntu)

Publication Date: January, 2010 | ISBN-13: 978-7115212672

This book first introduces the installation of the Ubuntu Linux system and the GNOME desktop environment, then starts from the basic command line and, step by step, explains the fundamental concepts and principles of the Linux system, with a large number of practical examples. On this basis it discusses in depth the various Linux file systems, shell programming, process management, software management, disk-space management, user management, the boot process, job scheduling and system logs, TCP/IP network management and applications, OpenSSH, DNS, NFS, Samba, the Apache server, and the MySQL database. Rich in content and fluently written, it covers the main topics of the Linux system and can serve as a reference for learning, using, administering, and maintaining Ubuntu Linux, or as a primary reference for learning the Linux operating system.

Ubuntu Server 最佳方案 (Ubuntu Server: Best Solutions)

Publication Date: June, 2009 | ISBN-13: 978-7121087769

This is the first Chinese book to cover Ubuntu Linux Server comprehensively, the distillation of more than ten years of the author's hands-on experience; from conception to publication it took nearly three years. Whether you are a beginner or a Linux expert, you can benefit from it. In terms of content, the book touches on almost all common applications of a Linux server, and "best solutions" is its distinguishing feature: for each application, rather than talking in generalities, the author recommends a single best solution, from selecting server hardware to getting started with Linux, from LAMP to Tomcat, from proxy servers to mail servers, from DHCP to DNS, from Samba to virtualization, from load balancing to remote control, from RAID to LVM to system security. Whether you are a student or a working Linux administrator, at a small company or a large enterprise, you can apply the book's contents directly and set up your own best Linux server in the shortest possible time. With this book you have the best practical solutions of a senior Linux administrator and developer with ten years of experience, like having an expert on site with you all the way.

Cacti 0.8 Network Monitoring

Author: Dinangkur Kundu, S. M.
Ibrahim Lavlu

Publication Date: July 2009 | ISBN: 1847195962 | ISBN 13: 9781847195968

Install and set up Cacti to monitor your network, and assign permissions to this setup, in no time at all. Create, edit, test, and host a graph template to customize your output graph. Create new data input methods, SNMP, and Script XML data queries. Full of screenshots and step-by-step instructions to monitor your network with Cacti.

鸟哥的Linux私房菜:服务器架设篇(第3版) (Vbird's Linux Private Kitchen: Server Setup, 3rd Edition)

Publish: 1st July, 2012 | Pages: 804 | ISBN: 7111384997, 9787111384991 | ASIN: B008AEI8A2

This book is divided into four parts. Part 1, preparation before building servers, covers the basic knowledge you must have before setting up any server; after reading it, connecting your Linux host to the Internet by any method will not be a problem. Part 2, simple security measures for your host, explains how to protect your host and develop good operating habits, so that it stays resilient and safely weathers one test after another. Part 3, building common LAN servers, introduces the services frequently used on internal networks: remote connection services (SSH, VNC, XRDP), network parameter services (DHCP, NTP), network disk services (Samba, NFS, iSCSI), and proxy servers; the SSH key system in particular is extremely helpful for off-site backups, and you should not miss it. Part 4, building common Internet servers, introduces DNS, WWW, FTP, Mail Server, and other common services.

### PHP and MySQL Related Books

PHP for the Web: Visual QuickStart Guide

Publication Date: December 22, 2008 | ISBN-10: 0321442490 | ISBN-13: 978-0321442499 | Edition: 3

With PHP for the World Wide Web, Third Edition: Visual QuickStart Guide, readers can start from the beginning to get a tour of the programming language, or look up specific tasks to learn just what they need to know. This task-based visual reference guide uses step-by-step instructions and plenty of screenshots to teach beginning and intermediate users this popular open-source scripting language. Leading technology author Larry Ullman guides readers through the new features in PHP 6, focusing primarily on improved support for handling any language in a Web site. Other addressed changes include removal of outdated and insecure features, plus new functions and more efficient ways to tackle common needs.
Both beginning users, who want a thorough introduction to the technology, and more intermediate users, who are looking for a convenient reference, will find what they need here, in straightforward language and through readily accessible examples.

Beginning PHP and MySQL: From Novice to Professional

Publication Date: March 28, 2008 | ISBN-10: 1590598628 | ISBN-13: 978-1590598627 | Edition: 3

Beginning PHP and MySQL: From Novice to Professional, Third Edition offers a comprehensive introduction to two of the most prominent open-source technologies on the planet: the PHP scripting language and the MySQL database server. Updated to introduce the features found in MySQL's most significant release to date, readers will learn how to take advantage of the features of both technologies to build powerful, manageable, and stable web applications. Essentially three books in one, readers will not only profit from extensive introductions to the core features of each technology, but also learn how to effectively integrate the two in order to build robust data-driven applications. Packed with practical examples and insight into the real-world challenges faced by developers based on author W. Jason Gilmore's seven years of expertise working with these technologies, readers will repeatedly return to this book as both a valuable instructional tool and reference guide.

MySQL (Developer's Library)

Publication Date: August 29, 2008 | Edition: 4

The unexpected pleasure of reading books about databases is that they are often written by authors with highly organized minds. Paul DuBois and his editors at New Riders have assembled MySQL with a clarity and lucidity that inspires confidence in the subject matter: a (nearly) freely redistributable SQL-interpreting database client/server primarily geared for Unix systems but maintained for Windows platforms as well. What isn't "free" about MySQL (the application) is its server's commercial use; all clients and noncommercial server use are free.
DuBois's tome isn't free either, but its list price is modest in light of its value and the value of its namesake. The volume is superbly organized into 12 chapters and 10 appendices and contains a concise table of contents and a comprehensive 50-page index. It is peppered with references to the online HTML documentation that comes with the source and binary distributions (which are available and easy to install in stable rpm and tar releases.) The first third of MySQL is an excellent instruction tool for database newbies; the second third is a detailed reference for MySQL developers; and the last third consists of clearly annotated appendices, including C, Perl (but not Python), and PHP interfaces. Perhaps as an indication of the collective will of the developers of MySQL, DuBois does not separate Windows 95/98/NT design or development specifics from its main discussions. Platform-independent design is a goal, not a reality, and users will have to rely on newsgroups and mailing lists for details. Moreover, security issues are addressed in a mere 18 pages, a large part of which is devoted to standard Unix file and network-access permissions. Next to nothing is mentioned about defense against common hacking strategies, the use of secure shell interfaces, or access encryption. Although it is nearly 800 pages in length, DuBois's book is thankfully not encyclopedic. It is a valuable précis of the MySQL database, and its easy-to-skim look and feel will make it an excellent browse for database experts who want to know what is and is not possible within MySQL, the application. --Peter Leopold --This text refers to an out of print or unavailable edition of this title. Pro PHP and jQuery By Jason Lengstorf | ISBN13: 978-1-4302-2847-9 | User Level: Intermediate to Advanced | Publication Date: June 21, 2010 This book is for intermediate programmers interested in building Ajax web applications using jQuery and PHP. 
Along with teaching some advanced PHP techniques, it will show you how to take your dynamic applications to the next level by adding a JavaScript layer with jQuery. Learn to utilize built-in PHP functions to build calendar tools. Learn how jQuery can be used for Ajax, animation, client-side validation, and more. What you’ll learn How to use PHP to build a calendar application that allows users to post, view, edit, and delete events How to use jQuery to allow the calendar app to be viewed and edited without requiring page refreshes using built-in Ajax functions The power and versatility of PHP's object-oriented programming style The various forms of security available and how to best apply them jQuery plug-in development patterns to create modular, reusable jQuery plug-ins. The basics of jQuery effects, including fading elements, generating HTML markup on the fly, and creating modal windows Who this book is for This book is intended for programmers who want to bridge the gap between front- and back-end programming. It does not cover HTML or CSS except where absolutely necessary, and it focuses on taking intermediate PHP developers to the next level while getting comfortable with Ajax and the power of jQuery. Pro PHP Application Performance: Tuning PHP Web Projects for Maximum Performance Publication Date: December 28, 2010 | ISBN-10: 1430228989 | ISBN-13: 978-1430228981 | Edition: 1 Pro PHP Application Performance will help you understand all the technologies and components which play a role in how well your applications run. When seconds can mean the difference between retaining a user and losing a user, it's important for all of us to have optimization as part of our project roadmap. But what components within your application should you analyze? How should you optimize? And how can you measure how well your application is performing? These are some of the questions that are answered in this book. Along the way you will also learn the "why" of optimizing. 
You’ll discover why you should optimize a specific component, why selecting one function over another is beneficial, and how to find and use the optimization tools available to the open source community. You’ll also learn how to deploy caching software as well as web server software. Pro PHP Application Performance will also teach you more advanced techniques, such as: Using Xdebug to profile functions that are not running as efficiently as possible. Comparing opcode executed by different PHP functions to narrow the search for functions that run efficiently. Using strace to analyze Apache while your application is served to the user. Once you complete this book you will have a complete sense of where to start optimizing and, most importantly, have the tools which allow you to continue optimizing in other PHP applications going forward. What you’ll learn How to improve the performance of existing PHP applications How to maximize the performance of existing operating systems and databases How to refactor existing PHP code for better performance How to identify and rectify bottlenecks in PHP code How to take advantage of variable caching to increase performance Strategies for selecting server hardware for best performance Who this book is for This book will appeal to established PHP developers who wish to write new, complex applications or want to rearchitect existing ones. Developers of other languages such as Ruby, Python, and VB.NET will also have an interest in the techniques for tuning the performance of web servers, databases, and operating systems. 
Table of Contents Benchmarking Techniques Improving Client Download and Rendering Performance PHP Code Optimization Opcode Caching Variable Caching Choosing the Right Web Server Web Server and Delivery Optimization Database Optimization Installing Apache, MySQL, PHP, and PECL on Windows Installing Apache, MySQL, PHP, and PECL on Linux PHP Objects, Patterns and Practice Publication Date: June 6, 2010 | ISBN-10: 143022925X | ISBN-13: 978-1430229254 | Edition: 3 This book takes you beyond the PHP basics to the enterprise development practices used by professional programmers. Updated for PHP 5.3 with new sections on closures, namespaces, and continuous integration, this edition will teach you about object features such as abstract classes, reflection, interfaces, and error handling. You’ll also discover object tools to help you learn more about your classes, objects, and methods. Then you’ll move into design patterns and the principles that make patterns powerful. You’ll learn both classic design patterns and enterprise and database patterns with easy-to-follow examples. Finally, you’ll discover how to put it all into practice to help turn great code into successful projects. You’ll learn how to manage multiple developers with Subversion, and how to build and install using Phing and PEAR. You’ll also learn strategies for automated testing and building, including continuous integration. Taken together, these three elements—object fundamentals, design principles, and best practices—will help you develop elegant and rock-solid systems. What you’ll learn Learn to work with object fundamentals: writing classes and methods, instantiating objects, and creating powerful class hierarchies using inheritance. Master advanced object-oriented features, including static methods and properties. Learn how to manage error conditions with exceptions, and create abstract classes and interfaces. 
Understand and use design principles to deploy objects and classes effectively in your projects. Learn about design patterns, their purpose and structure, and the underlying principles that govern them. Discover a set of powerful patterns that you can deploy in your own projects. Learn about the tools and practices that can guarantee a successful project including unit testing; version control; build, installation, and package management; and continuous integration. Who this book is for This book is suitable for anyone with at least a basic knowledge of PHP who wants to use its object-oriented features in their projects. Those who already know their interfaces from their abstracts may well still find it hard to use these features in their projects. These users will benefit from the book’s emphasis on design. They will learn how to choose and combine the participants of a system, how to read design patterns, and how to use them in their code. Finally, this book is for PHP coders who want to learn about the practices and tools (version control, testing, continuous integration, etc.) that can make projects safe, elegant, and stable. Table of Contents PHP: Design and Management PHP and Objects Object Basics Advanced Features Object Tools Objects and Design What Are Design Patterns? Why Use Them? Some Pattern Principles Generating Objects Patterns for Flexible Object Programming Performing and Representing Tasks Enterprise Patterns Database Patterns Good (and Bad) Practice An Introduction to PEAR and Pyrus Generating Documentation with phpDocumentor Version Control with Subversion Testing with PHPUnit Automated Build with Phing Continuous Integration Objects, Patterns, Practice High Performance MySQL, 3rd Edition Optimization, Backups, and Replication By Baron Schwartz, Peter Zaitsev, Vadim Tkachenko | Publisher: O'Reilly Media | Final Release Date: March 2012 | Pages: 828 How can you bring out MySQL's full power? 
With High Performance MySQL, you'll learn advanced techniques for everything from designing schemas, indexes, and queries to tuning your MySQL server, operating system, and hardware to their fullest potential. This guide also teaches you safe and practical ways to scale applications through replication, load balancing, high availability, and failover. Updated to reflect recent advances in MySQL and InnoDB performance, features, and tools, this third edition not only offers specific examples of how MySQL works, it also teaches you why this system works as it does, with illustrative stories and case studies that demonstrate MySQL's principles in action. With this book, you'll learn how to think in MySQL. Learn the effects of new features in MySQL 5.5, including stored procedures, partitioned databases, triggers, and views Implement improvements in replication, high availability, and clustering Achieve high performance when running MySQL in the cloud Optimize advanced querying features, such as full-text searches Take advantage of modern multi-core CPUs and solid-state disks Explore backup and recovery strategies—including new tools for hot online backups ### Editors Related Books Learning the vi and Vim Editors Publication Date: July 22, 2008 | ISBN-10: 059652983X | ISBN-13: 978-0596529833 | Edition: 7 There's nothing that hard-core Unix and Linux users are more fanatical about than their text editor. Editors are the subject of adoration and worship, or of scorn and ridicule, depending upon whether the topic of discussion is your editor or someone else's. vi has been the standard editor for close to 30 years. Popular on Unix and Linux, it has a growing following on Windows systems, too. Most experienced system administrators cite vi as their tool of choice. And since 1986, this book has been the guide for vi. However, Unix systems are not what they were 30 years ago, and neither is this book. 
While retaining all the valuable features of previous editions, the 7th edition of Learning the vi and vim Editors has been expanded to include detailed information on vim, the leading vi clone. vim is the default version of vi on most Linux systems and on Mac OS X, and is available for many other operating systems too. With this guide, you learn text editing basics and advanced tools for both editors, such as multi-window editing, how to write both interactive macros and scripts to extend the editor, and power tools for programmers -- all in the easy-to-follow style that has made this book a classic. ### Information Industry Related Books 浪潮之巅 (On Top of Tides) Publication Date: August, 2011 | ISBN-13: 978-7121139512 Over the past century and more, a handful of companies have, consciously or not, been lucky enough to ride the crest of each technological wave. For a decade or so each, they embodied the tide of technology, until the next wave arrived. Counting from a hundred years ago, AT&T, IBM, Apple, Intel, Microsoft, Cisco, Yahoo, and Google were each, in turn, fortunate enough to be pushed to the crest. Though they come from different fields, and some of them have declined or are declining, all of them were once extraordinarily successful. This book systematically examines the fundamental reasons these companies succeeded, along with more than a century of the technology industry's development. In today's highly commercialized world, technological progress is inseparable from business opportunity, so the book also covers the venture capital firms that influence these technology waves, such as KPCB and Sequoia Capital, and the investment banks that have backed technology for a century, such as Goldman Sachs. Behind the rise and fall of these companies lie inevitable patterns; the book not only recounts the history of the technology industry, but focuses on revealing its underlying laws. ### Install OS X Mavericks 10.9 on DELL Inspiron 5520 OS X Mavericks (version 10.9) is the tenth major release of OS X, Apple Inc.’s desktop and server operating system for Macintosh computers. OS X Mavericks was announced on June 10, 2013, at WWDC 2013, and was released on October 22, 2013, as a free update through the Mac App Store. The update places emphasis on battery life, Finder enhancements, other enhancements for power users, and continued iCloud integration, as well as bringing more of Apple’s iOS apps to the OS X platform. This release marks the beginning of a change in the naming scheme of OS X, departing from the use of big cats and moving to names based on places in California. Following the new naming scheme, the current version of the operating system is named Mavericks, after the surfing location in California. Apple Inc. 
Official Website www.apple.com Related Links Install Mac OS X Mavericks in VMware Virtual Machine Laptop Basic Parameters Released : September 2012 Motherboard Chipset : Intel HM67 CPU : Intel Core i5-3210M (Ivy Bridge) Memory : 4GB DDR3 1600 MHz (Maximum 32GB) Hard Disk : 750GB 5400 RPM, SATA Drive type : Built-in DVD (supports DVD SuperMulti burning) Display : 15.6 inches 1366×768 (16:9) Graphics Chip : Intel HD Graphics 4000 and AMD Radeon HD 7670M (Not Working) Audio Chip : Conexant High-Definition CX20561 Wired Network Adapter : Realtek RTL8139/810x Fast Ethernet Adapter TouchPad : Elantech TouchPad V3 Multimedia Equipment : Integrated Camera Note that the wireless network card does not work after the installation; the wired network is available, and you can use a USB adapter to connect to wireless Internet. The AMD Radeon HD 7670M graphics chip does not work either. Preparations Download OS X Install Mavericks 13A603.torrent.zip Chameleon-2.2svn-r2255 1604-Kext Wizard.zip Download MultiBeast Mavericks Edition from tonymacx86.com Extra v2.0.zip Extra for DELL Inspiron 5520 v2.0.zip OS X Mavericks Single System Installation Before installation you need an existing Mac or Hackintosh running Mac OS X to run some programs. You could use a real Mac or an existing Hackintosh if you own one, or install one in a virtual machine. Divide the hard disk into two or more partitions: one partition to restore the installation cdr image (10GB), another as the system installation partition (20GB+). Before installation, confirm that the hard disk uses an MBR partition table, then use Disk Utility to restore the cdr file to the 10GB partition ... ... install Chameleon-2.2svn-r2255 on the 10GB partition. Download Extra v2.0.zip and unzip it, then copy the Extra folder to the 10GB partition, replacing the folder of the same name. Copy Extra for DELL Inspiron 5520 v2.0.zip, 1604-Kext Wizard.zip and MultiBeast Mavericks Edition.zip to the 10GB partition without unpacking them. 
Well, now you’ve created a bootable installation disk. Install OS X Mavericks 10.9 Restart your computer, enter the BIOS by pressing F2, confirm that the HDD mode is AHCI, and use legacy boot mode. At boot, select the 10GB partition to enter the installation interface. Install Driver After installation, select the system partition to start OS X Mavericks; after entering the operating system ... ... unzip the downloaded Extra for DELL Inspiron 5520 v2.0.zip and replace the folder of the same name on the system partition. Install the Sound Card Driver Extract MultiBeast Mavericks Edition.zip, open MultiBeast, and in the Driver tab check Universal → VoodooHDA v2.8.4 ... ... then in the Build tab, click Install. After installing, extract 1604-Kext Wizard.zip, use Kext Wizard to repair disk permissions, and restart your computer. Installation is complete! At boot time press Enter to select the system. Configuration Chameleon Startup Parameters -f This forces rebuilding of the extensions cache -s Boots into a single-user command line mode -v Verbose, boots in text mode showing all debug info and errors -x Boots into safe mode Set the Default Boot Partition Open /Extra/org.chameleon.Boot.plist, add the following configuration <key>Default Partition</key> <string>hd(0,2)</string> Hidden Partition <key>Hide Partition</key> <string>hd(0,1) hd(0,3)</string> Automatically Enter the System <key>Quiet Boot</key> <string>Yes</string> Solve FAQ Solve Unable to sign in because of a problem communicating with iCloud. Remove /Library/Preferences/SystemConfiguration/NetworkInterfaces.plist Open /Extra/org.chameleon.Boot.plist, copy <key>device-properties</key> <string>5e0000000100000001000000520000000100000002010c00d041030a000000000101060000027fff04002c0000004100410050004c002c00690067002d0070006c006100740066006f0072006d002d006900640000000800000003006601</string> ... to /Library/Preferences/SystemConfiguration/com.apple.Boot.plist Restart your computer. 
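The three plist fragments above all live in the same file; for clarity, here is a minimal sketch of a complete org.chameleon.Boot.plist combining them. The hd(0,N) partition numbers are the examples from this guide and must match your own layout; the file is written to the current directory here, while on a real system it belongs in /Extra/.

```shell
# Write a minimal org.chameleon.Boot.plist combining the three options above.
cat > org.chameleon.Boot.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Default Partition</key>
	<string>hd(0,2)</string>
	<key>Hide Partition</key>
	<string>hd(0,1) hd(0,3)</string>
	<key>Quiet Boot</key>
	<string>Yes</string>
</dict>
</plist>
EOF
grep -c '<key>' org.chameleon.Boot.plist   # prints 3
```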
### Install Mac OS X Lion Theme on Ubuntu 12.04 1. Install New Apple Logo in Launcher Enter the following commands to install the Apple logo $ wget -O apple-logo.zip http://goo.gl/NZyKc $ sudo unzip apple-logo.zip -d /usr/share/unity/5/ When asked whether to replace files, type A and press Enter. If you want the Ubuntu logo back, enter the following commands in a terminal $ wget -O ubuntu-logo.zip http://goo.gl/mU42p $ sudo unzip ubuntu-logo.zip -d /usr/share/unity/5/ When asked whether to replace files, type A and press Enter. 2. Install Mac OS X Lion Theme and Icons on Ubuntu Press Ctrl+Alt+T to open a terminal and copy the following commands into it $ sudo add-apt-repository ppa:noobslab/themes $ sudo apt-get update $ sudo apt-get install mac-os-lion-icons-v2 $ sudo apt-get install mac-os-lion-theme-v2 3. Install Mac Cursors Copy the following commands into the terminal $ wget -O mac-cursors.zip http://goo.gl/eYtho $ sudo unzip mac-cursors.zip -d /usr/share/icons/; rm mac-cursors.zip $ cd /usr/share/icons/mac-cursors $ sudo chmod +x install-mac-cursors.sh uninstall-mac-cursors.sh $ sudo ./install-mac-cursors.sh To Uninstall the Cursors $ cd /usr/share/icons/mac-cursors $ sudo ./uninstall-mac-cursors.sh 4. Install Ubuntu Tweak to change Themes, Icons, Cursors ### Install KVM on Ubuntu Server 12.04 LTS and Create a Virtual Machine KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor-specific module, kvm-intel.ko or kvm-amd.ko. KVM also requires a modified QEMU, although work is underway to get the required changes upstream. Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc. 
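The kernel modules described above can be checked for on any Linux host; a quick sketch (an empty module list simply means KVM is not active on that machine):

```shell
# /proc/modules lists loaded kernel modules; look for kvm, kvm_intel, or kvm_amd.
grep -E '^kvm' /proc/modules || echo "KVM modules not loaded"
```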
Official Website www.linux-kvm.org Environment Host Operating System : Microsoft Windows 8.1 Professional With Media Center 64-bit VMware Virtual Machine Guest Operating System : Ubuntu Server 12.04.2 LTS 64-bit KVM Guest Operating System : CentOS 6.4 64-bit 1. Install Ubuntu Server Install Ubuntu Server 12.04 in a VMware virtual machine; the installation process is straightforward, so I won't detail it here. I chose NAT mode for the virtual network. In the Virtual Machine Settings dialog box, enable the virtualization engine options (Virtualize Intel VT-x/EPT or AMD-V/RVI and Virtualize CPU performance counters). Use the following command to check whether the CPU supports virtualization $ sudo egrep '(vmx|svm)' --color=always /proc/cpuinfo flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc up arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt aes xsave avx f16c rdrand hypervisor lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi ept vpid fsgsbase smep If nothing is displayed, the CPU does not support virtualization and the experiment can't continue. 2. Install KVM Now install KVM and the vm-builder script. This script is used to create Ubuntu-based virtual machines. $ sudo apt-get install ubuntu-virt-server python-vm-builder kvm-ipxe virtinst libcap2-bin 3. Add the current user to the kvm and libvirtd groups $ sudo adduser `id -un` libvirtd $ sudo adduser `id -un` kvm Log out and log in again. Run the following command to check whether KVM was installed successfully $ sudo virsh -c qemu:///system list Id Name State ---------------------------------- 3. Setting Up the Network The following network settings ensure that other hosts on the network can access the KVM-managed virtual machines. You may need to install bridge-utils; my test machine already had it from the operating system installation. 
If you have not installed it, run the following command $ sudo apt-get install bridge-utils Edit the /etc/network/interfaces file, changing ... # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet dhcp ... to ... # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface (disable) #auto eth0 #iface eth0 inet dhcp # The primary network interface for kvm auto eth0 iface eth0 inet manual auto br0 iface br0 inet static address 192.168.40.134 network 192.168.40.0 netmask 255.255.255.0 broadcast 192.168.40.255 gateway 192.168.40.1 dns-nameservers 8.8.8.8 8.8.4.4 bridge_ports eth0 bridge_fd 9 bridge_hello 2 bridge_maxage 12 bridge_stp off Restart the network $ sudo /etc/init.d/networking restart Then check the network configuration $ ifconfig br0 Link encap:Ethernet HWaddr 00:0c:29:ba:a9:0a inet addr:192.168.40.134 Bcast:192.168.40.255 Mask:255.255.255.0 inet6 addr: fe80::20c:29ff:feba:a90a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:85 errors:0 dropped:0 overruns:0 frame:0 TX packets:90 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:7122 (7.1 KB) TX bytes:15530 (15.5 KB) eth0 Link encap:Ethernet HWaddr 00:0c:29:ba:a9:0a UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1530 errors:0 dropped:0 overruns:0 frame:0 TX packets:1134 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:146559 (146.5 KB) TX bytes:227280 (227.2 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:40 errors:0 dropped:0 overruns:0 frame:0 
TX packets:40 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:3280 (3.2 KB) TX bytes:3280 (3.2 KB) virbr0 Link encap:Ethernet HWaddr 16:5d:91:f5:d2:71 inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Restart the computer $ sudo reboot 4. Fix a Bug After completing the steps above, there is a known issue: a user logged into the host via SSH loses the connection after a while, and SSH logins fail until the host machine's network service is restarted. Solution: run the following command; it automatically adds the missing route rules $ sudo dhclient br0 & 5. Configuring qemu-kvm $ sudo setcap cap_net_admin=ei /usr/bin/qemu-system-x86_64 Configure qemu-kvm to inherit the CAP_NET_ADMIN capability by editing the capability.conf file $ sudo vi /etc/security/capability.conf Add the following line cap_net_admin username Open the /etc/libvirt/qemu.conf file and check that vnc_listen="0.0.0.0" is enabled; if it is not, uncomment it. 6. Install Guest OS Before creating a virtual machine, you need to create a disk image to act as the virtual machine's hard disk. The following command generates a 60GB centos.img in qcow2 format (note the G suffix; without it the size is interpreted in bytes). $ sudo qemu-img create -f qcow2 /opt/centos.img 60G Use the virt-install command to create a virtual machine virt-install --name centos-server1 --ram 512 --vcpus=1 --check-cpu --os-type=linux -c /home/xuri/CentOS-6.4-x86_64-LiveCD.iso --disk path=/opt/centos.img,device=disk,bus=ide,format=qcow2 --bridge=br0 --vnc --vncport=5901 --vnclisten=0.0.0.0 Note: -c is the location of the system installation ISO file, and --disk path= is the location of the created disk image. 7. 
Connect to the Virtual Machine After the above step, the virtual machine has actually started and entered the system installation interface. Install UltraVNC Viewer 1.0.9.6 on a client to connect to the virtual machine for the rest of the installation. 8. Manage the Virtual Machine View the Virtual Machines $ sudo virsh --connect qemu:///system virsh # list --all Id Name State ---------------------------------- 1 centos-server1 running Start the Virtual Machine virsh # start centos-server1 Domain centos-server1 started Stop the Virtual Machine virsh # destroy centos-server1 Domain centos-server1 destroyed ### Install Xen on Ubuntu Desktop 12.04 LTS and Create a Virtual Machine with virt-manager Xen is a native (bare-metal) hypervisor providing services that allow multiple computer operating systems to execute on the same computer hardware concurrently. The University of Cambridge Computer Laboratory developed the first versions of Xen. The Xen community develops and maintains Xen as free and open-source software, subject to the requirements of the GNU General Public License (GPL), version 2. Xen is currently available for the IA-32, x86-64 and ARM instruction sets. Official Website www.xenproject.org Nested Virtual Testing Environment Host Operating System : Ubuntu Desktop 12.04 LTS 64-bit VMware Virtual Machine Guest Operating System : Ubuntu Desktop 12.04 LTS 64-bit Xen Guest Operating System : Microsoft Windows Server 2003 R2 32-bit 1. Install Ubuntu Desktop 12.04 in a VMware virtual machine; the installation process is straightforward, so I won't detail it here. I chose NAT mode for the virtual network. In the Virtual Machine Settings dialog box, enable the virtualization engine options (Virtualize Intel VT-x/EPT or AMD-V/RVI and Virtualize CPU performance counters). 2. 
Configuring APT Source (Optional) # Ubuntu 12.04 Source Powered by Sohu deb http://mirrors.sohu.com/ubuntu/ precise main restricted universe multiverse deb http://mirrors.sohu.com/ubuntu/ precise-security main restricted universe multiverse deb http://mirrors.sohu.com/ubuntu/ precise-updates main restricted universe multiverse deb http://mirrors.sohu.com/ubuntu/ precise-proposed main restricted universe multiverse deb http://mirrors.sohu.com/ubuntu/ precise-backports main restricted universe multiverse deb-src http://mirrors.sohu.com/ubuntu/ precise main restricted universe multiverse deb-src http://mirrors.sohu.com/ubuntu/ precise-security main restricted universe multiverse deb-src http://mirrors.sohu.com/ubuntu/ precise-updates main restricted universe multiverse deb-src http://mirrors.sohu.com/ubuntu/ precise-proposed main restricted universe multiverse deb-src http://mirrors.sohu.com/ubuntu/ precise-backports main restricted universe multiverse $ sudo apt-get update 3. Install Xen Package $ sudo apt-cache search xen $ sudo apt-get install xen-hypervisor-4.1-amd64 xen-utils-4.1 xen-tools xen-docs-4.1 4. Install libvirt and Virtual Manager Tools $ sudo apt-cache search virt-install $ sudo apt-get install virtinst python-libvirt virt-viewer virt-manager 5. Create Xen Virtual Machine Images $ sudo dd if=/dev/zero of=/etc/xen/vm01.img bs=1M count=3072 3072+0 records in 3072+0 records out 3221225472 bytes (3.2 GB) copied, 39.5561 s, 81.4 MB/s 6. 
View Dom0 Current Status $ sudo xm list Name ID Mem VCPUs State Time(s) Domain-0 0 945 2 r----- 73.1 $ sudo xm info host : ubuntu release : 3.5.0-23-generic version : #35~precise1-Ubuntu SMP Fri Jan 25 17:13:26 UTC 2013 machine : x86_64 nr_cpus : 2 nr_nodes : 1 cores_per_socket : 2 threads_per_core : 1 cpu_mhz : 2493 hw_caps : 1fabfbff:28100800:00000000:00007f40:f2ba2223:00000000:00000001:00000281 virt_caps : hvm total_memory : 1023 free_memory : 63 free_cpus : 0 xen_major : 4 xen_minor : 1 xen_extra : .2 xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 xen_scheduler : credit xen_pagesize : 4096 platform_params : virt_start=0xffff800000000000 xen_changeset : unavailable xen_commandline : placeholder cc_compiler : gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) cc_compile_by : stefan.bader cc_compile_domain : canonical.com cc_compile_date : Tue Jun 25 13:22:36 UTC 2013 xend_config_format : 4 If executing the xm list command gives a Can't find hypervisor information in sysfs error, run the following commands, reboot, choose the Xen kernel to boot into the system, and then check again whether Xen has been installed successfully $ sudo mv /etc/grub.d/10_linux /etc/grub.d/50_linux $ sudo update-grub2 $ sudo reboot 7. Modify the Configuration File $ sudo vi /etc/xen/xend-config.sxp Change #(xend-unix-server no) to (xend-unix-server yes), save the file, and then restart the xend service $ sudo xend restart 8. View Xen version information via libvirt to confirm that libvirt and Xen have been installed successfully $ virsh version Compiled against library: libvir 0.9.8 Using library: libvir 0.9.8 Using API: Xen 0.9.8 Running hypervisor: Xen 4.1.0 9. Use Xen to Install a Windows Server 2003 R2 Virtual Machine Copy the ISO installation files to the /usr or /opt directory, and in a terminal type ... $ sudo virt-manager ... 
then follow the wizard to install the guest operating system. If you get an Unable to complete install: 'POST operation failed: xend_post: error from xen daemon: ... error, try running the following command $ sudo ln -s /usr/share/qemu-linaro /usr/share/qemu This completes installing Xen on Ubuntu Desktop 12.04 LTS and creating a virtual machine with virt-manager. ### Install MySQL Cluster on Ubuntu 12.04 LTS MySQL Cluster is a technology providing shared-nothing clustering and auto-sharding for the MySQL database management system. It is designed to provide high availability and high throughput with low latency, while allowing for near linear scalability. MySQL Cluster is implemented through the NDB or NDBCLUSTER storage engine for MySQL ("NDB" stands for Network Database). Developer's Official Website www.oracle.com This post is a guide on how to create a MySQL cluster on the Ubuntu 12.04 LTS server distribution. Please do note that MySQL clusters have set limitations compared to a one-server MySQL configuration; before venturing on with creating a MySQL cluster, do check the limitations documentation first. Personally I would recommend creating a PostgreSQL cluster. 1. SQL Cluster VMs To create a fully functional cluster you need a minimum of three VMs: one virtual machine to act as the cluster management node and two database nodes. In case one of the DB nodes goes offline, the cluster can still work and will synchronize once the node is back up. The management node can also be inaccessible, but that will most likely cause a "split-brain" issue with inconsistent data between the two working DB nodes. In my example I will include a second management node in the cluster to show you how it works. In this tutorial I will walk you through installing a total of five VMs, of which two are management nodes, two are DB nodes, and one is a MySQL proxy. 
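Keeping the five machines straight is easier with name resolution. Here is a sketch of /etc/hosts entries for this layout: the management and data node addresses are the ones used in the config.ini later in this guide, while the hostnames and the proxy address 192.168.67.14 are my own illustrative choices.

```shell
# Example /etc/hosts fragment for the five-VM cluster (written to a local
# file here; append it to /etc/hosts on each VM).
cat > hosts.cluster <<'EOF'
192.168.67.10  mysql-mgm1
192.168.67.11  mysql-mgm2
192.168.67.12  mysql-db1
192.168.67.13  mysql-db2
192.168.67.14  mysql-proxy
EOF
wc -l < hosts.cluster   # prints 5
```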
It is possible to add additional nodes to the cluster later on, and the single MySQL proxy can be made Highly Available (HA), but that is not covered in this post. Virtual Machine Cluster Platform VMware ESXi 4.1 Virtual Machine Operating System Ubuntu 12.04.3 LTS (Precise Pangolin) 64-bit All virtual machines are in the same 192.168.67.0/24 network, but the load-balancer could as well have a connection to the outside network. This will vary with your network configuration and should be treated as a simple example network. Before you continue, you should have all VMs working and have at least local network connectivity. 2. Installing the management nodes First off we need to install the management nodes of the MySQL cluster. We will start off with MySQL-MGM1, and once it is completed, the procedure should be carried out on the second management host. If you prefer to have only one management VM, just leave the second one out of the configuration and you are clear to go. Before you continue, go to http://www.mysql.com/downloads/cluster/#downloads and verify which version is the current one. This guide has been made for MySQL cluster version 7.3.3. First we want to download the MySQL cluster packages to a good location on the management host. Navigate to the /usr/src directory and create a mysql-mgm directory. $ sudo mkdir /usr/src/mysql-mgm $ cd /usr/src/mysql-mgm After that is done, download the latest source from the MySQL download site and extract the package. $ sudo wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.3/mysql-cluster-gpl-7.3.3-linux-glibc2.5-x86_64.tar.gz $ sudo tar xvfz mysql-cluster-gpl-7.3.3-linux-glibc2.5-x86_64.tar.gz Navigate to the extracted folder and copy the binaries. $ cd mysql-cluster-gpl-7.3.3-linux-glibc2.5-x86_64 $ sudo cp bin/ndb_mgm /usr/bin $ sudo cp bin/ndb_mgmd /usr/bin Change the permissions for the directory and optionally remove the downloaded source. 
$ sudo chmod 755 /usr/bin/ndb_mg* $ cd /usr/src $ sudo rm -rf /usr/src/mysql-mgm Next, we must create the cluster configuration file in /var/lib/mysql-cluster/ named config.ini The folder doesn't exist so you need to create it: $ sudo mkdir /var/lib/mysql-cluster After which a config.ini file can be created with your favourite text editor. [NDBD DEFAULT] NoOfReplicas=2 DataMemory=80M IndexMemory=18M [MYSQLD DEFAULT] [NDB_MGMD DEFAULT] DataDir=/var/lib/mysql-cluster [TCP DEFAULT] # Section for the cluster management node [NDB_MGMD] NodeId=1 # IP address of the first management node (this system) HostName=192.168.67.10 [NDB_MGMD] NodeId=2 # IP address of the second management node HostName=192.168.67.11 # Section for the storage nodes [NDBD] # IP address of the first storage node HostName=192.168.67.12 DataDir=/var/lib/mysql-cluster [NDBD] # IP address of the second storage node HostName=192.168.67.13 DataDir=/var/lib/mysql-cluster # one [MYSQLD] per storage node [MYSQLD] [MYSQLD] Note that all hosts are defined at this stage, even though we are only installing the first one. Note that the management nodes need the NodeId specified, whereas the NDBD nodes do not. Once this is done you can start the management node with the following command: $ sudo ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/ Once that is done, you can automate the start procedure by adding an entry to init.d with the following commands (the redirection must run as root, so it is wrapped in sh -c): $ sudo sh -c 'echo "ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/" > /etc/init.d/ndb_mgmd' $ sudo chmod 755 /etc/init.d/ndb_mgmd Once everything is working, follow the same procedure on the second management node and use the same configuration. You mustn't change the node IDs in the configuration file! You can verify that both management nodes are operational by entering ndb_mgm (just enter ndb_mgm in terminal) and typing show once in the configuration utility.
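The one-line echo used above produces an init script with no shebang line. As a rough sketch (writing into a temp directory so it is safe to try; on the real host the target would be /etc/init.d/ndb_mgmd), a slightly fuller script can be generated like this:

```shell
#!/bin/sh
# Sketch: generate a fuller init script for ndb_mgmd than the one-line echo.
# Writes into a temp dir so the sketch is safe to run anywhere.
INIT_DIR=$(mktemp -d)   # stands in for /etc/init.d

cat > "$INIT_DIR/ndb_mgmd" <<'EOF'
#!/bin/sh
# Start the MySQL Cluster management daemon with the cluster configuration.
exec ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/
EOF

chmod 755 "$INIT_DIR/ndb_mgmd"
head -n 1 "$INIT_DIR/ndb_mgmd"   # shebang is now present
```

Copy the generated file to /etc/init.d/ndb_mgmd (as root) to get the same effect as the echo one-liner, plus a proper shebang.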
At this point both ndbd nodes and mysql nodes are disconnected. 3. The database nodes Creating the DB nodes is fairly similar to creating the management nodes. First off let's start by creating a mysql group and adding a user to it. $ sudo groupadd mysql $ sudo useradd -g mysql mysql Navigate to /usr/local, download the same compressed file as on the management nodes, and extract it. $ cd /usr/local/ $ sudo wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.3/mysql-cluster-gpl-7.3.3-linux-glibc2.5-x86_64.tar.gz $ sudo tar xvfz mysql-cluster-gpl-7.3.3-linux-glibc2.5-x86_64.tar.gz Create a symbolic link named mysql pointing to the extracted folder (this will later be used by the DB cluster, so do not delete it!). Once the symlink is complete you can install the database. $ sudo ln -s mysql-cluster-gpl-7.3.3-linux-glibc2.5-x86_64 mysql $ cd mysql $ sudo apt-get install libaio1 libaio-dev $ sudo scripts/mysql_install_db --user=mysql --datadir=/usr/local/mysql/data Change the owner to the newly created mysql group $ sudo chown -R root:mysql . $ sudo chown -R mysql data Like on the management nodes, we want the database engine to start automatically, so we add the start command to init.d. $ sudo cp support-files/mysql.server /etc/init.d/ $ sudo chmod 755 /etc/init.d/mysql.server Lastly copy the bin files to the /usr/bin location to keep everything neat and create a symlink to keep references right. $ cd /usr/local/mysql/bin $ sudo mv * /usr/bin $ cd ../ $ sudo rm -fr /usr/local/mysql/bin $ sudo ln -s /usr/bin /usr/local/mysql/bin The MySQL configuration file is missing at first, so we need to create it ourselves. The file is located in /etc/ and is named my.cnf.
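Before editing my.cnf by hand, it helps to see its shape: the ndb-connectstring must list every management node, comma-separated. A throwaway sketch that assembles such a file from a node list (the IPs are this guide's example network; it writes to a temp file, not /etc/my.cnf):

```shell
#!/bin/sh
# Sketch: assemble my.cnf from a list of management-node IPs, so the
# comma-separated ndb-connectstring always matches the node list.
MGM_NODES="192.168.67.10 192.168.67.11"   # example network from this guide
CONNECT=$(echo $MGM_NODES | tr ' ' ',')

MYCNF=$(mktemp)   # stands in for /etc/my.cnf
cat > "$MYCNF" <<EOF
[mysqld]
ndbcluster
# IP addresses of the cluster management nodes
ndb-connectstring=$CONNECT

[mysql_cluster]
ndb-connectstring=$CONNECT
EOF

grep ndb-connectstring "$MYCNF"
```

Dropping a management node from MGM_NODES regenerates both connect strings consistently, which is easy to get wrong when editing the two sections by hand.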
Open your favorite text editor and add the following lines to it: [mysqld] ndbcluster # IP address of the cluster management node ndb-connectstring=192.168.67.10,192.168.67.11 [mysql_cluster] # IP address of the cluster management node ndb-connectstring=192.168.67.10,192.168.67.11 Note that both management nodes are entered, separated by a comma. If you only have one management node, just remove the second one in the list. Once the my.cnf file has been saved we need to create the data folders for MySQL. $ sudo mkdir /var/lib/mysql-cluster Once that is done, we just need to initialize the cluster and start the service. The initialization needs to be done only when you start the node for the first time, or when the /var/lib/mysql-cluster/config.ini file has been changed on the management node(s). $ cd /var/lib/mysql-cluster $ sudo ndbd --initial $ sudo /etc/init.d/mysql.server start After this, secure the MySQL installation by running the appropriate script: $ sudo /usr/local/mysql/bin/mysql_secure_installation And lastly we need ndbd to start automatically (again wrapping the redirection in sh -c so it runs as root): $ sudo sh -c 'echo "ndbd" > /etc/init.d/ndbd' $ sudo chmod 755 /etc/init.d/ndbd Once everything is done, you've completed the first node. Follow the steps on the second database node to get it up and running. 4. Testing and Verification Once everything is completed it's time to verify our configuration to see that everything is working as intended. To do this, we need to verify that all nodes are visible and connected on one of the management nodes. Open the ndb management shell by typing ndb_mgm in terminal and type show. This time you should see that the database nodes have been populated in the output.
root@MYSQL-MGM1:~# ndb_mgm ndb_mgm> show Cluster Configuration --------------------- [ndbd(NDB)] 2 node(s) id=3 @192.168.67.12 (mysql-5.6.14 ndb-7.3.3, Nodegroup: 0, *) id=4 @192.168.67.13 (mysql-5.6.14 ndb-7.3.3, Nodegroup: 0) [ndb_mgmd(MGM)] 2 node(s) id=1 @192.168.67.10 (mysql-5.6.14 ndb-7.3.3) id=2 @192.168.67.11 (mysql-5.6.14 ndb-7.3.3) [mysqld(API)] 2 node(s) id=5 @192.168.67.12 (mysql-5.6.14 ndb-7.3.3) id=6 @192.168.67.13 (mysql-5.6.14 ndb-7.3.3) If you see a similar output, you are clear to go and try out some basic SQL commands. Move to the first DB node in the cluster. Log in to the SQL database and create a new database and a table to verify replication. Note that when creating a database, the engine for the tables has to be NDBCLUSTER. If you use InnoDB for example, the data will NOT be replicated between the cluster nodes. There are some drawbacks when using ndbcluster as your engine, which can be found on the MySQL website. http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-limitations-unsupported.html http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-limitations-syntax.html $ mysql -u root -p mysql> CREATE DATABASE mysqlclustertest; mysql> USE mysqlclustertest; mysql> CREATE TABLE testtable (i INT) ENGINE=NDBCLUSTER; mysql> INSERT INTO testtable () VALUES (1); mysql> SELECT * FROM testtable; +------+ | i | +------+ | 1 | +------+ 1 row in set (0.01 sec) Connect to the second database node and let's see if we get the same output. $ mysql -u root -p mysql> USE mysqlclustertest; mysql> SELECT * FROM testtable; +------+ | i | +------+ | 1 | +------+ 1 row in set (0.01 sec) You should see the same output as on the first node when doing the select statement. Now if you insert a new entry to the table, it will be replicated back to the first node. 5. Loadbalancer The last part in our guide is to install the load-balancer to get some additional use of the MySQL-cluster.
The load balancer can be something other than mysql-proxy, but this one is easy and simple to install and does its job. You could use something like pfSense if you'd like to. root@mysql-proxy:~# apt-get install mysql-proxy root@mysql-proxy:~# mkdir /etc/mysql-proxy root@mysql-proxy:~# cd /etc/mysql-proxy root@mysql-proxy:/etc/mysql-proxy# nano mysql-proxy.conf And add the following to your mysql-proxy.conf [mysql-proxy] daemon = true proxy-address = 192.168.67.14:3306 proxy-skip-profiling = true keepalive = true event-threads = 50 pid-file = /var/run/mysql-proxy.pid log-file = /var/log/mysql-proxy.log log-level = debug proxy-backend-addresses = 192.168.67.12:3306,192.168.67.13:3306 proxy-lua-script=/usr/lib/mysql-proxy/lua/proxy/balance.lua For automatic start with additional options create the following file /etc/default/mysql-proxy ENABLED="true" OPTIONS="--defaults-file=/etc/mysql-proxy.conf --plugins=proxy" After that you can start the mysql-proxy by invoking this command: /etc/init.d/mysql-proxy start/stop/status Once this is completed, you should be able to connect to the MySQL servers using the proxy address. Do remember that for this to work, you will need to create a new user that has a specific subnet allowed to connect to it. You will also need to add the bind-address for the MySQL servers to the my.cnf file. SQL users do not replicate, so the same user has to be added to all database nodes individually. Once logged in to the DB node SQL shell, execute the following command: CREATE USER 'newuser'@'192.168.67.%' IDENTIFIED BY 'password'; FLUSH PRIVILEGES; SELECT * FROM mysql.user; Change newuser, the IP, and the password as per your configuration needs. The % in the IP address acts as a wildcard, so the whole subnet is allowed to connect remotely to this DB node. Remember to flush privileges after the user has been added. Remember to add the same user with the same configuration to all other database nodes in this cluster.
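Since SQL users don't replicate in NDB, the same CREATE USER has to run on every DB node; a small sketch that emits the per-node statements (hosts and credentials are the example values from this section; in practice each block would be piped into `mysql -h <node> -u root -p`):

```shell
#!/bin/sh
# Sketch: emit the per-node SQL needed to add the proxy user on every DB node.
# Example hosts/credentials from this guide; this only prints the statements.
DB_NODES="192.168.67.12 192.168.67.13"

for node in $DB_NODES; do
  echo "-- run on $node:"
  echo "CREATE USER 'newuser'@'192.168.67.%' IDENTIFIED BY 'password';"
  echo "FLUSH PRIVILEGES;"
done
```

Driving the user creation from one node list makes it harder to forget a DB node, which would otherwise show up as intermittent login failures through the proxy.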
This article references MySQL NDB Cluster setup on Ubuntu 12.04 LTS, with some changes. ### Install PEAR in OS X PEAR is a framework and distribution system for reusable PHP components. In the terminal, run the following commands: $ sudo php /usr/lib/php/install-pear-nozlib.phar $ sudo pear config-set php_ini /private/etc/php.ini $ sudo pecl config-set php_ini /private/etc/php.ini $ sudo pear upgrade-all ### Install OS X Mavericks in VMware Virtual Machine OS X Mavericks (version 10.9) is the tenth major release of OS X, Apple Inc.'s desktop and server operating system for Macintosh computers. OS X Mavericks was announced on June 10, 2013, at WWDC 2013, and was released on October 22, 2013, as a free update through the Mac App Store. The update places emphasis on battery life, Finder enhancements, other enhancements for power users, and continued iCloud integration, as well as bringing more of Apple's iOS apps to the OS X platform. This release marks the beginning of a change in the naming scheme of OS X, departing from the use of big cats and moving to names based on places in California. Following the new naming scheme, the current version of the operating system is named Mavericks, after the surfing location in California. The installation process is similar to Mountain Lion (version 10.8); please refer to Install OS X Mountain Lion in VMware Virtual Machine. Start the VMware virtual machine and install Mac OS X Mavericks Edit the virtual machine configuration: in the "CD/DVD" option, choose "Use ISO image file" and select our Mac OS X installation disc image. (In the browse file dialog, set the file type to "All Files (*.*)" so you can see the dmg image file.) Once confirmation is complete, start the VMware virtual machine. The virtual machine boots from the CD by default; after the Apple logo appears, you arrive at the Mac OS X installation interface.
In the "Mac OS X Utilities" select "Disk Utility", select the first hard drive on the left and on the right select "Erase" (for "Format" select "Mac OS X Extended (Journaled)"), then click the "Erase" button to format the current drive. After formatting the hard disk partition, go back to "Mac OS X Utilities" and click "Install Mac OS X" to enter the installation process. After Mac OS X has been installed in the VMware virtual machine, the first thing to do is of course install VMware Tools. VMware Tools includes the drivers for the virtual machine's "hardware" and functional support for communication between the virtual machine and the host. Download Latest darwin.iso file com.vmware.fusion.tools.darwin.zip.tar Size : 2580480 Byte Release : Thursday, November 14, 2013 MD5 : 503C2338937FF6AF13A6E6C7A9CB0B0A SHA1 : C248A7E9F0E6C05858B0C9CDE0D6229B790A6E98 CRC32 : 52C6A2A4 OS X Install Mavericks 13A603.torrent.zip Size : 15756 Byte Release : Thursday, November 14, 2013 MD5 : 7E034B90217F12904D160EF313CBC0AC SHA1 : 44E4844305A735E409AD69080FEAF1CDEFC2979D CRC32 : 955CE154 ### Use Solarized Color Scheme in OS X Terminal Solarized is the most complete Terminal / Editor / IDE color project, covering almost all major operating systems (Mac OS X, Linux, Windows), editors and IDEs (Vim, Emacs, Xcode, TextMate, NetBeans, Visual Studio, etc.), and terminals (iTerm2, Terminal.app, Putty, etc.). A similar project is Tomorrow Theme. Project Home Page https://github.com/altercation/vim-colors-solarized Solarized Palette To use the command line comfortably in the Mac OS X terminal, (at least) three tools need color: the terminal, vim, and ls. First Download Solarized $ git clone git://github.com/altercation/solarized.git Terminal/iTerm2 Both the Terminal that comes with Mac OS X and the free iTerm2 are very good tools; iTerm2 can be split into multiple panes, which is more convenient.
If you are using Terminal, double-click Solarized Dark ansi.terminal and Solarized Light ansi.terminal in solarized/osx-terminal.app-colors-solarized; the two color schemes will be automatically imported into Terminal.app as Dark and Light. If you are using iTerm2, double-click Solarized Dark.itermcolors and Solarized Light.itermcolors in solarized/iterm2-colors-solarized; the two files will be imported into iTerm2's profiles. Vim Make the Terminal and Vim colors consistent $ cd solarized $ cd vim-colors-solarized/colors $ mkdir -p ~/.vim/colors $ cp solarized.vim ~/.vim/colors/ $ vi ~/.vimrc syntax enable set background=dark colorscheme solarized ls OS X is based on FreeBSD, so tools such as ls and top are the BSD versions; ls is not GNU ls. Even with Terminal/iTerm2 configured for color, typing ls on the Mac does not display highlighting. This can be solved by installing coreutils (brew install coreutils), but if you are not picky about the ls colors, a simple way is to export CLICOLOR=1 in your .bash_profile: export LC_ALL=en_US.UTF-8 export LANG=en_US.UTF-8 # Tell ls to be colourful export CLICOLOR=1 export LSCOLORS=gxBxhxDxfxhxhxhxhxcxcx # Tell grep to highlight matches export GREP_OPTIONS='--color=auto' ### OS X to Read and Write NTFS Partition Using a Mac ® is a wonderful experience – until you try to use an NTFS-formatted drive, which includes most Windows ® hard drives! Because in order to create, delete or modify any files on Windows NTFS drives you need Paragon NTFS for Mac ® - after minutes you won't remember there ever was a problem. Remove the barriers between NTFS and HFS with ease and confidence! Supports OS X 10.9 Mavericks. Paragon Official Website www.paragon-software.com/home/ntfs-mac Download Paragon NTFS v10.1.78.dmg Disconnect from the network and register offline.
### Installing GoAgent on OS X Apply for an appid and upload it It is recommended to complete this operation on Windows; this part of the tutorial is the same as the PC version, so this article doesn't elaborate. Please refer to Installing GoAgent on Ubuntu. Download and Deployment Download GoAgent 3.2.3.zip and GoAgentMac.dmg from Google Drive. Filename : GoAgent 3.2.3.zip Version : 3.2.3 MD5 : 8e7a9e7472843a731eb2675598cb4627 SHA : 4d6a9d8aae96ea666778c011f90d90e4711c967c Filename : GoAgentMac.dmg Version : 1.0 Last Update : Oct 15, 2011 MD5 : 99c94f1a6b83a2f22d179b55b8194174 Size : 423,493 Bytes SHA1 : 6aaf5e0e1780d5981bb32ab17344a0c5f3d685ca Unpack the downloaded GoAgentMac archive into the system Applications folder, and put the main GoAgent program into the Applications folder as well. Right click GoAgentMac → Show Package Contents. In the Contents folder, edit the Info.plist file, find the GoAgentPath key, and change it to /Applications/goagent/local/proxy.py Note: here I have put the main GoAgent program in the Applications folder and modified it like this; if you put the main GoAgent program somewhere else, please change the path to your location. Go back to the goagent folder, modify proxy.ini, and fill in your appid. At this point, goagent has been installed. Solve the "the site's security certificate is not trusted" error If some https connections show the site's security certificate is not trusted error, please import the CA.crt certificate from the goagent/local folder. Double-click the CA.crt certificate in the GoAgent/local directory to import it into the system In Launchpad → Utilities → Keychain Access → System, find and double-click GoAgent CA Select Trust → When using this certificate → Always trust Restart the browser Solve FAQ Install Python Download the latest Python version for OS X from the official website www.python.org.
Change the default Python version by running the following commands in the terminal: $ sudo ln -s /usr/local/bin/python2 /usr/bin/python $ sudo ln -s /usr/local/bin/pydoc2 /usr/bin/pydoc $ sudo ln -s /usr/local/bin/pythonw2 /usr/bin/pythonw Install pyOpenSSL Solve WARNING - - python-openssl not found, please install it! Download the pyOpenSSL-0.13.tar.gz file and run the following commands in the terminal: $ tar zxvf pyOpenSSL-0.13.tar.gz $ cd pyOpenSSL-0.13 $ sudo python setup.py install ### Install Web Bench Web Bench is a very simple tool for benchmarking WWW or proxy servers. It uses fork() for simulating multiple clients and can use HTTP/0.9-HTTP/1.1 requests. This benchmark is not very realistic, but it can test if your HTTPD can really handle that many clients at once (try to run some CGIs) without taking your machine down. It displays pages/min and bytes/sec, and can be used in a more aggressive mode with the -f switch. Web Bench Official Website http://home.tiscali.cz/~cz210552/webbench.html Install $ wget http://home.tiscali.cz/~cz210552/distfiles/webbench-1.5.tar.gz $ sudo mkdir /usr/local/man $ tar zxvf webbench-1.5.tar.gz $ cd webbench-1.5 $ sudo make install Test $ webbench -c 500 -t 30 http://127.0.0.1/index.html Watch your CPU and processes with the top command. ### Install OpenVZ on CentOS OpenVZ is container-based virtualization for Linux. OpenVZ creates multiple secure, isolated Linux containers (otherwise known as VEs or VPSs) on a single physical server enabling better server utilization and ensuring that applications do not conflict. Each container performs and executes exactly like a stand-alone server; a container can be rebooted independently and have root access, users, IP addresses, memory, processes, files, applications, system libraries and configuration files.
OpenVZ Official Website openvz.org For installing CentOS, please refer to Install CentOS 6.4 in Text Mode. There are two ways to install: the first is more complex, using rpm, where you need to manually download and install the packages; the second, relatively simple, is to install through YUM. Here we install through YUM. Host Operating System: CentOS Release 6.4 Final Kernel Version: 2.6.32-358.el6.x86_64 Install OpenVZ Configure the YUM repository Download the openvz.repo file to the /etc/yum.repos.d/ directory using the following commands $ cd /etc/yum.repos.d $ sudo wget http://download.openvz.org/openvz.repo $ sudo rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ Install the OpenVZ kernel and vzctl, vzquota Note: For RHEL6 or CentOS6 or later, use vzkernel; for other versions, ovzkernel $ sudo yum install vzkernel $ sudo yum install vzctl vzquota Configure CentOS kernel parameters and SELinux Configure the OS kernel parameters: edit the /etc/sysctl.conf file and set the following two parameters $ sudo vi /etc/sysctl.conf net.ipv4.ip_forward = 1 kernel.sysrq = 1 SELinux is a mandatory access control system provided by the 2.6 Linux kernel; installing OpenVZ requires disabling it. Edit the /etc/sysconfig/selinux file with the following setting: $ sudo vi /etc/sysconfig/selinux SELINUX=disabled Reboot the system Start OpenVZ Before starting OpenVZ you can first check the service status $ service vz status $ service vz start VE Template Installation Download the template to be installed into the /vz/template/cache directory (if the host is Debian, you need to download it to the /var/lib/vz/template/cache/ directory). Which template you need depends on the VE you want to install. You can go to http://download.openvz.org/template/precreated/ to download pre-created templates; there are centos, debian, ubuntu, fedora, suse, and several other templates to choose from.
$ cd /vz/template/cache $ sudo wget http://download.openvz.org/template/precreated/centos-6-x86.tar.gz Create a Virtual Environment (VE) with ID 1 $ sudo vzctl create 1 --ostemplate centos-6-x86 $ sudo vzctl set 1 --onboot yes --save $ sudo vzctl set 1 --ipadd 172.16.82.129 --save $ sudo vzctl set 1 --nameserver 8.8.8.8 --save $ sudo vzctl set 1 --hostname vps01.vpsee.com --save $ sudo vzctl set 1 --diskspace 10G:10G --save $ sudo vzctl start 1 $ sudo vzctl exec 1 passwd Start, restart, stop, and destroy the ID 1 VE/VPS $ sudo vzctl start 1 $ sudo vzctl restart 1 $ sudo vzctl stop 1 $ sudo vzctl destroy 1 View the running VE/VPS $ sudo vzlist Calculate the resources used by the ID 1 VE/VPS $ sudo vzcalc -v 1 Enter and Exit the Guest After the VE starts successfully you can enter the guest (the equivalent of Xen's xm console); the exit command leaves the guest. $ sudo vzctl enter 1 $ exit ### Install the Cacti Server Monitor on Ubuntu Server Last Update: Feb 22, 2016 Introduction Cacti is a complete network graphing solution designed to harness the power of RRDTool's data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy to use interface that makes sense for LAN-sized installations up to complex networks with hundreds of devices. Official Website www.cacti.net Cacti Architecture Diagram Cacti Working Principle Installation Cacti and all of its dependencies can be installed through apt-get on Ubuntu 12.04. This guide will also install cacti-spine, which is a faster way to poll servers for information than the default php script. $ sudo apt-get update $ sudo apt-get install cacti-spine If you have not set up MySQL, you will be prompted for a root user password. Make your selection and confirm the password to continue. Next, press "Return" or "Enter" to acknowledge a configuration change in php.
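If you are scripting this installation, the MySQL root password prompt can be answered ahead of time with debconf. A dry-run sketch (it only prints the selection lines; the mysql-server question names are an assumption for the Ubuntu 12.04 packages and should be verified with debconf-get-selections):

```shell
#!/bin/sh
# Sketch: pre-seed the MySQL root password so apt-get can run unattended.
# ASSUMPTION: the "mysql-server/root_password*" question names match the
# Ubuntu 12.04 mysql-server package; verify with debconf-get-selections.
ROOT_PW=changeme   # example password only

SELECTIONS="mysql-server mysql-server/root_password password $ROOT_PW
mysql-server mysql-server/root_password_again password $ROOT_PW"

# For real use: echo "$SELECTIONS" | sudo debconf-set-selections
echo "$SELECTIONS"
```

With the selections applied before `apt-get install cacti-spine`, the password dialog is skipped and the install can run from a provisioning script.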
The initial configuration of Cacti also happens during installation. There are a few questions you need to answer. Select "Apache2" from the list of webservers. After Cacti configures apache, the installation sets up a MySQL account for the application. Select "Yes" to allow a generic database configuration. Provide the password for the administration of the Cacti database that you set up during the MySQL configuration. Next, it asks for a password for Cacti to use with the database. This is an internal password that you should not ever have to use, so it is okay if you just press "Enter" to create a random password. Now that Cacti is installed, we can start configuring it. Configure Cacti One-Time Setup Point your web browser towards http://YOURSERVERIP/cacti/install/ to start the initial setup and click next. The first page you will see is an introduction to the Cacti software. Click "Next >>" when you are finished reading. Click "Next >>" again on the following page since this is a new installation. The next page shows the application paths of the "helper" applications that Cacti uses to operate. All of the applications should be green and marked with "[FOUND]". Click "Finish" to continue. Next, you'll be asked to enter the Cacti user name and password. These are not the passwords you entered during installation. Instead, enter the following default values. User Name: admin Password: admin You'll be prompted to enter a new password for administrating Cacti. Choose a password and click "Save". You are now on your Cacti page. General Configuration A few options must be changed to ensure that Cacti produces data correctly. On the left-hand navigation panel, click on "Settings" under the Configuration heading. In the General tab, we want to change some parameters. Change these settings to match what is shown here. Click "Save" when finished. SNMP Version: Version 2 SNMP Community: secret Next, click the "Poller" tab on the navigation settings.
Change these options and match what is shown here. Click "Save" when finished. Poller Type: spine Poller Interval: Every Minute Whenever the Poller Interval is changed, the cache must be emptied. To do this, click "System Utilities" under the Utilities heading on the left-hand navigation panel. Click on "Rebuild Poller Cache" to empty the cache. Creating Devices and Graphs Device Settings To begin graphing, we need to set up device profiles and tell Cacti what to graph. Click "Devices" under the Management heading on the left-hand navigation panel. First, delete the "Localhost" device because we will be recreating some of the same functionality in the device we will be setting up momentarily. Click the checkbox on the right-hand side, make sure Choose an action has "Delete" selected, and click "Go". Confirm the delete on the following page. In the upper-right corner of the page, click the "Add" button to add a new device. Now, you need to fill out some information that describes your device. Fill out the following fields. Click "Create" when you are finished. Description: Ubuntu Cacti Server Hostname: YourIPAddress Host Template: Local Linux Machine SNMP Version: Version 2 SNMP Community: secret Graph Settings Next, scroll down and create some associated graph templates and associated data queries. Under Associated Graph Templates, select "Unix – Ping Latency" from the drop-down and click "Add". Your selection should match what's shown below. Complete the same steps in the Associated Data Queries section to add "SNMP – Get Mounted Partitions", "SNMP – Get Processor Information", and "SNMP – Interface Statistics". Add each of those and then click "Save". Next, click "Create Graphs for this Host" at the top-right of the page. Select each of the right-hand boxes in the light-blue subheadings to select all of the graphs. Click "Create" at the bottom of the page. On the next page, you can change the color of some of the graphing choices. 
Make your selections and then click "Create". At the top of the page, click the "Graphs" tab. Click on the last tab in the top-right corner. It should look like a graph. Your VPS will take a while to generate values for these graphs. It might be five or ten minutes before you even see an empty graph. Sometimes, it will appear that there is a broken image until there is enough data to graph. If you come back in a few hours, you will have some colorful graphs showing some important system statistics. Click on each graph to show daily, weekly, monthly, and yearly graphs for that same resource. Now you have access to Cacti's graphing capabilities. Cacti becomes more useful with every new cloud server you tell it to monitor, so explore the possibility of adding more servers as Cacti devices. ### Restore Ubuntu Network Connection Icon $ sudo service network-manager stop $ sudo rm /var/lib/NetworkManager/NetworkManager.state $ sudo service network-manager start $ sudo vim /etc/NetworkManager/nm-system-settings.conf Modify false to true. $ sudo service network-manager restart ### Using Recovery Mode Reinstall Gnome on Ubuntu Getting Started To enter Recovery Mode under Ubuntu, reboot your system and hold down the Shift key to bring up the GRUB boot menu. When it shows up, select the line saying Ubuntu, with Linux 3.8.0-32-generic (recovery mode) and press Enter. Wait a few seconds and another menu will appear. Select network Enable networking using your keyboard ↓ arrow key and press Enter to enable your internet connection. In the next screen, select Yes and press Enter. Now select root Drop to root shell prompt and press Enter. You are now in CLI mode; first update your system using the following command: $ sudo apt-get update && sudo apt-get upgrade If you cannot connect to the network Dynamic IP Address Assignment (DHCP Client) $ sudo vim /etc/network/interfaces The example below assumes you are configuring your first Ethernet interface identified as eth0.
auto eth0 iface eth0 inet dhcp Or get an address with this command $ sudo dhclient eth0 Static IP Address Assignment $ sudo nano /etc/network/interfaces The example below assumes you are configuring your first Ethernet interface identified as eth0. Change the address, netmask, and gateway values to meet the requirements of your network. auto eth0 iface eth0 inet static address 10.0.0.100 netmask 255.255.255.0 gateway 10.0.0.1 Name Resolution Name resolution as it relates to IP networking is the process of mapping IP addresses to hostnames, making it easier to identify resources on a network. The following section will explain how to properly configure your system for name resolution using DNS and static hostname records. DNS Client Configuration $ sudo nano /etc/resolv.conf Below is an example of a typical configuration of /etc/resolv.conf for a server on the domain example.com and using two public DNS servers. search example.com nameserver 8.8.8.8 nameserver 8.8.4.4 Restart Network $ sudo /etc/init.d/networking restart $ sudo apt-get install gnome Then select Resume normal boot to start your system in normal mode. I hope you find this tutorial useful. ### Setup Hosting Git Repositories on Ubuntu Server Git is a distributed revision control and source code management (SCM) system with an emphasis on speed. Git was initially designed and developed by Linus Torvalds for Linux kernel development in 2005. Based on a recent survey of Eclipse IDE users, it is the most widely used source code management tool. Every Git working directory is a full-fledged repository with complete history and full version tracking capabilities, not dependent on network access or a central server. Git is free software distributed under the terms of the GNU General Public License version 2.
Official Website git-scm.com Install SSH Server $ sudo apt-get install openssh-server openssh-client Install Git $ sudo apt-get install git-core Configure the Git server: create a Git server management user $ sudo useradd -m git $ sudo passwd git Create the Git repository storage directory: $ sudo mkdir /home/git/repositories Set the Git repository permissions: $ sudo chown git:git /home/git/repositories $ sudo chmod 755 /home/git/repositories Initialize global settings: $ git config --global user.name "username" $ git config --global user.email "username@server" Install Git Web UI Install Apache: $ sudo apt-get install apache2 Install Git Web UI $ sudo apt-get install gitweb Configure Gitweb By default no CSS is loaded; link gitweb's static files into the DocumentRoot directory: $ cd /var/www/ $ sudo ln -s /usr/share/gitweb/* . Modify the configuration Modify $projectroot to the Git repository storage directory (for example: $projectroot = "/home/git/repositories"), save, and refresh your browser. If the projects do not show up, you will need to change the permissions of the repositories under $projectroot to 755 so that the apache user has read access. Change only the repositories you want others to access through the web. $ sudo chmod 755 -R gitosis-admin.git Visit: http://localhost/cgi-bin/gitweb.cgi Modify /etc/gitweb.conf like the following content: # path to git projects (.git) # $projectroot = "/var/cache/git"; $projectroot = "/home/git/repositories"; # directory to use for temp files $git_temp = "/tmp"; # target of the home link on top of all pages $home_link = $my_uri || "/"; # html text to include at home page $home_text = "indextext.html"; # file with project list; by default, simply scan the projectroot dir.
$projects_list = $projectroot; # stylesheet to use @stylesheets = ("/gitweb/static/gitweb.css"); # javascript code for gitweb $javascript = "/gitweb/static/gitweb.js"; # logo to use $logo = "/gitweb/static/git-logo.png"; # the 'favicon' $favicon = "/gitweb/static/git-favicon.png"; # git-diff-tree(1) options to use for generated patches #@diff_opts = ("-M"); @diff_opts = (); Restart Apache $ sudo service apache2 restart Visit: http://localhost/cgi-bin/gitweb.cgi Install Gitosis Install Python's setup tools: $ sudo apt-get install python-setuptools Install Gitosis: $ cd /tmp $ git clone https://github.com/res0nat0r/gitosis.git $ cd gitosis $ sudo python setup.py install Add a user for Git $ sudo adduser \ --system \ --shell /bin/sh \ --gecos 'git version control' \ --group \ --disabled-password \ --home /home/git \ git Generate a key on your local machine; if ~/.ssh/id_rsa.pub already exists, skip this step: $ ssh-keygen -t rsa Upload the key to a temporary directory on the server: $ scp ~/.ssh/id_rsa.pub username@server:/tmp Initialize gitosis (on the Git server): $ sudo -H -u git gitosis-init < /tmp/id_rsa.pub Modify the post-update hook permissions: $ sudo chmod 755 /home/git/repositories/gitosis-admin.git/hooks/post-update Clone the gitosis management repository on your local machine: $ git clone username@server:gitosis-admin.git $ cd gitosis-admin Manage gitosis user rights by modifying gitosis-admin: add public keys to keydir and add users. After modifying, commit and push to the server to complete the repository permission operations. Examples Goal: add user belief and repository teamwork to gitosis, co-managed with administrator xuri. User belief creates id_rsa.pub and sends it to xuri belief:~$ ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/home/belief/.ssh/id_rsa): Created directory '/home/belief/.ssh'.
Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/belief/.ssh/id_rsa. Your public key has been saved in /home/belief/.ssh/id_rsa.pub. belief:~$ cp /home/belief/.ssh/id_rsa.pub /tmp Gitosis administrator xuri assigns privileges to belief xuri:~$ cd ~/projects $ git clone git@192.168.67.201:gitosis-admin $ cd gitosis-admin $ cat gitosis.conf [gitosis] [group gitosis-admin] writable = gitosis-admin members = xuri@ubuntu $ ls keydir/ xuri@ubuntu.pub $ cp /tmp/id_rsa.pub keydir/belief.pub $ nano gitosis.conf [gitosis] [group gitosis-admin] writable = gitosis-admin members = xuri@ubuntu [group teamwork] writable = teamwork members = xuri@ubuntu belief $ git add . $ git commit -am "add member belief and project teamwork" $ git push User xuri adds project teamwork xuri:~$ cd ~/projects $ mkdir teamwork $ cd teamwork $ git init $ touch readme.txt $ git add readme.txt $ git commit -am 'first commit' $ git remote add origin git@192.168.67.201:teamwork.git $ git push origin master User belief clones teamwork and modifies readme.txt belief:~$ git clone git@192.168.67.201:teamwork.git $ cd teamwork $ ls $ date > readme.txt $ git commit -am 'add time to readme.txt' && git push User xuri pulls teamwork xuri:~/projects/teamwork$ nano .git/config [core] repositoryformatversion = 0 filemode = true bare = false logallrefupdates = true [remote "origin"] url = git@192.168.67.201:teamwork.git fetch = +refs/heads/*:refs/remotes/origin/* [branch "master"] remote = origin merge = refs/heads/master $ git pull ### Installation MAMP (Apache+MySQL+PHP) on a Mac with OS X 10.8+ This tutorial will show you how to set up an Apache server to run PHP on a Mac with OS X 10.8 (Mountain Lion) or a later operating system.
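Before starting, you can confirm which OS X version you are on from Terminal; a minimal sketch (`sw_vers` ships with OS X — the fallback branch is only there so the snippet also runs harmlessly on other systems):

```shell
# Print the OS X version if sw_vers is available (it ships with OS X);
# otherwise print a short notice so the script still works elsewhere.
if command -v sw_vers >/dev/null 2>&1; then
    echo "OS X version: $(sw_vers -productVersion)"
else
    echo "sw_vers not available (not OS X)"
fi
```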
Part 1 set up an Apache server to run PHP on a Mac with OS X 10.8 (Mountain Lion) or later What you need to know: since the release of Mountain Lion, Apple has changed the hosting environment in OS X 10.8+. Apple dropped the GUI option to turn on web sharing, but it's still possible to do it manually. If you upgraded from a previous version of OS X, your hosting environment will be broken. Apache is still installed on your Mac by default. Step 1 How to start your Apache web server: Apache comes preinstalled on Apple OS X 10.8 Mountain Lion. Open your Terminal (found in your Utilities folder, which is inside your Applications folder). To start the Apache server, type the following command in your Terminal window $ sudo apachectl start Press Enter and Apache will start. If you open your internet browser and navigate to http://localhost, you will see the default "It works!" page. Congratulations, you now have a running Apache server. To stop the Apache web server, type the following command in your Terminal: $ sudo apachectl stop To restart it $ sudo apachectl restart To find the Apache version $ httpd -v Step 2 How to configure Apache to enable PHP: You will need to edit your Apache httpd configuration file to enable PHP. Inside your Terminal window, type the following command to open the httpd.conf file in the nano editor $ sudo nano /etc/apache2/httpd.conf Uncomment the PHP line Now that the file is open, find the line where LoadModule php5_module libexec/apache2/libphp5.so is commented out. Remove the # at the front of that line. This will enable PHP. Step 3 How to set up a user configuration file: To host files in your Sites/ directory you will need to create a user file for your OS X user (use the user's short name; if unsure what it is, look in the /Users directory, which lists the short names of all users). Create the file by typing the following in your Terminal. Using nano will open the file if it already exists or create it if it doesn't.
Replace [username] with your username $ sudo nano /etc/apache2/users/username.conf Add the following lines to the file. Important: the directory path in the first line is case sensitive. <Directory "/Users/username/Sites"> Options FollowSymLinks Indexes MultiViews AllowOverride All Order allow,deny Allow from all </Directory> Press Ctrl+X to exit, type Y to confirm saving, then press Enter. Set the permission of the user configuration file by typing the following in your Terminal $ sudo chmod 644 /etc/apache2/users/username.conf Step 4 How to enable and configure .htaccess: .htaccess is not enabled by default, and many PHP applications rely on .htaccess for URL routing. Again open the httpd.conf file in your terminal. $ sudo nano /etc/apache2/httpd.conf Now press Ctrl+W to search in the nano editor. Search for the word override. On all instances, change AllowOverride None to AllowOverride All. Save the file. In your terminal, type the following and press Enter to restart your Apache server. $ sudo apachectl restart Step 5 Test it out: Create a test file in your Sites folder called phpinfo.php 1. Check that your Sites folder exists in your /Users/username/ directory. If not, create it. 2. Next you will need to create a phpinfo.php file $ sudo nano /Users/username/Sites/phpinfo.php In your file add the following. <?php phpinfo(); ?> Save the file and exit the editor 3. Open your internet browser and type http://localhost/~username/phpinfo.php You should now see the phpinfo page load. Congratulations! You have set up your PHP Apache server on your Mac. Part 2 setting up MySQL on your Mac's localhost Step 1 Download MySQL: You need to download and install MySQL on your Mac. 1. Go to the MySQL site, scroll down the page, and look for the version Mac OS X ver. 10.7 (x86, 64-bit), DMG Archive. Click the blue download button. This will take you to a page that asks you to log in or sign up. You don't have to sign up; instead, there is a link right below these buttons.
Click the link No thanks, just start my download. This will start the download. 2. After downloading, you will need to install all three of the components in the package. Double-click on each package to install them. mysql-5.6.14-osx10.7-x86_64.pkg – the MySQL software MySQL.prefPane – allows MySQL to be started and stopped from System Preferences MYSQLStartupItem.pkg – allows MySQL to start at boot Step 2 Start MySQL: To start MySQL via the command line $ sudo /usr/local/mysql/support-files/mysql.server start Or you can click Start MySQL Server when the dialog opens after installing the second package. Step 3 Set the MySQL root password: $ /usr/local/mysql/bin/mysqladmin -u root password 'yourpassword' That's it. You have MySQL installed and running on your localhost. Part 3 setting up phpMyAdmin on your Mac's localhost Step 1 Fix the socket error: $ sudo mkdir /var/mysql $ sudo ln -s /tmp/mysql.sock /var/mysql/mysql.sock Step 2 Download phpMyAdmin: Go to the phpMyAdmin download page and choose the file in your language. I usually download phpMyAdmin-3.5.6-english.tar.gz. Unzip the file, rename it to phpMyAdmin, and move it into your Sites folder. You may need to create the Sites folder first, so the result is /Users/username/Sites/phpMyAdmin. Step 3 Set up the phpMyAdmin config: 1. Make the config folder $ mkdir ~/Sites/phpMyAdmin/config 2. Change the permissions $ chmod o+w ~/Sites/phpMyAdmin/config Go to the new phpMyAdmin localhost URL in your browser and run the phpMyAdmin setup Step 4 Set up the phpMyAdmin settings: 1. Open your browser and go to the URL localhost/~username/phpMyAdmin/setup/ 2. Click "New server" 3. When the new page loads, click on the "Authentication" tab 4. Enter the MySQL root user and password (this was set in the previous MySQL setup tutorial) 5. Click "Save" – This will take you back to the previous screen. 6. Scroll to the bottom of the page and click "Save" to finish 7. A new file called config.inc.php is now created in the /config directory that we created previously.
Move this file from phpMyAdmin/config/ to phpMyAdmin/ 8. Delete the /config folder Step 5 Go to your newly created phpMyAdmin: 1. Make sure your MySQL is running $ sudo /usr/local/mysql/support-files/mysql.server start 2. Open your browser and go to your phpMyAdmin URL localhost/~username/phpMyAdmin/ 3. Log in using your MySQL username and password Congratulations! You have now installed phpMyAdmin to run on your Mac localhost. This article references How to install Apache and PHP on a Mac with OSX 10.8 +, with slight changes. ### Install Hyper-V Server 2012 in VMware Workstation Microsoft ® Hyper-V™ Server 2012 provides a simplified, reliable, and optimized virtualization solution, which enables improved server utilization and reduced costs. Because Hyper-V Server is a dedicated stand-alone product that contains only the Windows ® hypervisor, a Windows Server ® driver model, and virtualization components, it provides a small footprint and minimal overhead. It can easily fit into customers' existing IT environments, leveraging their existing provisioning, management, support tools, processes, and skills. Host Platform : Windows 8 64bit Virtual Platform : VMware Workstation 10 Remote Management : Windows 8 64bit (Virtual Machine) Hyper-V : Hyper-V Server 2012 (Virtual Machine) Set up VMware Workstation Open VMware Workstation and create a new VM by using the Windows Server 2012 template. We will need to make two tweaks to the VM configuration in order to (fully) support Hyper-V Server: In the Virtual Machine Settings dialog box, enable the Virtualization engine options (Virtualize Intel VT-x/EPT or AMD-V/RVI and Virtualize CPU performance counters). Open the VM's .vmx configuration file from VMware's working directory. You can find the working directory by navigating to the Options page in the Virtual Machine Settings dialog box and checking the Working directory field. Add the following three new entries to the .vmx configuration file.
VMware configuration files can be opened with any text editor. hypervisor.cpuid.v0 = "FALSE" mce.enable = "TRUE" vhv.enable = "TRUE" Note hypervisor.cpuid.v0 = "FALSE": This option tricks Windows Server 2012 into "thinking" it is not running in a virtualized instance mce.enable = "TRUE": This option enables Machine Check Exception (MCE), which lets the Windows Server 2012 VM report CPU hardware issues vhv.enable = "TRUE": This option enables nested virtualization Install Hyper-V Server 2012 The Hyper-V Server 2012 installation process is similar to the standard Windows installation procedure. Configure Hyper-V Server 2012 Hyper-V Server has no GUI and can only be configured from the command line. Once the system is installed, the first requirement is to change the Administrator password. After installation completes, a basic configuration interface opens; if you accidentally close it, enter sconfig.cmd at the prompt to reopen it. Enable Remote Desktop Remote Management Here a Windows 8 Hyper-V virtual machine is used as the remote management terminal. In "Programs and Features", add the Hyper-V management tools. Execute the following command to store the Hyper-V Server 2012 account and password on the management system: cmdkey /add:Computer-Name /user:Administrator /pass:****** Modify the Windows\System32\Drivers\etc\hosts file and add the Hyper-V Server 2012 address: 192.168.235.136 WIN-PCVB3BM0QBA Because of an issue in Windows 8 itself (Windows 7 has the same problem), connecting to the server through Hyper-V Manager at this point produces a "cannot communicate" error. You need to download the Hyper-V Remote Management Configuration Utility (HVRemote) script from Microsoft's official website and execute it with administrator privileges: cscript hvremote.wsf /anondcom:grant Now, you can connect to the server.
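The configuration entries above can also be appended from a shell; a minimal sketch (the .vmx path is a placeholder — point it at your VM's configuration file while the VM is powered off):

```shell
# Append the three nested-virtualization entries to a .vmx file,
# skipping any line that is already present (so the script is safe
# to re-run). The path below is a placeholder.
VMX=/tmp/HyperVServer2012.vmx
touch "$VMX"
for line in 'hypervisor.cpuid.v0 = "FALSE"' \
            'mce.enable = "TRUE"' \
            'vhv.enable = "TRUE"'; do
    grep -qxF "$line" "$VMX" || printf '%s\n' "$line" >> "$VMX"
done
```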
### Transparent Proxy A transparent proxy, also known as an intercepting proxy, inline proxy, or forced proxy, intercepts normal communication at the network layer without requiring any special client configuration. Clients need not be aware of the existence of the proxy. A transparent proxy is normally located between the client and the Internet, with the proxy performing some of the functions of a gateway or router. A common practical example of a transparent proxy is the behavior-management software many companies deploy today. User A and user B are unaware that the behavior-management device is acting as a transparent proxy. When user A or user B submits a request to server A or server B, the device intercepts the request and, according to its own policy, may modify the user's packets before sending the request to the server as if it were the actual requester. When the response is returned, the transparent proxy again decides, according to its settings, whether to pass the packets back to user A or B. As shown above, if the transparent proxy is configured to disallow access to server B, then user A and user B cannot get server B's data. ### Install OS X Mountain Lion in VMware Virtual Machine If your machine has a mainstream configuration, you can skip the traditional approach of installing to a real native partition and instead use the virtual machine method described in this article. This not only greatly reduces the difficulties you will run into, but also gives an excellent experience when using Mac OS X. (This tutorial requires a CPU that supports hardware virtualization technology. Current mainstream Intel Core series and AMD Athlon series CPUs support virtualization, under the names Intel VT-x and AMD-V.)
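On a Linux host you can check for these CPU virtualization flags from a shell; a minimal sketch (on Windows, use a checking tool such as CPU-Z instead):

```shell
# Check /proc/cpuinfo for hardware virtualization flags (Linux only).
# vmx = Intel VT-x, svm = AMD-V.
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    echo "hardware virtualization: supported"
else
    echo "hardware virtualization: not reported (or not a Linux host)"
fi
```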
Description of the whole process Preparatory work Installing VMware Patching VMware to support Mac OS X Installing Mac OS X in VMware Finishing touches Preparation The method used in this tutorial is a virtual machine, so the machine's CPU must support hardware virtualization technology. You can test for it with tools such as CPU-V or CPU-Z, and turn it on via the motherboard BIOS settings. Download and install VMware Workstation VMware can be downloaded from the official website or found via an ed2k network search (recommended); this is not described in detail here. For the Apple system you need a Mac OS X installation disc image; use the original image in dmg format. I recommend searching the ed2k network for it with the keyword "Mac OS X Mountain Lion dmg" or "Mac OS X Lion dmg"; this is not described in detail here. Patch VMware Workstation to support Mac OS X Here we need the tool "Workstation and Fusion Mac OS X Unlocker". After installing VMware Workstation and restarting, completely unpack the Unlocker archive and run windows/install.cmd in administrator mode. Unlocker automatically stops the VMware services, backs up the VMware program files, patches them, and restarts the VMware services, then closes its command-line window. Afterwards, VMware's new virtual machine dialog offers Mac OS X as a guest system choice: Create the Mac OS X VMware virtual machine After the Unlocker patch, start VMware Workstation and create a virtual machine normally, selecting "Apple Mac OS X" (choose the version matching the system on your Mac OS installation disc; here I use Mac OS X Mountain Lion, version 10.8 64-bit).
The creation process has the following non-default configuration: (1) In the processor options, set "Number of cores per processor" to two, which induces the VMware virtual machine to automatically enable advanced hardware virtualization technology. (2) After creation, first click "Edit virtual machine settings", delete the "Floppy" hardware option, and enable "Accelerate 3D graphics" in the Display settings. (3) In the virtual machine's Advanced settings, set the input-capture priority to "High" and check "Disable memory page trimming". With a text editor (such as Notepad), edit the virtual machine file (in my case E:\Program Files(x86)\Virtual_Machines\Mac OS X\Mac OS X.vmx) and add a line at the end of the file: mainMem.useNamedFile = "FALSE" This disables the memory image that the VMware virtual machine keeps on your hard disk, preventing busy hard-disk read/write activity while the virtual machine runs. Start the VMware virtual machine and install Mac OS X Edit the virtual machine configuration: in the "CD / DVD" option, under "Use ISO image file", choose our Mac OS X installation disc image. (In the browse-file dialog, set the file type to "All Files (*.*)" so that you can see the dmg image file.) Once confirmed, start the VMware virtual machine. On startup the virtual machine boots from the CD by default; after the Apple logo appears, you arrive at the Mac OS X installation interface. In "Mac OS X Utilities" select "Disk Utility", select the first hard drive on the left and "Erase" on the right (for "Format" select "Mac OS X Extended (Journaled)"), then click the "Erase" button to format the current drive. After formatting the hard disk partition, go back to "Mac OS X Utilities" and click "Install Mac OS X" to enter the installation process.
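Before attaching the disc image, it is worth verifying the download against a published digest; a minimal sketch (the file name and MD5 are taken from the attachments listed later in this post — substitute your own; `md5sum` is the GNU coreutils tool, and on OS X you would use `md5 -q` instead):

```shell
# Verify a file against an expected MD5 digest (case-insensitive).
# Prints "OK", "MISMATCH", or "missing" for the given file.
verify_md5() {
    f=$1
    exp=$(printf '%s' "$2" | tr 'A-F' 'a-f')
    [ -f "$f" ] || { echo "missing: $f"; return 1; }
    act=$(md5sum "$f" | awk '{print $1}')
    if [ "$act" = "$exp" ]; then echo "OK: $f"; else echo "MISMATCH: $f"; fi
}
verify_md5 "OS X 10.8 Install DVD.iso" "161B4FF7674A6A1D8F8967708FEC686E" || true
```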
After Mac OS X is installed in the VMware virtual machine, the first thing to do, of course, is to install VMware Tools. VMware Tools includes the drivers for the virtual machine's "hardware" and support for communication between the virtual machine and the host. The tools used in this article are attached below. Attachments Download OS X 10.8 Install DVD.iso Size : 5020352512 Bits MD5 : 161B4FF7674A6A1D8F8967708FEC686E SHA1 : DF6E9DFA4EB5A540BF502C9DC3993CA0D0E339D6 CRC32 : C0CB8876 unlock-all-v120.zip Size : 4329002 Bits MD5 : EF051B1DE590388A1335736A5BAFED41 SHA1 : 8108C5239CF49F1AF03752B180123C788F75A432 CRC32 : 34DFDAF4 Update : 13-09-2013 ### RAID Troubleshooting About installing and creating RAID, please refer to Install Software RAID10 on Ubuntu 12.04 LTS Server Part 1 Remove device from RAID Removing a device that is in use is not allowed; if you want to remove it, it must first be marked as failed. Likewise, if a RAID device has failed and you need to remove it, you need to mark it as failed first. 1 Remove a single RAID physical volume Assuming an exception occurs on partition sda1, we should remove it. $ sudo mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1 mdadm: set /dev/sda1 faulty in /dev/md0 mdadm: hot removed /dev/sda1 from /dev/md0 If you intend to reuse the removed device for other purposes, you must also clear its superblock; otherwise, the system will still recognize the device as part of a RAID array. $ sudo mdadm --zero-superblock /dev/sda1 2 Remove the entire hard drive If you want to remove an entire hard drive, you need to remove all of the RAID physical volumes on that drive. For example, to completely remove the first hard disk sda, mark sda1, sda2, and sda3 as failed, then remove them and pull the drive.
$ sudo mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1 mdadm: set /dev/sda1 faulty in /dev/md0 mdadm: hot removed /dev/sda1 from /dev/md0 $ sudo mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2 mdadm: set /dev/sda2 faulty in /dev/md1 mdadm: hot removed /dev/sda2 from /dev/md1 $ sudo mdadm /dev/md2 --fail /dev/sda3 --remove /dev/sda3 mdadm: set /dev/sda3 faulty in /dev/md2 mdadm: hot removed /dev/sda3 from /dev/md2 Now, if the server supports hot-swapping, you can pull this hard disk out. Part 2 Add an existing RAID physical volume To add back physical volumes that have already been RAID members, such as the sda1, sda2, and sda3 we just removed: $ sudo mdadm /dev/md0 --add /dev/sda1 mdadm: added /dev/sda1 $ sudo mdadm /dev/md1 --add /dev/sda2 mdadm: added /dev/sda2 $ sudo mdadm /dev/md2 --add /dev/sda3 mdadm: added /dev/sda3 Part 3 Replace the failed hard drive with a new one 1 Remove the bad hard drive $ sudo mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1 $ sudo mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2 $ sudo mdadm /dev/md2 --fail /dev/sda3 --remove /dev/sda3 After removal, check the RAID status: $ cat /proc/mdstat Now, if the server supports hot-swapping, you can pull this hard disk out. 2 Insert the new hard drive Although the first disk has been removed from the RAID, the system can still boot, because sdb has become the first hard disk: what the grub configuration calls hd0 is now actually sdb. If you do this experiment on a real server that supports hot-swapping, there is no need to restart: just pull out the old disk and insert the new one. Because I am doing the experiment in VMware, which does not support hot-swapping, the machine must be shut down to add a new hard drive.
$ sudo poweroff After shutdown, in VMware, first add the new disk and then delete the original bad first hard drive. If you remove the old hard drive first and then add the new one, VMware will attach the new drive as SCSI0:0, which Linux sees as sda; because the new drive has no grub on it yet, the system would then fail to start. Boot after adding the new hard disk; once the system starts, view the existing disks: $ sudo fdisk -l Disk /dev/sda: 8589 MB, 8589934592 bytes 255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000a0d0e Device Boot Start End Blocks Id System /dev/sda1 * 2048 98303 48128 fd Linux raid autodetect /dev/sda2 98304 1075199 488448 fd Linux raid autodetect /dev/sda3 1075200 16775167 7849984 fd Linux raid autodetect Disk /dev/sdb: 8589 MB, 8589934592 bytes 255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000ef169 Device Boot Start End Blocks Id System /dev/sdb1 2048 98303 48128 fd Linux raid autodetect /dev/sdb2 98304 1075199 488448 fd Linux raid autodetect /dev/sdb3 1075200 16775167 7849984 fd Linux raid autodetect Disk /dev/sdd: 8589 MB, 8589934592 bytes 255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/sdd doesn't contain a valid partition table Disk /dev/sdc: 8589 MB, 8589934592 bytes 255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk
identifier: 0x0002acaf Device Boot Start End Blocks Id System /dev/sdc1 2048 98303 48128 fd Linux raid autodetect /dev/sdc2 98304 1075199 488448 fd Linux raid autodetect /dev/sdc3 1075200 16775167 7849984 fd Linux raid autodetect Disk /dev/md1: 999 MB, 999292928 bytes 2 heads, 4 sectors/track, 243968 cylinders, total 1951744 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 524288 bytes / 1048576 bytes Disk identifier: 0x00000000 Disk /dev/md1 doesn't contain a valid partition table Disk /dev/md0: 49 MB, 49217536 bytes 2 heads, 4 sectors/track, 12016 cylinders, total 96128 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table Disk /dev/md2: 16.1 GB, 16067330048 bytes 2 heads, 4 sectors/track, 3922688 cylinders, total 31381504 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 524288 bytes / 1048576 bytes Disk identifier: 0x00000000 Disk /dev/md2 doesn't contain a valid partition table You should see that the original three hard drives have moved forward one position: sdb has become sda, and the new hard drive is sdd. 3 Partition the new hard disk Copy the partition table from an existing hard drive to the new one. $ sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdd 4 Adding a new RAID partition We check the details of md1 before adding a new RAID partition. $ sudo mdadm --detail /dev/md1 Add sdd2 to md1 $ sudo mdadm /dev/md1 --add /dev/sdd2 After the command executes, mdadm rebuilds md1; the reconstruction takes a varying amount of time, and when it finishes the state should look like this.
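Rebuild progress can be watched in /proc/mdstat; a minimal sketch of extracting the recovery percentage (parsing a canned sample here, with illustrative figures — on a live system read /proc/mdstat itself):

```shell
# Extract the recovery/resync progress figure from mdstat-style output.
# "sample" stands in for the real contents of /proc/mdstat.
sample='md1 : active raid10 sdd2[4] sdb2[1] sdc2[2] sda2[0]
      975872 blocks 512K chunks 2 near-copies [4/3] [UUU_]
      [===>.............]  recovery = 18.3% (178944/975872) finish=0.5min'
printf '%s\n' "$sample" | grep -o 'recovery = [0-9.]*%'
```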
Then we can rebuild md0 and md2 $ sudo mdadm /dev/md0 --add /dev/sdd1 $ sudo mdadm /dev/md2 --add /dev/sdd3 5 Set up grub Finally, you need to set up grub, otherwise the system will not boot. $ sudo apt-get install grub $ sudo grub grub> root (hd3,0) grub> setup (hd3) grub> quit If you do this experiment on a real server, the new hard drive is sda and grub should be installed on hd0. The new hard drive has now been added to the RAID, replacing the original faulty drive. ### Setting Up Email Alerts for Network Monitoring with Nagios About installing and configuring Nagios, please refer to Nagios - Server Monitoring Scheme Setting Up Email Alerts What good is a network monitoring tool if you have to sit at a monitor, constantly watching and waiting for trouble to occur? What you need is a monitoring system that will alert you when something is amiss. It is possible to set Nagios up for this, and it doesn't take too much time and effort to pull off. I will say that you must have a working email system up and running. By default, Nagios sends warning messages through the local SMTP service: the mail commands defined in commands.cfg use the machine's mail command, which requires the local SMTP service to be running; to be safe, configure the firewall to reject connections from other machines to this host's port 25. In our case there is already a mail server inside the network, and we are asked to use it instead of turning on the local SMTP service; this requires redefining the mail commands with the third-party tool sendEmail. SendEmail is a lightweight, command line SMTP email client. If you have the need to send email from a command line, this free program is perfect: simple to use and feature rich. It was designed to be used in bash scripts, batch files, Perl programs and web sites, but is quite adaptable and will likely meet your requirements. SendEmail is written in Perl and is unique in that it requires NO MODULES.
It has an intuitive and flexible set of command-line options, making it very easy to learn and use. sendEmail Official Website caspian.dotconf.net/menu/Software/SendEmail Install sendEmail $ sudo apt-get install sendemail By default, sendEmail is installed as /usr/bin/sendEmail. To send an E-mail using sendEmail, for example: $ sendemail -f sender@email.com -t receiver@email.com -s smtp.mail.com -u "subject" -xu sender@email.com -xp password -m contents or /usr/bin/sendEmail -f sender@email.com -t receiver@email.com -s smtp.mail.com -u "subject" -xu sender@email.com -xp password -m contents Parameters explained -f sender's mailbox -t recipient's mailbox -s SMTP server's domain name or IP -u subject of the message -xu SMTP authentication user name -xp SMTP authentication password -m message content The configuration file you will be using is /etc/nagios3/conf.d/contacts_nagios2.cfg. Although we are working with Nagios3, the "2" in the configuration file name is correct. Within this file you will find a section that looks like: define contact{ contact_name USERNAME service_notification_period 24x7 host_notification_period 24x7 service_notification_options w,u,c,r,f host_notification_options d,u,r,f service_notification_commands notify-service-by-email host_notification_commands notify-host-by-email email sender@email.com } The USERNAME you see above will be the text you need to configure for your alerts. If you need more than one email address to be alerted, you have to add a definition for each user. Most of the definitions above will be pretty obvious. The notification flags are defined as such: w = notify on warning states c = critical states r = recovery f = start/stop of flapping d = notify on down states u = notify on unreachable states s = notify when scheduled downtime starts or ends You can pick and choose what states you want to be alerted for.
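As an illustration, a contact that should only hear about hard failures and recoveries might narrow its flags like this (a sketch; the `oncall` user name and address are hypothetical):

```
define contact{
        contact_name                    oncall
        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    c,r
        host_notification_options      d,r
        service_notification_commands   notify-service-by-email
        host_notification_commands      notify-host-by-email
        email                           oncall@email.com
}
```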
Once you have edited this file, save it, close it, and restart Nagios with the command: $ sudo /etc/init.d/nagios3 restart You are now ready to move on. The next section will be to define a contact group. Contact groups allow you to group people together so it is easier to alert specific people to certain events. This way you can have web-admins, file-server-admins, firewall-admins, and so on. Each group would have a specific user (or users) associated with it who would be alerted if a problem arises. Go back to the same file you were just editing and look for the section labeled CONTACT GROUPS. In this section you will define a group like so: define contactgroup { contactgroup_name GROUPNAME alias GROUP ALIAS members USERNAME1, USERNAME2 } The GROUPNAME, GROUP ALIAS, and USERNAME fields are user specific. Once you have defined all of your groups, save that file and close it. Now you have to attach groups to services so those groups will be alerted when something is wrong with their specific service. To do this open up the file /etc/nagios3/conf.d/services_nagios2.cfg. In this file you will find a few pre-defined services (HTTP, SSH, and PING). Let's say you created a contact group called Web-Admins and want to associate that group with all HTTP services. To do this look for the section: define service { hostgroup_name http-servers service_description HTTP check_command check_http use generic-service notification_interval 0 } To this section add the following line: contact_groups Web-Admins Because Nagios sends warning messages with the mail commands defined in commands.cfg, and we want it to use sendEmail instead, we now modify the notify-host-by-email command definition, like this.
############################################################################### # COMMANDS.CFG - SAMPLE COMMAND DEFINITIONS FOR NAGIOS ############################################################################### ################################################################################ # NOTIFICATION COMMANDS ################################################################################ # 'notify-host-by-email' command definition define command { command_name notify-host-by-email command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /usr/bin/sendEmail -f sender@email.com -t $CONTACTEMAIL$ -s smtp.mail.com -xu sender@email.com -xp password -u "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" -m "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" } [...] Save the file and close it. Now restart Nagios again, and your monitoring system will begin sending any HTTP errors to everyone associated with the Web-Admins group. ### Nagios – Server Monitoring Scheme Nagios is an open source computer system monitoring, network monitoring and infrastructure monitoring software application. Nagios offers monitoring and alerting services for servers, switches, applications, and services. It alerts users when things go wrong and alerts them a second time when the problem has been resolved. Nagios Official Website www.nagios.org Install Nagios $ sudo apt-get install apache2 $ sudo apt-get install nagios3 nagios-plugins nagios-images The configuration files are in the /etc/nagios3/ directory. Configure Nagios Permission settings By default, the user nagiosadmin has all permissions; all you need to do is set a password for that user. For safety, we replace the user name with ngadmin and set a password.
$ sed 's/nagiosadmin/ngadmin/g' /etc/nagios3/cgi.cfg > /tmp/cgi.cfg $ sudo mv /tmp/cgi.cfg /etc/nagios3/cgi.cfg $ sudo htpasswd -c /etc/nagios3/htpasswd.users ngadmin Browse to http://yourdomain/nagios3/ to view the Nagios monitoring interface. Enable external commands By default, Nagios disables external commands. Modify the nagios.cfg file with sudo vim /etc/nagios3/nagios.cfg and set check_external_commands=1 $ sudo /etc/init.d/nagios3 stop $ sudo dpkg-statoverride --update --add nagios www-data 2710 /var/lib/nagios3/rw $ sudo dpkg-statoverride --update --add nagios nagios 751 /var/lib/nagios3 $ sudo /etc/init.d/nagios3 start Note: if you modify permissions with plain chown/chmod instead of dpkg-statoverride, the permissions will be overwritten when the package is upgraded (apt-get upgrade). If Nagios shows "It appears as though you do not have permission to view information for any of the hosts you requested…", you have to configure cgi.cfg, which by default is located at /etc/nagios3/cgi.cfg. The default user with access permission is nagiosadmin; change all of these parameters to the user name you set in order to permit viewing, or alternatively authorize an additional user by appending it after nagiosadmin with a comma. Restart the nagios service after cgi.cfg has been modified. $ sudo service nagios restart ### Apache Performance Tuning Part 1 Choosing an MPM MPMs must be chosen during configuration, and compiled into the server. Compilers are capable of optimizing a lot of functions if threads are used, but only if they know that threads are being used. event - An experimental variant of the standard worker MPM prefork - Implements a non-threaded, pre-forking web server worker - Multi-Processing Module implementing a hybrid multi-threaded multi-process web server HostnameLookups and other DNS considerations Prior to Apache 1.3, HostnameLookups defaulted to On.
This adds latency to every request because it requires a DNS lookup to complete before the request is finished. In Apache 1.3 this setting defaults to Off. If you need to have addresses in your log files resolved to hostnames, use the logresolve program that comes with Apache, or one of the numerous log reporting packages which are available. It is recommended that you do this sort of postprocessing of your log files on some machine other than the production web server machine, in order that this activity not adversely affect server performance. If you use any Allow from domain or Deny from domain directives (i.e., using a hostname, or a domain name, rather than an IP address) then you will pay for two DNS lookups (a reverse, followed by a forward lookup to make sure that the reverse is not being spoofed). For best performance, therefore, use IP addresses, rather than names, when using these directives, if possible. Note that it's possible to scope the directives, such as within a <Location /server-status> section. In this case the DNS lookups are only performed on requests matching the criteria. Here's an example which disables lookups except for .html and .cgi files: HostnameLookups off <Files ~ "\.(html|cgi)$"> HostnameLookups on </Files> But even still, if you just need DNS names in some CGIs you could consider doing the gethostbyname call in the specific CGIs that need it. Part 2 Optimization MaxClients The MaxClients directive sets the limit on the number of simultaneous requests that Apache will serve; connection attempts over the limit are normally queued, up to a number based on the ListenBacklog directive. For the threaded worker MPM, MaxClients restricts the total number of threads that will be available to serve clients: it must be large enough for your peak load, but small enough to keep the server within physical memory. The default worker configuration is: <IfModule mpm_worker_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> Modify it to: <IfModule mpm_worker_module> StartServers 10 MaxClients 256 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> To raise MaxClients above 256, you also need to raise ServerLimit: <IfModule mpm_worker_module> StartServers 10 ServerLimit 512 MaxClients 512 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> To determine whether you need to increase the MaxClients value, cat /var/log/apache2/error.log; if you see a line like the following, you need to increase it. [error] server reached MaxClients setting, consider raising the MaxClients setting Part 3 Optimization KeepAlive KeepAliveTimeout sets the number of seconds Apache will wait for a subsequent request before closing the connection. Once a request has been received, the timeout value specified by the Timeout directive applies. Setting KeepAliveTimeout to a high value may cause performance problems in heavily loaded servers. The higher the timeout, the more server processes will be kept occupied waiting on connections with idle clients. In a name-based virtual host context, the value of the first defined virtual host (the default host) in a set of NameVirtualHost will be used. The other values will be ignored. Part 4 Enable compression $ sudo a2enmod deflate $ sudo /etc/init.d/apache2 force-reload In the /etc/apache2/mods-enabled/deflate.conf file:
<IfModule mod_deflate.c> AddOutputFilterByType DEFLATE text/html text/plain text/xml </IfModule> You can modify it to: <IfModule mod_deflate.c> SetOutputFilter DEFLATE SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|bz2|sit|rar)$ \ no-gzip dont-vary SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary </IfModule> Part 5 Disable unnecessary logs Comment out the relevant log lines in the configuration file. Part 6 mod_cache mod_cache implements an RFC 2616 compliant HTTP content cache that can be used to cache either local or proxied content. mod_cache requires the services of one or more storage management modules. Section A mod_disk_cache $ sudo a2enmod disk_cache Modify the /etc/apache2/sites-available/yourdomain file, adding the following before </VirtualHost>: <IfModule mod_disk_cache.c> CacheEnable disk / CacheRoot /var/www/yourdomain/cache CacheDefaultExpire 7200 CacheMaxExpire 604800 </IfModule> Then, create the /var/www/yourdomain/cache/ directory and set its owner and group to www-data.
$ sudo mkdir /var/www/yourdomain/cache $ sudo chown www-data.www-data /var/www/yourdomain/cache Restart Apache Service $ sudo /etc/init.d/apache2 restart Section B mod_mem_cache <IfModule mod_mem_cache.c> CacheEnable mem / CacheDefaultExpire 7200 CacheMaxExpire 604800 </IfModule> The not cached content CacheDisable /secure Part 7 Use Reverse Proxy Apache Stress Test Command ab, The syntax is: ab [options] [http[s]://]hostname[:port]/path $ ab -n 20000 -c 200 http://localhost/ Before Optimization This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking localhost (be patient) Completed 2000 requests Completed 4000 requests Completed 6000 requests Completed 8000 requests Completed 10000 requests Completed 12000 requests Completed 14000 requests Completed 16000 requests Completed 18000 requests Completed 20000 requests Finished 20000 requests Server Software: Apache/2.2.22 Server Hostname: localhost Server Port: 80 Document Path: / Document Length: 177 bytes Concurrency Level: 200 Time taken for tests: 12.953 seconds Complete requests: 20000 Failed requests: 0 Write errors: 0 Total transferred: 9060000 bytes HTML transferred: 3540000 bytes Requests per second: 1544.04 [#/sec] (mean) Time per request: 129.531 [ms] (mean) Time per request: 0.648 [ms] (mean, across all concurrent requests) Transfer rate: 683.06 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 3 21.4 1 1005 Processing: 3 62 503.3 17 12940 Waiting: 3 61 503.4 16 12940 Total: 12 65 504.1 20 12946 Percentage of the requests served within a certain time (ms) 50% 20 66% 22 75% 25 80% 27 90% 34 95% 41 98% 51 99% 1027 100% 12946 (longest request) After Optimization This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software 
Foundation, http://www.apache.org/ Benchmarking localhost (be patient) Completed 2000 requests Completed 4000 requests Completed 6000 requests Completed 8000 requests Completed 10000 requests Completed 12000 requests Completed 14000 requests Completed 16000 requests Completed 18000 requests Completed 20000 requests Finished 20000 requests Server Software: Apache/2.2.22 Server Hostname: localhost Server Port: 80 Document Path: / Document Length: 177 bytes Concurrency Level: 200 Time taken for tests: 3.550 seconds Complete requests: 20000 Failed requests: 0 Write errors: 0 Total transferred: 9060000 bytes HTML transferred: 3540000 bytes Requests per second: 5633.26 [#/sec] (mean) Time per request: 35.503 [ms] (mean) Time per request: 0.178 [ms] (mean, across all concurrent requests) Transfer rate: 2492.06 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 2 38.2 1 1010 Processing: 3 27 150.9 17 3070 Waiting: 3 27 150.9 17 3070 Total: 8 29 155.9 17 3074 Percentage of the requests served within a certain time (ms) 50% 17 66% 18 75% 19 80% 21 90% 26 95% 30 98% 37 99% 41 100% 3074 (longest request) ### Reverse Proxy In computer networks, a reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client as though they originated from the server itself (or servers themselves).While a forward proxy acts as an intermediary for its (usually nearby) associated client(s) and returns to them resources accessible on the Internet, a reverse proxy acts as an intermediary for its (usually nearby) associated server(s) and only returns resources provided by those associated server(s). 
The role of a reverse proxy server Protect and hide the origin server User A believes it is accessing server B directly rather than the proxy Z, but in practice the reverse proxy server Z accepts user A's request, obtains the resources user A needs from the origin server B, and then sends them back to user A. Because of the firewall, only the proxy server Z is allowed to access the origin server B. In this environment the firewall and the reverse proxy jointly protect the origin server B, yet user A is unaware of it. Load balancing When there is more than one reverse proxy server, we can even group them into a cluster: when more users access the resource server B, different proxy servers Z(X) answer different users and send each of them the resources they requested. Of course, a reverse proxy server, just like a forward proxy server, can also act as a cache: it can cache resources from the origin server B rather than requesting the original data from server B every time, especially static data such as images and documents. If such a reverse proxy server X and the user are on the same network, the user will get high-quality speed when accessing the reverse proxy server X. This is the core of CDN technology; we do not explain CDN here, so its most critical core technology, intelligent DNS, is omitted — CDN is in fact an application of exactly this reverse-proxy principle. In contrast to a forward proxy, a reverse proxy looks like the original server to the client, and the client does not require any special settings. The client sends ordinary requests for content in the reverse proxy's namespace (name-space); the reverse proxy then decides where (to which origin server) to forward the request, and returns the received content to the client, just as if the content were its own.
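The protection and load-balancing roles described above can be sketched as a minimal Nginx reverse proxy configuration. This is an illustrative fragment, not the author's setup: the upstream addresses, the server name, and the cluster name are hypothetical stand-ins for the proxy server Z and the origin servers B.

```nginx
# Hypothetical cluster of origin servers standing in for "server B".
upstream backend_cluster {
    server 192.168.1.10:80 weight=5;
    server 192.168.1.11:80 weight=5;
}

server {
    listen 80;
    server_name example.com;

    location / {
        # The reverse proxy forwards the client's request to one of the
        # origin servers and returns the response as if it were its own.
        proxy_pass http://backend_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Clients talk only to example.com; which origin server actually answered is invisible to them, which is exactly the "protect and hide" role above.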
Basically, most software that can act as a forward proxy can also act as a reverse proxy. The most popular open-source software is Squid: it can serve as a forward proxy, and many people also use it as a front-end reverse proxy server. On the Windows platform, MS ISA can likewise be used as a forward proxy. For web services, the most important reverse proxy practice in recent years is the very popular Nginx. Some people on the Internet say that Nginx cannot act as a forward proxy, but that is wrong: Nginx can also be used as a forward proxy, although relatively few people use it that way. ### Forward Proxy Proxy service technology is a very old technology, dating back to the early days of the Internet. The general way to implement it is to install proxy service software on a server so that it becomes a proxy server. Commonly used proxy techniques divide into forward proxy, reverse proxy, and transparent proxy. This article explains the basic principles and typical scope of these three kinds of proxies, to facilitate a deeper understanding of proxy service technology. First, the forward proxy (Forward Proxy). Under normal circumstances, if not otherwise specified, "proxy" by default means forward proxy. The concept of a forward proxy is as follows: a forward proxy is a proxy server [Z] that sits between the client [User A] and the origin server [Server B]. In order to obtain content from the origin server, user A sends a request to the proxy server Z and specifies the target (server B); the proxy server Z then forwards the request to server B and returns the obtained content to the client. The client must apply some special settings to use a forward proxy. From this concept we can see that a forward proxy is a server that accesses the target server [Server B] on behalf of the accessing side [User A] — this is the significance of a forward proxy.
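The "special settings" a forward proxy requires on the client side can be sketched for command-line tools as follows; the proxy address 10.0.0.5:3128 is a hypothetical stand-in for proxy server Z.

```shell
# Point command-line clients at the forward proxy Z.
# (10.0.0.5:3128 is a made-up address -- use your own proxy's IP and port.)
export http_proxy="http://10.0.0.5:3128"
export https_proxy="http://10.0.0.5:3128"

# curl now sends its requests for server B through the proxy; the same
# can also be done per-command with the -x flag:
# curl -x http://10.0.0.5:3128 http://example.com/
```

Browsers expose the same two pieces of information (proxy IP and port) in their connection settings.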
So why have a proxy server access server B on behalf of [User A] at all? This brings us to the reasons for using a proxy server. A forward proxy mainly serves the following purposes: Access an otherwise inaccessible server B Setting aside complex network routing, assume the routers from left to right are named R1 and R2. Initially, user A must reach server B through routing nodes R1 and R2; if router R1 or R2 fails, user A can no longer access server B. However, if user A lets the proxy server Z access server B on its behalf, and Z does not sit behind the R1 or R2 nodes but reaches server B through other routing nodes, then user A can still get server B's data. A real-world example is "crossing the wall"; but since VPN technology became widely used, "crossing the wall" no longer relies only on traditional forward proxy technology — some of it also uses VPN technology. Accelerated access to server B This argument is less common nowadays, mainly because of the rapid growth of bandwidth. In the early days of forward proxies, many people used them for speed. Suppose user A reaches server B through routers R1 and R2, and the link from R1 to R2 is a low-bandwidth link, while the links from user A to the proxy server Z and from Z to server B are both high-bandwidth links. Then the proxy can obviously speed up access to server B. Cache role Caching technology and proxy service technology are closely linked (not only forward proxies — reverse proxies also use caching, as shown above). A caching proxy server accelerates service requests by retrieving content saved from a previous request made by the same client or even other clients. Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and costs, while significantly increasing performance.
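The cache role described above is what a caching proxy such as Squid implements. A minimal sketch of the relevant squid.conf directives — the port, memory size, and disk store path/sizes here are illustrative assumptions, not recommended values:

```
# Illustrative squid.conf fragment -- sizes and paths are assumptions.
http_port 3128
cache_mem 256 MB
# ufs disk store: 100 MB, 16 first-level and 256 second-level directories
cache_dir ufs /var/spool/squid 100 16 256
```

With this in place, a second request for the same static resource is served from Squid's local store instead of crossing the upstream link again.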
Most ISPs and large businesses have a caching proxy. Client access authorization This use is still common, and increasingly so; for example, some companies use ISA SERVER as a forward proxy server to control which users are authorized to access the Internet, with a firewall acting as a gateway to filter access to the external network. Assume users A and B are both configured to use the proxy server Z; user A is allowed to access the Internet while user B is not (the restriction is enforced on the proxy server Z). User A, being authorized, can access server B through the proxy server, while user B's packets are discarded because Z has not authorized that user. Hide the visitor's whereabouts We can see that server B does not know which user is actually accessing it, because the proxy server Z interacts directly with server B on user A's behalf. If the proxy server Z is fully (or partly) controlled by user A, the jargon term for such a machine is a "chicken" (a zombie host). In summary: with a forward proxy, in order to obtain content from the origin server, the client sends a request to the proxy and specifies the destination (origin server); the proxy server then forwards the request and returns the obtained content to the client. The client must configure the forward proxy server, which of course presupposes knowing the forward proxy server's IP address as well as the proxy's port. ### Install CentOS 6.4 in Text Mode CentOS ("Community Enterprise Operating System") is a Linux distribution which attempts to provide a free enterprise class computing platform which has 100% binary compatibility with its upstream source, Red Hat Enterprise Linux (RHEL). As of version 6.4, it officially supports x86 and x86-64 architecture with Physical Address Extension, while a beta is expected to be available for PowerPC. The first CentOS was based upon RHEL version 2.1AS, and was numbered as CentOS version 2.
Official Website www.centos.org Installation Process Install VMware Tools If you installed CentOS in a VMware virtual machine, install VMware Tools. $ su # mount /dev/cdrom /mnt/ # cp /mnt/VMwareTools-9.6.1-1378637.tar.gz /tmp/ # cd /tmp/ # tar -xvzf VMwareTools-9.6.1-1378637.tar.gz # cd vmware-tools-distrib/ # chmod 777 vmware-install.pl # ./vmware-install.pl After install # umount /mnt/ Issues If you got the error .vmware-install.pl: /usr/bin/perl: bad interpreter: No such file or directory, it's complaining it can't find /usr/bin/perl. You need to run: # yum install perl If you got the error Setup is unable to find the "ifconfig" program on your machine. Please make sure it is installed. Do you want to specify the location of this program by hand?[yes], run the following command to install the ifconfig program: # yum install net-tools Log in with the root account, and create a new user. # useradd username # passwd username # echo 'username ALL=(ALL) ALL' >> /etc/sudoers Open SSH Service # /etc/init.d/sshd start If you want to allow external network connections to the server, stop iptables # /etc/init.d/iptables stop Check whether the SSH run level is set to start at boot $ sudo chkconfig --list sshd $ sudo chkconfig --level 2345 sshd on $ sudo service sshd restart Update $ sudo yum update Setup network on CentOS 7 $ sudo vi /etc/sysconfig/network-scripts/ifcfg-eno16777984 Edit file: TYPE=Ethernet BOOTPROTO=static # Default: none DEFROUTE=yes PEERDNS=yes PEERROUTES=yes IPV4_FAILURE_FATAL=no IPV6INIT=yes IPV6_AUTOCONF=yes IPV6_DEFROUTE=yes IPV6_PEERDNS=yes IPV6_PEERROUTES=yes IPV6_FAILURE_FATAL=no NAME=eno16777984 UUID=7f1aff2d-48fa-4436-9497-34d38ebc9360 DEVICE=eno16777984 ONBOOT=no # Default: none IPADDR=192.168.1.100 # Added NETMASK=255.255.255.0 # Added GATEWAY=192.168.1.1 # Added DNS1=8.8.8.8 # Added DNS2=8.8.4.4 # Added HOSTNAME=centos # Added Restart network: $ sudo service network restart Check network status: $ ip address ### CSS ::selection attributes change select the background color
of the text on the page Summary The ::selection CSS pseudo-element applies rules to the portion of a document that has been highlighted (e.g., selected with the mouse or another pointing device) by the user. Only a small subset of CSS properties can be used in a rule using ::selection in its selector: color, background, background-color and text-shadow. Note that, in particular, background-image is ignored, like any other property. text-shadow in ::selection is supported by Chrome and Safari and will be supported in Firefox 17 and above. Though this pseudo-element was in drafts of CSS Selectors Level 3, it was removed during the Candidate Recommendation phase, as it appeared that its behavior was under-specified, especially with nested elements, and interoperability wasn't achieved (based on discussion in the W3C Style mailing list). The ::selection pseudo-element currently isn't in any CSS module on the standard track. It should not be used in production environments. Examples Gecko is the only engine requiring the prefix. Due to the fact that the CSS parsing rules require dropping the whole rule when encountering an invalid pseudo-element, two separate rules must be written: ::-moz-selection, ::selection {...}. The rule would be dropped on non-Gecko browsers as ::-moz-selection is invalid on them. /* draw any selected text yellow on red background */ ::-moz-selection { color: gold; background: red; } ::selection { color: gold; background: red; } /* draw selected text in a paragraph white on black */ p::-moz-selection { color: white; background: black; } p::selection { color: white; background: black; } Specifications The ::selection CSS pseudo-element was drafted for CSS Selectors Level 3 but removed before it reached the recommendation status. Despite this, it's implemented in some browsers, which will probably retain experimental support for it. Right now, the ::selection CSS pseudo-element is not in any specification on the standard track. 
### Solve Cannot modify header information error in WordPress Some functions modifying the HTTP header are: header / header_remove session_start / session_regenerate_id setcookie / setrawcookie Output can be: Unintentional: Whitespace before <?php or after ?> UTF-8 Byte Order Mark Previous error messages or notices Intentional: print, echo and other functions producing output (like var_dump) Raw <html> areas before <?php code. Why does it happen? To understand why headers must be sent before output it's necessary to look at a typical HTTP response. PHP scripts mainly generate HTML content, but also pass a set of HTTP/CGI headers to the webserver: HTTP/1.1 200 OK Powered-By: PHP/5.3.7 Vary: Accept-Encoding Content-Type: text/html; charset=utf-8 <html><head><title>PHP page output page</title></head> <body><h1>Content</h1> <p>Some more output follows...</p> and <a href="/"> <img src=about:note> ... The page/output always follows the headers. PHP is required to pass the headers to the webserver first. It can only do that once. And after the double linebreak it can't ever append to them again. When PHP receives the first output (print, echo, <html>) it will "flush" the collected headers. Afterwards it can send all the output bits it wants. But sending further headers is impossible from then on. If you have a functions.php file in your current theme directory, add the following to it: // allow redirection, even if your theme starts to send output to the browser add_action('init', 'clean_output_buffer'); function clean_output_buffer() { ob_start(); } ### Using Google API to add QR Code for WordPress Web Page As smartphones become increasingly popular, website support for mobile phones is becoming more important. A QR code is arguably the simplest and most convenient way to deliver a message to a mobile phone, especially a web URL.
For example, for https://xuri.me/, the generated QR code image URL is https://chart.googleapis.com/chart?chs=220x220&cht=qr&chl=https://xuri.me&choe=UTF-8 https://chart.googleapis.com/chart? is the QR code API address. cht=qr indicates the type of chart (QR code). chs=220x220 sets the generated QR code image size. choe=UTF-8 sets the encoding of the content; the default is UTF-8. chld=L|2 sets the error correction level and the blank border width. Error correction levels from low to high are L, M, Q, H; the higher the level, the lower the error rate, and the more rows and columns the QR code has (the Google API automatically selects an appropriate QR code size based on the content length; see the Google API documentation for details). The border width is optional. chl=https://xuri.me/ is the content of the QR code. Adding a QR code to a web page is very simple: just use JavaScript's window.location.href to get the URL and append it right after chl=. The code is as follows: document.write("<img src=\"https://chart.googleapis.com/chart?cht=qr&chs=220x220&choe=UTF8&chld=L|2&chl="+encodeURI(window.location.href)+"\" width=\"220\" height=\"220\"/></img>"); To add WordPress support, you only need to create a text sidebar widget and paste the code above into it. ### IBM eServer xSeries 335/336 Server Create and Delete RAID Disk Array Features and specifications Processor Intel Xeon Processor up to 3.2GHz/533MHz front-side bus Memory 512MB or 1GB/8GB PC2100 ECC DDR Hard disk drive (HDD) Ultra320 SCSI hard disk drives Networking Dual integrated 10/100/1000 Ethernet Video ATI Radeon 7000-M video controller with 16 MB SDRAM video memory on the system board. Power supply 332W 1/1 or 411W 1/1 RAID support Integrated RAID-1 (mirroring) Configuration Rack/1U Using the LSI Logic Configuration Utility program The LSI Logic Configuration Utility program is part of the BIOS code. You can use it to: v Perform a low-level format of a SCSI hard disk drive.
v Set a SCSI device scan order v Set a SCSI ID for a controller. Note: The integrated SCSI controller with RAID capabilities in the server supports only RAID level-1. Installing an optional RAID adapter provides additional RAID levels. See “Using ServeRAID Manager” for information about configuring the server for RAID operation. If you install a RAID adapter in the server, use the configuration method supplied with the RAID adapter to view or change SCSI settings for attached devices. Complete the following steps to start the LSI Logic Configuration Utility program: Turn on the server. When the <<< Press to start LSI Logic Configuration Utility >>> prompt appears, press Ctrl+C. Note: If an administrator password has been set, you are prompted to type the password to start the LSI Logic Configuration Utility program. Use the arrow keys to select a controller (channel) from the list of adapters; then, press Enter. Follow the instructions on the resulting screen to change the settings of the selected items; then, press Enter. The Device Properties and Mirroring Properties choices produce additional screens of parameters to review or change. In the Main Menu, use the arrow keys to select RAID Properties, then press Enter. When the Array Disk column shows 'NO', press + / - to change 'NO' to 'YES'. When prompted about the remaining data on the array, press F3 to keep it or press the Del key to erase it. Select Save Changes, then the Exit menu, to save. Press Esc to return, select Discard Changes, then the Exit menu, and restart the server. Delete RAID Disk Array Press F2 or the Tab key to select Delete Array. ### Install NI Circuit Design Suite 12 on Windows 8 National Instruments Corporation, or NI, is an American company with international operations. Headquartered in Austin, Texas, it is a producer of automated test equipment and virtual instrumentation software. Common applications include data acquisition, instrument control and machine vision.
Official Website www.ni.com NI Multisim (formerly MultiSIM) is an electronic schematic capture and simulation program which is part of a suite of circuit design programs, along with NI Ultiboard. Multisim is one of the few circuit design programs to employ the original Berkeley SPICE based software simulation. Multisim was originally created by a company named Electronics Workbench, which is now a division of National Instruments. Multisim includes microcontroller simulation (formerly known as MultiMCU), as well as integrated import and export features to the Printed Circuit Board layout software in the suite, NI Ultiboard. Multisim is widely used in academia and industry for circuits education, electronic schematic design and SPICE simulation. Download NI.Circuit.Design.Suite.12.0.keygen.rar ( Powered by Baidu ) Download Key: xdj6 NI_Circuit_Design_Suite_12.0.zip ( Powered by Baidu ) Download Key: 3gr6 Warning These downloads are for learning and exchange only, not for commercial use; if you like the software, please support it by buying a genuine copy, and delete the download within 24 hours. ### Disable the Shutdown Event Tracker in Windows Server 2003 R2 Some IT departments are under governmental and/or industry regulations that require them to account for all system downtime. Other IT departments may be mandated by their service-level agreements (SLAs) to provide documentation for any server restarts or shutdowns. In Microsoft Windows Server 2003 R2, we have the Shutdown Event Tracker to assist us in not only the previously mentioned scenarios, but also for any shutdown-related troubleshooting we may be called to undertake. Click "Start → Run"; in the Run dialog box that pops up, enter "gpedit.msc" to open the Group Policy window. Select "Computer Configuration → Administrative Templates → System", double-click "Display Shutdown Event Tracker", select "Disabled" in the pop-up window, and finally click the "Apply → OK" buttons.
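The same policy change can also be captured in a registry file for unattended rollout. Note that the key and value names below are an assumption about where the "Display Shutdown Event Tracker" policy is commonly reported to live, so verify them on a test machine before importing:

```
Windows Registry Editor Version 5.00

; Assumed equivalent of setting "Display Shutdown Event Tracker" to Disabled.
; Key and value names are an assumption -- verify before importing.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Reliability]
"ShutdownReasonOn"=dword:00000000
"ShutdownReasonUI"=dword:00000000
```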
### Install Node.js on Windows Node.js is a software platform that is used to build scalable network (especially server-side) applications. Node.js utilizes JavaScript as its scripting language, and achieves high throughput via non-blocking I/O and a single-threaded event loop. Node.js contains a built-in HTTP server library, making it possible to run a web server without the use of external software, such as Apache or Lighttpd, and allowing more control of how the web server works. The ability to create an entire web application in JavaScript, both server-side and client-side, is particularly attractive to web developers. If you want to install the latest version just follow these simple steps and you should be good to go. Go to the Node.js home page nodejs.org Click install to download the .msi installer package Run it and follow the instructions; you now have NPM (node package manager) and Node.js installed Restart your computer to get everything working in your command-line interface (CLI) Windows Installation Instructions Installing Node.js on Windows is quite easy. You'll just need to grab an installer package from their website and run it. Well, it isn't quite that easy, but it's close. Download Node.msi Windows installer Go to Node.js's website and download the Windows .msi installer file. This is literally all you need to set up everything. Run Node.msi Windows installer Run Node's .msi file from anywhere on your computer. An installation window will appear. All you need to do is agree and click OK on everything. If your Node.js install is corrupted, you can re-run this file and select repair to fix it. Restart your computer This is the step that everyone forgets. Once you've installed the .msi file you need to restart your computer. Failure to do so will prevent you from using Node in your command prompt. Wondering why? Windows needs to pick up the changes, since the installer adjusts registry entries and who knows what else under your computer's hood.
Post Install Node.js on Windows Just because you've installed Node.js doesn't mean it works. In the next three steps, we'll quickly walk through creating a test file and running it. Create a test.js file var http = require("http"); http.createServer( function (req, res) { res.writeHead(200, {"Content-Type": "text/html"}); res.write("<h1>Node.js</h1>"); res.end("<p>Hello World</p>"); }).listen(3000); console.log("HTTP server is listening at port 3000."); To make sure your Node.js install works on Windows, create a simple hello world script with the above code. Call it test.js and store it somewhere easy to access on your computer. A location such as E:\node\test.js will make your life easier when accessing the file via Windows Command Prompt. If you use a different location you'll have to adjust the following examples. Run Windows Command Prompt to test Node.js Run your Windows Command Prompt and navigate to test.js's folder. Here you'll need to run node test.js. Running this command causes Node to execute the file and output a message. Verify the Node.js install on Windows You probably noticed that running node test.js outputs the message HTTP server is listening at port 3000. To verify node is working correctly leave your command prompt running in the background, then copy and paste http://127.0.0.1:3000 into your browser. This should display a page similar to the above picture. If it still isn't working walk back through the above steps or leave a comment below for more help. If you made it through the above steps there is nothing left to do to install Node.js on Windows. Your computer should run it fine and no further configuration is necessary. ### Install Software RAID10 on Ubuntu 12.04 LTS Server RAID (redundant array of independent disks, originally redundant array of inexpensive disks) is a storage technology that combines multiple disk drive components into a logical unit.
Data is distributed across the drives in one of several ways called "RAID levels", depending on the level of redundancy and performance required. The term "RAID" was first defined by David Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987. Marketers representing industry RAID manufacturers later attempted to reinvent the term to describe a redundant array of independent disks as a means of disassociating a low-cost expectation from RAID technology. RAID is now used as an umbrella term for computer data storage schemes that can divide and replicate data among multiple physical drives: RAID is an example of storage virtualization and the array can be accessed by the operating system as one single drive. The different schemes or architectures are named by the word RAID followed by a number (e.g. RAID 0, RAID 1). Each scheme provides a different balance between the key goals: reliability and availability, performance and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable (sector) read errors, as well as whole disk failure. Target Using RAID technology prevents server downtime caused by hard disk failure. The experiment requires four SCSI hard drives. Because booting from RAID-10 is not supported, the /boot partition can only be installed on RAID-1; the / partition and the swap partition use RAID-10. /boot RAID-1 50M / RAID-10 7.45G Swap RAID-10 500MB Create a new Ubuntu Server in VMware WorkStation About RAID troubleshooting, please refer to RAID Troubleshooting. ### Disabling Windows Server 2003 Ctrl+ALT+DEL Logon Prompt Here you will find information on how to bypass the Ctrl-ALT-DEL Logon Prompt for a Windows 2003 Server. This security setting determines whether pressing Ctrl Alt Del is required before a user can log on. If this policy is enabled on a computer, a user is not required to press Ctrl Alt Del to log on.
Not having to press Ctrl+Alt+Del leaves users susceptible to attacks that attempt to intercept their passwords. Requiring Ctrl+Alt+Del before users log on ensures that users are communicating by means of a trusted path when entering their passwords. Note: If this policy is disabled, every user is required to press Ctrl+Alt+Del before logging on to Windows. To disable (enable) the policy, follow the steps below:

1. From the Run line or from a command prompt, enter GpEdit.Msc and press Enter (or open Local Policies).
2. In the Group Policy Object Editor, navigate to the following: Computer Configuration > Windows Settings > Security Settings > Local Policies > Security Options.
3. Double-click the policy "Interactive Logon: Do not require CTRL+ALT+DEL".
4. Set the property to "Enabled", then select "OK" and close the Group Policy Object Editor dialog box.

### Configure virtual machines in VMware vSphere to start automatically on boot

Go to the Configuration tab of your VMware vSphere Server, then click on Virtual Machine Startup/Shutdown. By default (I'm pretty sure) automatic startup is disabled. To enable it, click on "Properties…" on the far upper right of the window. You'll now see this window: check/tick "Allow virtual machines to start and stop automatically with the system". Now, this is the bit where I nearly cried… You know you want to "enable" your guest OSes to automatically boot, but how? I tried clicking and dragging, right-clicking for a context menu to enable "Automatic startup", and gave up. It turns out you need to click on the guest OS you'd like to enable, and then click "Move Up" until it sits underneath the "Automatic startup" title. Argh!

### Install VMware Workstation on Ubuntu

VMware Workstation is a hypervisor that runs on x64 computers; it enables users to set up multiple virtual machines and use them simultaneously along with the actual machine.
Download the VMware-Workstation-Full-10.0.1-1379776.x86_64.bundle install file from the VMware Official Website.

Install

Run these commands in a terminal:

```shell
$ sudo chmod +x VMware-Workstation-Full-10.0.1-1379776.x86_64.bundle
$ sudo ./VMware-Workstation-Full-10.0.1-1379776.x86_64.bundle
```

VMware Workstation Serial Number: 5F29M-48312-8ZDF9-A8A5K-2AM0Z

If you get the warning Piix4_SMBus: 000:00:07.3: Host SMBus controller bus not enabled on boot, reboot Ubuntu into recovery mode by holding Shift; in the GRUB menu, choose "Advanced options for Ubuntu", and select "Recovery mode". Once the system boots, choose "Root - Drop into a root shell". In the root shell, enter the following commands:

```shell
# mount -o remount,rw /
# nano /etc/modprobe.d/blacklist.conf
```

Add the following line at the end of the file:

```
blacklist i2c_piix4
```

Reboot after saving the file.

### Solved: Nginx reverse proxy caused WordPress dashboard article list pagination button link errors

If your web server uses the Apache service with Nginx as a reverse proxy, your Nginx configuration file is similar to the following example:

```nginx
## Basic reverse proxy server ##
# Hot swap conf
upstream 81Port_hot_swap_class {
    server ip:81 weight=10 max_fails=3 fail_timeout=30s; # Apache
}

## Start ##
server {
    listen *:81;
    server_name domain;

    access_log logs/domain.access.log main;
    error_log logs/domain.error.log;
    root html;
    index index.html index.htm index.php;

    ## send request back to apache ##
    location / {
        proxy_pass http://81Port_hot_swap_class;

        # Proxy Settings
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_max_temp_file_size 0;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
## End ##
```

With this setup, the WordPress dashboard articles list
pagination buttons link to URLs like http://81Port_hot_swap_class/wp-admin/edit.php?paged=2.

Solution

In the WordPress installation directory, open wp-admin/includes/class-wp-list-table.php and find the following code:

```php
$current_url = set_url_scheme( 'http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'] );
```

Replace $_SERVER['HTTP_HOST'] with your site's actual URL. If you have a better solution, please leave a message.

### Matlab Courses Website

Matlab Courses Website: www.usthmatlab.com (revised Apr 20, 2014; first published Sep 6, 2013). Some time ago I designed a Matlab courses website for my teacher.

### Resolve Ubuntu "the disk drive for EFI is not ready yet or not present"

When you boot Ubuntu, you get this error message with no response to keyboard input. Below it, it says: Wait to continue, press S to skip mounting, or M for manual recovery. It looks like the UUIDs of some of your partitions have changed, and your /etc/fstab specifies the old UUIDs, so Ubuntu is not able to mount the EFI partition and some other partitions during the boot process. Open a terminal and run blkid to get the list of UUIDs for all your block devices/logical volumes. Then edit the fstab file:

```shell
$ sudo vim /etc/fstab
```

Make sure the UUIDs of the partitions/block devices are correct; if not, update and save. NOTE: If you don't like UUIDs, just use the traditional block device, e.g. /dev/sda2 instead of UUID=x.

### Resolve Ubuntu error: unknown filesystem. grub rescue>

This problem generally occurs because the user has performed partition operations under Windows (many users choose to install Windows and Ubuntu as a dual system), such as disk partitioning and merging, which changes the original partition information. The solution is as follows.

Step 1

First, use the ls command to show the Ubuntu partition installation information:

```
grub rescue> ls
```

This usually lists all disk partition information, such as (hd0,gpt1) (hd0,gpt2) (hd0,gpt3), etc.
Step 2

Execute the following command:

```
grub rescue> set
```

Grub will list the current settings, such as prefix=(hd0,gpt1)/boot/grub,root=hd0,gpt1 or prefix=(hd0,gpt1)/grub,root=hd0,gpt1. It is precisely because these settings are wrong that grub cannot find the Ubuntu partition, so next we re-set them.

Step 3

If you know which partition Ubuntu is installed on, such as (hd0,gpt1), execute the following commands:

```
grub rescue> set root=hd0,gpt1
grub rescue> set prefix=(hd0,gpt1)/boot/grub
```

(In some cases it would be set prefix=(hd0,gpt1)/grub.)

Step 4

Execute the following command:

```
grub rescue> insmod normal
```

If you still get error: unknown filesystem, go back to the grub settings in Step 3; if the grub rescue> prompt comes back normally, the partition settings are correct, and you can then input normal. But it is not over yet: if you reboot the system, you will still hit error: unknown filesystem before seeing the system selection interface, so there is one more step to be done.

Step 5

Fix the grub partition information. Choose Ubuntu, and after the system starts, open a terminal and execute the following commands:

```shell
$ sudo update-grub
$ sudo grub-install /dev/sda
```

### Poster for UCAIS

### HTML XHTML and CSS Related Books

Here is a record of some of the books I have read. I think reading good books is very important for a programmer, especially when just getting started, and finding a suitable book is not an easy thing either. The following books gave me a lot of help after I read them, and I learned a lot of relevant expertise from them; I think they are all well written. If you have some good books, you are welcome to recommend them to me.

HTML, XHTML, and CSS

Publication Date: August 26, 2006 | ISBN-10: 0321430840 | ISBN-13: 978-0321430847 | Edition: 6

Need to learn HTML fast? This best-selling reference's visual format and step-by-step, task-based instructions will have you up and running with HTML in no time.
In this completely updated edition of our best-selling guide to HTML, Web expert and best-selling author Elizabeth Castro uses crystal-clear instructions and friendly prose to introduce you to all of today's HTML and XHTML essentials. You’ll learn how to design, structure, and format your Web site. You'll create and use images, links, styles, lists, tables, frames, and forms, and you'll add sound and movies to your site. Finally, you will test and debug your site, and publish it to the Web. Along the way, you'll find extensive coverage of CSS techniques, current browsers (Opera, Safari, Firefox), creating pages for the mobile Web, and more. Beginning CSS Web Development From Novice to Professional Publication Date: August 24, 2006 | ISBN13: 978-1-59059-689-0 Cascading Style Sheets (CSS) are one of the most important technologies on the web today. They give web developers the power to style their web sites so those sites are usable, compact, good looking, consistently displayable, and quick and efficient to change if so desired. There are many books out there on CSS, but Beginning CSS Web Development is different—it doesn’t waste time discussing theory, and it delves straight into the practical matter. It provides you with what you need to know, faster. It is also completely up to date, covering the most modern CSS standards and design techniques. In addition to the essential CSS basics, this book covers advanced techniques like accessibility, hacks, and filters. The book concludes with a case study, and features a CSS reference section that allows you to look up required syntax as quickly as possible. CSS: The Definitive Guide Publication Date: November 14, 2006 | ISBN-10: 0596527330 | Edition: 3 CSS: The Definitive Guide, 3rd Edition, provides you with a comprehensive guide to CSS implementation, along with a thorough review of all aspects of CSS 2.1. 
Updated to cover Internet Explorer 7, Microsoft's vastly improved browser, this new edition includes content on positioning, lists and generated content, table layout, user interface, paged media, and more. Simply put, Cascading Style Sheets (CSS) is a way to separate a document's structure from its presentation. The benefits of this can be quite profound: CSS allows a much richer document appearance than HTML and also saves time--you can create or change the appearance of an entire document in just one place; and its compact file size makes web pages load quickly. Author Eric Meyer tackles the subject with passion, exploring in detail each individual CSS property and how it interacts with other properties. You'll not only learn how to avoid common mistakes in interpretation, you also will benefit from the depth and breadth of his experience and his clear and honest style. This is the complete sourcebook on CSS. CSS Mastery: Advanced Web Standards Solutions Publication Date: October 29, 2009 | ISBN-10: 1430223979 | ISBN-13: 978-1430223979 | Edition: 2 Building on what made the first edition a bestseller, CSS Mastery: Advanced Web Standards Solutions, Second Edition unites the disparate information on CSS-based design spread throughout the internet into one definitive, modern guide. Revised to cover CSS3, the book can be read from front to back, with each chapter building on the previous one. However, it can also be used as a reference book, dipping into each chapter or technique to help solve specific problems. In short, this is the one book on CSS that you need to have. ### Install VMware vSphere Client on Windows 8 VMware vSphere Client is a client tool to manage VMware ESX and its VMs. It’s a basic but essential tool I need to manage my Virtual environment. But when I tried to install it on my Windows 8 machine, this is what I got. Well, Windows 8 is above XP SP2, isn’t it? 
Not sure who to blame, but I guessed that a workaround would be needed to get me past this silly error. This is how I solved it.

1. Assuming you already have the executable installation file, extract its contents to a folder. If you have 7-Zip installed, you can simply right-click the executable file, go to 7-Zip, and choose Extract to "folder name".
2. Go to the extracted folder's bin sub-folder, right-click VMware-viclient.exe and choose Properties. Go to the Compatibility tab, check the option "Run this program in compatibility mode for", and pick Windows 7 from the list.
3. Click OK to save the setting, and launch VMware-viclient.exe to start the installation again.

Voila… that's the workaround that saved my day. It works on my vSphere 4.1, but according to here, it works on 5.0 as well.

### Resolved Warning: session_start(): Cannot send session cache limiter

The solution: open your php.ini file and change session.auto_start = 0 to session.auto_start = 1. After doing that, save the file and restart the Apache service.
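The php.ini edit above can also be scripted with sed. A minimal sketch — the /tmp path is a stand-in for your real php.ini (for example /etc/php5/apache2/php.ini on Ubuntu), so adjust it before running against a live server:

```shell
# Stand-in copy of php.ini; point INI at the real file in practice.
INI=/tmp/php-demo.ini
printf 'session.auto_start = 0\n' > "$INI"

# Flip session.auto_start from 0 to 1, as described above.
sed -i 's/^session\.auto_start = 0$/session.auto_start = 1/' "$INI"

grep '^session.auto_start' "$INI"   # prints: session.auto_start = 1
```

After changing the real file, remember to restart Apache so the new setting is loaded.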
### Configure Apache on a Different Port to Use Virtual Hosts on Ubuntu Server

By default, Apache on Linux uses /var/www as the document directory and listens on port 80. If you want to publish your own system resource directory, you can use the following method.

Step 1: Add a listening port

```shell
$ cd /etc/apache2
$ sudo vim ports.conf
```

Add to the file:

```apache
NameVirtualHost *:81
Listen 81

<IfModule dir_module>
    DirectoryIndex index.php default.php index.html
</IfModule>
```

Step 2: Configure the virtual directory

```shell
$ cd /etc/apache2/sites-available
$ sudo cp default default-me
$ sudo vim default-me
```

The document reads as follows:

```apache
<VirtualHost *:81>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/wwwroot
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>
    <Directory /var/www/wwwroot/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>

    ErrorLog /var/log/apache2/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog /var/log/apache2/access.log combined
</VirtualHost>
```

Step 3: Publish the site

```shell
$ sudo a2ensite default-me
```

Step 4: Restart the service

```shell
$ sudo /etc/init.d/apache2 restart
```

Step 5: Test

If you can access http://localhost:81 normally, the configuration is correct.

### My Studio

### Install Zend Framework on Linux Ubuntu

Zend Framework is an open source, object-oriented web application framework for PHP 5. Zend Framework is often called a "component library", because it has many loosely coupled components that you can use more or less independently. But Zend Framework also provides an advanced Model-View-Controller (MVC) implementation that can be used to establish a basic structure for your Zend Framework applications.
Features

Zend Framework features include:

- All components are fully object-oriented PHP 5 and are E_STRICT compliant
- Use-at-will architecture with loosely coupled components and minimal interdependencies
- Extensible MVC implementation supporting layouts and PHP-based templates by default
- Support for multiple database systems and vendors, including MariaDB, MySQL, Oracle, IBM DB2, Microsoft SQL Server, PostgreSQL, SQLite, and Informix Dynamic Server
- Email composition and delivery, retrieval via mbox, Maildir, POP3 and IMAP4
- Flexible caching sub-system with support for many types of backends, such as memory or a file system

Install LAMP

A Zend Framework install can be tested by creating a test project, which should be placed under the document root of your web server. The web server might be Apache, Nginx or Lighttpd; for easy understanding, Apache is used as the web server here. Here is the step-by-step guide to installing LAMP (Apache+MySQL+PHP) on Ubuntu. Here is the step-by-step guide to installing LNMP (Nginx+MySQL+PHP) on Ubuntu.

Install Zend Framework

```shell
$ sudo apt-get install zend-framework
```

Confirm the installed version:

```shell
$ zf show version
Zend Framework Version: 1.11.11
```

Testing a Zend project

In Apache, /var/www is the default document root, so we are going to create the test project in /var/www. Go to the default document root of the web server:

```shell
$ cd /var/www/
```

Create the test project called zftest:

```shell
$ zf create project zftest
```

You should get the above message after hitting Enter; if you get an error message like the one below, you need to install PHPUnit on your Linux Ubuntu.
PHP Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/share/php/PHPUnit/Autoload.php on line 46 Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/share/php/PHPUnit/Autoload.php on line 46 PHP Fatal error: require_once(): Failed opening required 'PHP/CodeCoverage/Filter.php' (include_path='/usr/share/php/libzend-framework-php:.:/usr/share/php:/usr/share/pear') in /usr/share/php/PHPUnit/Autoload.php on line 46 Fatal error: require_once(): Failed opening required 'PHP/CodeCoverage/Filter.php' (include_path='/usr/share/php/libzend-framework-php:.:/usr/share/php:/usr/share/pear') in /usr/share/php/PHPUnit/Autoload.php on line 46 Now we need to have the Zend Library files in the library directory under the created project (By default it's empty), go to the library directory. $ cd zftest/library/ SymLink is the better option than copying the library files to library directory, because the library files will get update when we are installing the latest Zend Framework. $ sudo ln -s /usr/share/php/libzend-framework-php/Zend/ . Tree view of your project directory, see the library/Zend directory too. $ tree . ├── application │   ├── Bootstrap.php │   ├── configs │   │   └── application.ini │   ├── controllers │   │   ├── ErrorController.php │   │   └── IndexController.php │   ├── models │   └── views │   ├── helpers │   └── scripts │   ├── error │   │   └── error.phtml │   └── index │   └── index.phtml ├── docs │   └── README.txt ├── library │   └── Zend -> /usr/share/php/libzend-framework-php/Zend/ ├── public │   └── index.php └── tests ├── application │   └── controllers │   └── IndexControllerTest.php ├── bootstrap.php └── phpunit.xml 16 directories, 11 files Open up a web browser and navigate to http://localhost/zftest/public. Screen Shot of Test Project in Ubuntu 12.04. 
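The symlink step above can be rehearsed with throwaway directories before touching a real project. In this sketch the /tmp paths are stand-ins for /usr/share/php/libzend-framework-php/Zend and the zftest/library directory:

```shell
# Stand-ins for the real source and project paths.
mkdir -p /tmp/zend-src/Zend /tmp/zftest/library

# Same idea as `sudo ln -s /usr/share/php/libzend-framework-php/Zend/ .`:
# link rather than copy, so upgrading the system package updates the
# project's library automatically.
ln -sfn /tmp/zend-src/Zend /tmp/zftest/library/Zend

readlink /tmp/zftest/library/Zend   # prints: /tmp/zend-src/Zend
```

The -fn flags make ln replace an existing link in place, so the step stays idempotent if you re-run the setup.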
### Modify WordPress Dashboard Login Page without a Plugin

First, add the following code to your theme's functions.php file:

```php
// Modify WordPress Dashboard login page
function login_enqueue_scripts(){
    echo '<link rel="stylesheet" type="text/css" href="' . get_bloginfo( 'template_directory' ) . '/custom-login.css" />' . "\n";
}
add_action( 'login_enqueue_scripts', 'login_enqueue_scripts' );
```

Then, as the above code shows, you need to create a custom-login.css file and put it into the root directory of your own theme:

```css
.background-cover {
    background: url(./img/loginbj.jpg) no-repeat center center fixed;
    -webkit-background-size: cover;
    -moz-background-size: cover;
    -o-background-size: cover;
    background-size: cover;
    position: fixed;
    top: 0;
    left: 0;
    z-index: 10;
    overflow: hidden;
    width: 100%;
    height: 100%;
}
#login {
    z-index: 9999;
    position: relative;
}
.login form {
    box-shadow: 0px 0px 0px 0px !important;
}
.login h1 a {
    background-image: url(./img/login-logo.png) !important;
    background-size: auto !important;
    height: 52px !important;
}
.login #nav a, .login #backtoblog a {
    color: #fff !important;
    text-shadow: none !important;
}
.login #nav a:hover, .login #backtoblog a:hover {
    color: #f99009 !important;
    text-shadow: none !important;
}
.login #nav, .login #backtoblog {
    text-shadow: none !important;
}
.login #backtoblog a {
    color: #fff;
}
#backtoblog {
    border-bottom: 0;
    height: 30px;
    left: -520px;
    position: absolute;
    top: 0;
    width: 100%;
}
```

Finally, adjust the image paths in the CSS to match your own files.

### Realize the WordPress theme pagination function without a plug-in

Using fewer plug-ins while still letting users experience the theme's full functionality is the mark of a really good theme, and something every theme designer should consider. By default, WordPress's paging function is very weak; as a result, pagination plugins similar to wp-pagenavi are installed on almost every WordPress blog and have a very high penetration. The following lightweight function, plus a few lines of CSS styles, can completely replace the pagination plugin.
Specific method

First, add the following function code to your theme's functions.php file:

```php
function pagination($pages = '', $range = 4) {
    $showitems = ($range * 2) + 1;
    global $paged;
    if (empty($paged)) $paged = 1;
    if ($pages == '') {
        global $wp_query;
        $pages = $wp_query->max_num_pages;
        if (!$pages) {
            $pages = 1;
        }
    }
    if (1 != $pages) {
        echo "<div class=\"pagination\"><span>Page " . $paged . " of " . $pages . "</span>";
        if ($paged > 2 && $paged > $range + 1 && $showitems < $pages)
            echo "<a href='" . get_pagenum_link(1) . "'>« First</a>";
        if ($paged > 1 && $showitems < $pages)
            echo "<a href='" . get_pagenum_link($paged - 1) . "'>‹ Previous</a>";
        for ($i = 1; $i <= $pages; $i++) {
            if (1 != $pages && (!($i >= $paged + $range + 1 || $i <= $paged - $range - 1) || $pages <= $showitems)) {
                echo ($paged == $i) ? "<span class=\"current\">" . $i . "</span>" : "<a href='" . get_pagenum_link($i) . "' class=\"inactive\">" . $i . "</a>";
            }
        }
        if ($paged < $pages && $showitems < $pages)
            echo "<a href=\"" . get_pagenum_link($paged + 1) . "\">Next ›</a>";
        if ($paged < $pages - 1 && $paged + $range - 1 < $pages && $showitems < $pages)
            echo "<a href='" . get_pagenum_link($pages) . "'>Last »</a>";
        echo "</div>\n";
    }
}
```

Next, use the following code to replace the original default page template code or pagination plug-in code:

```php
<?php if (function_exists("pagination")) { pagination($additional_loop->max_num_pages); } ?>
```

Finally, add the following style code to style.css and refresh the page; the pagination will achieve the same effect as using the plug-in.
```css
.pagination {
    clear: both;
    padding: 20px 0;
    position: relative;
    font-size: 11px;
    line-height: 13px;
}
.pagination span, .pagination a {
    display: block;
    float: left;
    margin: 2px 2px 2px 0;
    padding: 6px 9px 5px 9px;
    text-decoration: none;
    width: auto;
    color: #fff;
    background: #555;
}
.pagination a:hover {
    color: #fff;
    background: #3279BB;
}
.pagination .current {
    padding: 6px 9px 5px 9px;
    background: #3279BB;
    color: #fff;
}
```

### Heilongjiang Computer Organization Website

Recently I wrote a website, the HCO (Heilongjiang Computer Organization) Website. I designed the UI, the user experience, and the background architecture of this site. By practicing on this little project I met some students from the Heilongjiang Institute of Technology, Harbin, China. Index static page demo, archive static page demo, single static page demo. These are my design sketches.

### Creating a GitHub Pages Blog With Octopress on Ubuntu

GitHub Pages are public webpages freely hosted and easily published through GitHub. You can publish online using the Automatic Page Generator. If you prefer to work locally, you can use the GitHub for Mac and Windows apps, or the command line. Pages are served over HTTP, not HTTPS. That doesn't make them inherently less secure, but it does mean that you shouldn't transact any sensitive information with them (like passwords or credit card numbers). GitHub Official Website: github.com

Octopress is a framework designed by Brandon Mathis for Jekyll, the blog-aware static site generator powering GitHub Pages. To start blogging with Jekyll, you have to write your own HTML templates, CSS, and JavaScripts and set up your configuration. But with Octopress all of that is already taken care of. Simply clone or fork Octopress, install dependencies and the theme, and you're set. Octopress Official Website: octopress.org

Install Required Packages

1. Install Git

```shell
$ sudo apt-get update && sudo apt-get install git
```

2. Install Ruby 1.9.3 via your OS package management or rbenv or RVM.
```shell
$ curl -L https://get.rvm.io | bash -s stable --ruby
$ source /home/username/.rvm/scripts/rvm
```

If using package management, you may need to install ruby-dev. Check that your Ruby version is at least 1.9.3 and install bundler:

```shell
$ ruby --version
$ sudo gem install bundler
```

Initial Setup

Clone the Octopress repository and set it up:

```shell
$ git clone git://github.com/imathis/octopress.git octopress
$ cd octopress
$ bundle install
$ rake install
```

We're going to use GitHub Pages. Octopress has some rake tasks to make this easier for you. Your blog will be hosted at http://<username>.github.io, and you need to create a new GitHub repository called username.github.io; GitHub Pages will use the master branch as the HTML source for your blog.

```shell
$ rake setup_github_pages
```

This rake task points your clone to the new repository we just set up, configures your blog's URL, and sets up a master branch in the _deploy directory for deployment. Notice that the blog's URL does not include a trailing .git; it should look like username.github.io. Edit _config.yml and fill in your blog name and other details. There are also some configs for twitter/G+/etc. plugins that are worth configuring.

Generate and preview the blog:

```shell
$ rake generate
$ rake preview
```

This will generate the contents of your blog and allow you to preview it at http://localhost:4000. Once you're happy with the contents, we can deploy your blog for the first time:

```shell
$ rake deploy
```

This will copy the generated files into _deploy/, add them to git, commit, and push them up to the master branch. In a few seconds you should get an email from GitHub telling you that your commit has been received and will be published on your site. Since it is your first commit, it could take 10 minutes for the blog to be available at http://username.github.io. Don't forget to commit your changes to the source branch:

```shell
$ git add .
$ git commit -m 'After install'
$ git push origin source
```

Finish

Now you have your site running on GitHub with Octopress.
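Day-to-day posting after this setup is mostly rake new_post followed by rake generate and rake deploy. The sketch below imitates what rake new_post creates — a Markdown file with YAML front matter under source/_posts — written by hand into a throwaway /tmp directory, so the front-matter fields shown here are illustrative rather than copied from a real Octopress checkout:

```shell
# Throwaway stand-in for the octopress clone; `rake new_post["Hello World"]`
# would generate a file like this under source/_posts/ for you.
mkdir -p /tmp/octopress/source/_posts
cat > /tmp/octopress/source/_posts/2014-05-01-hello-world.markdown <<'EOF'
---
layout: post
title: "Hello World"
date: 2014-05-01 12:00
comments: true
categories: [blogging]
---
Draft the post body here, then run `rake generate` and `rake preview`.
EOF

# The YAML front matter is what Jekyll uses to route and render the post.
head -n 4 /tmp/octopress/source/_posts/2014-05-01-hello-world.markdown
```

The date-prefixed filename is how Jekyll derives the post's URL, so keeping it consistent with the date field avoids surprises.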
### WordPress No input file specified Error Occurred Solution

On GoDaddy Linux virtual hosts, the No input file specified error occurs because php.ini defaults to cgi.fix_pathinfo = 0, so PATH_INFO is not recognized. It generally appears in yii, ThinkPHP and other MVC frameworks, and in WordPress and other programs that use index.php for pseudo-static URLs.

Solution

Create a new file in the web root directory named php5.ini with the following contents:

```ini
cgi.fix_pathinfo=1
```

Then, in the same directory, add the following code to the .htaccess file:

```apache
addhandler x-httpd-php5-cgi .php5
addhandler x-httpd-php5-cgi .php
addhandler x-httpd-php-cgi .php4
```

### Adobe Creative Cloud

Adobe Creative Cloud offers a full spectrum of professional tools for design, photography, web and video. Adobe Press will help you master all the essentials with books and videos from our acclaimed series and industry experts. Adobe Official Website: www.adobe.com

Adobe Creative Cloud Official Download Links

Acrobat XI Pro

Adobe Acrobat is a set of application software developed by Adobe Systems to view, create, manipulate, print and manage files in Portable Document Format (PDF files). All parts of the set, except Adobe Reader (formerly Acrobat Reader, and also referred to by the program name acroread), are proprietary commercial programs, while the latter is available as freeware and can be downloaded from Adobe's website. Adobe Reader enables users to view and print PDF files but has negligible PDF creation capabilities. Acrobat and Reader are widely used as a method of presenting information with a fixed layout similar to a paper publication.
Win: http://trials.adobe.com/AdobeProducts/APRO/11/win32/Acrobat_11_0_0_CCM_MUI.7z Win: http://trials.adobe.com/AdobeProducts/APRO/11/win32/Acrobat_11_0_0_CCM_MUI.exe Mac: http://trials.adobe.com/AdobeProducts/APRO/11/osx10/Acrobat_11_0_0_CCM_MUI.dmg After Effects CC Adobe After Effects is a digital motion graphics, visual effects and compositing software package published by Adobe Systems, used in the post-production process of filmmaking and television production. Its main uses are the origination of 2D and 2.5D animation, visual effects compositing and finishing (image adjustment, color correction etc.). After Effects can also be used as a basic non-linear editor and a media transcoder. Win: http://trials3.adobe.com/AdobeProducts/AEFT/12/win64/AfterEffects_12_LS20.7z Win: http://trials3.adobe.com/AdobeProducts/AEFT/12/win64/AfterEffects_12_LS20.exe Mac: http://trials3.adobe.com/AdobeProducts/AEFT/12/osx10-64/AfterEffects_12_LS20.dmg Audition CC Adobe Audition (formerly Cool Edit Pro) is a digital audio workstation from Adobe Systems featuring both a multitrack, non-destructive mix/edit environment and a destructive-approach waveform editing view. Win: http://trials3.adobe.com/AdobeProducts/AUDT/6/win64/Audition_6_LS20.exe Mac: http://trials3.adobe.com/AdobeProducts/AUDT/6/osx10-64/Audition_6_LS20.dmg Bridge CC Adobe Bridge is an organizational software application created and released by Adobe Systems as a part of the Adobe Creative Suite, beginning with CS2. Its primary purpose is to link the parts of the Creative Suite together using an interface similar to the file browser found in previous versions of Adobe Photoshop. It is accessible from all other components of the Creative Suite (except for the standalone version of Adobe Acrobat 8). Adobe Bridge is also included with the stand-alone Photoshop application, and can perform certain Photoshop processing functions separately (and simultaneously). 
Bridge has many functions such as batch rename and other utilities to help with organization, and supports editing of digital images in Camera Raw format. Organizational features include assigning colored labels or star ratings to images, the ability to edit the XMP and IPTC Information Interchange Model metadata embedded within an image file, and the ability to work with different versions and alternates of a file that is part of an Adobe Version Cue project. Image files can be shown in different sized thumbnails, slide shows or lists. Each folder, which can be bookmarked, has a cache file for speeding up rendering time of images when viewing a thumbnail. The cache can be in one central location or in individual folders. Adobe Bridge is extensively customizable using JavaScript. The Adobe Stock Photos service was implemented as a Bridge Extension. The Bridge scripting guide is available online as well as in paperback. Adobe Bridge initially allowed for access to Adobe Stock Photos, an online collection of stock photography images drawn from well-known stock photo houses. However, this service was discontinued by Adobe on April 1, 2008. Win x86: http://trials3.adobe.com/AdobeProducts/KBRG/6/win32/Bridge_6_LS20_win32.7z Win x86: http://trials3.adobe.com/AdobeProducts/KBRG/6/win32/Bridge_6_LS20_win32.exe Win x64: http://trials3.adobe.com/AdobeProducts/KBRG/6/win64/Bridge_6_LS20_win64.7z Win x64: http://trials3.adobe.com/AdobeProducts/KBRG/6/win64/Bridge_6_LS20_win64.exe Mac: http://trials3.adobe.com/AdobeProducts/KBRG/6/osx10/Bridge_6_LS20.dmg Captivate CC Adobe Captivate is an electronic learning tool for Microsoft Windows, and from v.5 Mac OS X which can be used to author software demonstrations, software simulations, branched scenarios, and randomized quizzes in .swf format. It can also convert Adobe Captivate generated .swf to .avi which can be uploaded to video hosting websites. 
For software simulations, Captivate can use left or right mouse clicks, key presses and rollover images. It can also be used for screencasts, podcasts, and the conversion of Microsoft PowerPoint presentations to the Adobe Flash format. Win x86: http://trials3.adobe.com/AdobeProducts/CPTV/7/win32/Captivate_7_LS21.7z Win x86: http://trials3.adobe.com/AdobeProducts/CPTV/7/win32/Captivate_7_LS21.exe Win x64: http://trials3.adobe.com/AdobeProducts/CPTV/7/win64/Captivate_7_x64_LS21.7z Win x64: http://trials3.adobe.com/AdobeProducts/CPTV/7/win64/Captivate_7_x64_LS21.exe Mac: http://trials3.adobe.com/AdobeProducts/CPTV/7/osx10/Captivate_7_LS21.dmg Dreamweaver CC Adobe Dreamweaver is a proprietary web development application developed by Adobe Systems. Dreamweaver was originally developed by Macromedia in 1997, and was maintained by them until Macromedia was acquired by Adobe Systems in 2005. Adobe Dreamweaver is available for both Mac and Windows operating systems. Following Adobe's acquisition of the Macromedia product suite, releases of Dreamweaver subsequent to version 8.0 have been more compliant with W3C standards. Recent versions have improved support for Web technologies such as CSS, JavaScript, and various server-side scripting languages and frameworks including ASP (ASP JavaScript, ASP VBScript, ASP.NET C#, ASP.NET VB), ColdFusion, Scriptlet, and PHP. Win: http://trials3.adobe.com/AdobeProducts/DRWV/13/win32/Dreamweaver_13_LS20.exe Mac: http://trials3.adobe.com/AdobeProducts/DRWV/13/osx10/Dreamweaver_13_LS20.dmg Edge Animate CC Adobe Edge Animate, formerly known as simply Adobe Edge, is a web development tool developed by Adobe Systems that uses HTML5, Javascript, and CSS3 functionality. It is now a part of Adobe Edge suite, and is available as a free download from Adobe Creative Cloud. 
Win: http://trials3.adobe.com/AdobeProducts/EDGE/2/win32/Edge_Animate_2_LS17.exe Mac: http://trials3.adobe.com/AdobeProducts/EDGE/2/osx10/Edge_Animate_2_LS17.dmg Edge Inspect CC Adobe Edge Inspect is a workflow and preview tool for testing and simulating across multiple mobile web browsers. Win: http://trials3.adobe.com/AdobeProducts/SHDW/1/win32/Edge_Inspect_1_3_LREFJ.7z Mac: http://trials3.adobe.com/AdobeProducts/SHDW/1/osx10/Edge_Inspect_1_3_LREFJ.dmg Extension Manager CC Adobe Extension Manager is a standalone program designed to centralize the management and organization of extensions for Adobe software. Extensions are added pieces of software designed to do specific things. These are sometimes made by Adobe, and sometimes by third-party developers. Win: http://download.macromedia.com/pub/dw_exchange/extension_manager/win/AdobeExtensionManager_CC_LS20.7z OR Win: http://trials3.adobe.com/AdobeProducts/KEMN/7/win32/AdobeExtensionManager_CC_LS20.7z Win: http://download.macromedia.com/pub/dw_exchange/extension_manager/win/AdobeExtensionManager_CC_LS20.exe OR Win: http://trials3.adobe.com/AdobeProducts/KEMN/7/win32/AdobeExtensionManager_CC_LS20.exe Mac: http://download.macromedia.com/pub/dw_exchange/extension_manager/mac/AdobeExtensionManager_CC_LS20.dmg OR Mac: http://trials3.adobe.com/AdobeProducts/KEMN/7/osx10/AdobeExtensionManager_CC_LS20.dmg Flash Builder Premium v4.7 Adobe Flash Builder 4.7 software is a professional-grade development tool for rapidly creating games and applications for the web and mobile devices using ActionScript® and the open source Flex framework. It includes support for intelligent coding, interactive step-through debugging, and multiplatform publishing workflows. 
Win x86: http://trials3.adobe.com/AdobeProducts/FLBR/4_7/win32/FlashBuilder_4_7_LS10.exe Win x64: http://trials3.adobe.com/AdobeProducts/FLBR/4_7/win64/FlashBuilder_4_7_LS10_win64.exe Mac: http://trials3.adobe.com/AdobeProducts/FLBR/4_7/osx10/FlashBuilder_4_7_LS10.dmg Flash Professional CC Adobe Flash Professional is a multimedia authoring program used to create content for the Adobe Engagement Platform, such as web applications, games and movies, and content for mobile phones and other embedded devices. Win: http://trials3.adobe.com/AdobeProducts/FLPR/13/win64/Flash_Professional_13_LS20.exe Mac: http://trials3.adobe.com/AdobeProducts/FLPR/13/osx10-64/Flash_Professional_13_LS20.dmg InCopy CC Adobe InCopy is a professional word processing application made by Adobe Systems. It integrates with Adobe InDesign; however, it is sold separately and is not included in any Creative Suite edition; the current version is Adobe InCopy CC. This desktop publishing software is primarily used by editors to publish newspapers and magazines. InCopy allows the editor to write, edit, and design (style, font, etc.) the publishable copy. The software includes standard word processing features such as "spell check", "track changes", and "word count", and has preview viewing modes for fitting copy to the text layout, allowing the editor to visually ascertain that every design element fits the publication's layout format. Commercially, InCopy is a direct competitor to Quark CopyDesk, which was first released in 1991. Win: http://trials3.adobe.com/AdobeProducts/AICY/9/win32/InCopy_9_LS20.7z Win: http://trials3.adobe.com/AdobeProducts/AICY/9/win32/InCopy_9_LS20.exe Mac: http://trials3.adobe.com/AdobeProducts/AICY/9/osx10/InCopy_9_LS20.dmg InDesign CC Adobe InDesign is a desktop publishing software application produced by Adobe Systems. It can be used to create works such as posters, flyers, brochures, magazines, newspapers and books. 
InDesign can also publish content suitable for tablet devices in conjunction with Adobe Digital Publishing Suite. Graphic designers and production artists are the principal users, creating and laying out periodical publications, posters, and print media. It also supports export to EPUB and SWF formats to create digital publications, and content suitable for consumption on tablet computers. The Adobe InCopy word processor uses the same formatting engine as InDesign. Win: http://trials3.adobe.com/AdobeProducts/IDSN/9/win32/InDesign_9_LS20.7z Win: http://trials3.adobe.com/AdobeProducts/IDSN/9/win32/InDesign_9_LS20.exe Mac: http://trials3.adobe.com/AdobeProducts/IDSN/9/osx10/InDesign_9_LS20.dmg Illustrator CC Adobe Illustrator is a vector graphics editor developed and marketed by Adobe Systems. Illustrator CC, linked below, is the seventeenth generation in the product line. Win: http://trials3.adobe.com/AdobeProducts/ILST/17/win32/Illustrator_17_LS20.7z Win: http://trials3.adobe.com/AdobeProducts/ILST/17/win32/Illustrator_17_LS20.exe Mac: http://trials3.adobe.com/AdobeProducts/ILST/17/osx10-64/Illustrator_17_LS20.dmg Lightroom v5 Win: http://trials3.adobe.com/AdobeProducts/LTRM/5/win32/Lightroom_5_LS11.exe Mac: http://trials3.adobe.com/AdobeProducts/LTRM/5/osx10/Lightroom_5_LS11.dmg Muse CC Adobe Muse is a product by Adobe Systems. The software is focused on allowing designers to create websites without having to write any code. This app is available through Adobe Creative Cloud. Win: http://trials3.adobe.com/AdobeProducts/MUSE/5/win32/Muse_5_0_CCM_LS23.exe Mac: http://trials3.adobe.com/AdobeProducts/MUSE/5/osx10/Muse_5_0_CCM_LS23.dmg Photoshop CC Adobe Photoshop is a graphics editing program developed and published by Adobe Systems. Adobe's 2003 "Creative Suite" rebranding led to Adobe Photoshop 8's renaming to Adobe Photoshop CS. Thus, Adobe Photoshop CS6 is the 13th major release of Adobe Photoshop. 
The CS rebranding also resulted in Adobe offering numerous software packages containing multiple Adobe programs for a reduced price. Adobe Photoshop is released in two editions: Adobe Photoshop, and Adobe Photoshop Extended, with the Extended edition adding 3D image creation, motion graphics editing, and advanced image analysis features. Adobe Photoshop Extended is included in all of Adobe's Creative Suite offerings except Design Standard, which includes the Adobe Photoshop edition. Alongside Photoshop and Photoshop Extended, Adobe also publishes Photoshop Elements and Photoshop Lightroom, collectively called "The Adobe Photoshop Family". In 2008, Adobe released Adobe Photoshop Express, a free web-based image editing tool to edit photos directly on blogs and social networking sites; in 2011 a version was released for the Android operating system and the iOS operating system. Adobe distributes Photoshop in Windows and Macintosh versions. Win: http://trials3.adobe.com/AdobeProducts/PHSP/14/win32/Photoshop_14_LS20.7z Win: http://trials3.adobe.com/AdobeProducts/PHSP/14/win32/Photoshop_14_LS20.exe Mac: http://trials3.adobe.com/AdobeProducts/PHSP/14/osx10/Photoshop_14_LS20.dmg Prelude CC Adobe Prelude (successor to the now discontinued Adobe OnLocation) is a tool for Windows and Mac to review, import, log, select, and export tapeless media. The software also offers features like speech transcription and rough cut creation. Win: http://trials3.adobe.com/AdobeProducts/PRLD/2/win64/Prelude_2_LS20.exe Mac: http://trials3.adobe.com/AdobeProducts/PRLD/2/osx10-64/Prelude_2_LS20.dmg Premiere Pro CC Adobe Premiere Pro is a timeline-based video editing application developed by Adobe Systems. 
Win: http://trials3.adobe.com/AdobeProducts/PPRO/7/win64/PremierePro_7_LS20.7z Win: http://trials3.adobe.com/AdobeProducts/PPRO/7/win64/PremierePro_7_LS20.exe Mac: http://trials3.adobe.com/AdobeProducts/PPRO/7/osx10-64/PremierePro_7_LS20.dmg Scout CC Adobe Scout is the newest profiler for Flash content running on both the desktop (Flash Player) and on mobile devices (Adobe AIR). Win: http://trials3.adobe.com/AdobeProducts/ASCT/1_1/win64/Scout_1_1_LREFJC.7z Mac: http://trials3.adobe.com/AdobeProducts/ASCT/1_1/osx10/Scout_1_1_LREFJC.dmg SpeedGrade CC SpeedGrade is a professional color correction application with many useful features. SpeedGrade is typically used at the beginning and end of a workflow. Win: http://trials3.adobe.com/AdobeProducts/SPGD/7/win64/SpeedGrade_7_LS20.exe Mac: http://trials3.adobe.com/AdobeProducts/SPGD/7/osx10-64/SpeedGrade_7_LS20.dmg ### Add a launcher application shortcut to the Ubuntu Dash menu and the Dock Create an application-name.desktop file in the /usr/share/applications/ directory and edit it as follows. [Desktop Entry] Type=Application Name=Application-name GenericName=Application-name Comment=describe-text Exec=/home/application-install-location Terminal=false MimeType=text/plain; Icon=/home/application-icon-location Categories=Internet; StartupNotify=true Actions=Window;Developer; ### Install Google Play Store on Android OS Part 1 Preparation 1.1 Get ROOT permissions 1.2 Install the Google Service Framework 1.3 Check the Google Service Framework; if it is not installed, please download the Google Service Framework from here. 1.4 Install Root Explorer Part 2 2.1 Rename Google_Play_Store.apk to Vending.apk; if you are using a SAMSUNG Galaxy Nexus series mobile phone, rename it to Phonesky.apk instead. 2.2 Using Root Explorer, tap Mount R/W and copy the file into /system/app/. 2.3 Then change its permissions. 
At this time the system will install the Google Play Store automatically; if it does not install automatically, restart your mobile phone. Installation is complete. ### Configure LAMP ( Apache, MySQL, PHP ) on RedHat Enterprise Linux 6 Red Hat Enterprise Linux (RHEL) is a Linux-based operating system developed by Red Hat and targeted toward the commercial market. Red Hat Enterprise Linux is released in server versions for x86, x86-64, Itanium, PowerPC and IBM System z, and desktop versions for x86 and x86-64. All of Red Hat's official support and training and the Red Hat Certification Program center around the Red Hat Enterprise Linux platform. Red Hat Enterprise Linux is often abbreviated to RHEL, although this is not an official designation. The first version of Red Hat Enterprise Linux to bear the name originally came onto the market as "Red Hat Linux Advanced Server". In 2003 Red Hat rebranded Red Hat Linux Advanced Server to "Red Hat Enterprise Linux AS", and added two more variants, Red Hat Enterprise Linux ES and Red Hat Enterprise Linux WS. While Red Hat uses strict trademark rules to restrict free re-distribution of their officially supported versions of Red Hat Enterprise Linux, Red Hat freely provides the source code for the distribution's software even for software where this is not mandatory. As a result, several distributors have created re-branded and/or community-supported re-builds of Red Hat Enterprise Linux that can legally be made available, without official support from Red Hat. CentOS and Oracle Linux aim to provide 100% binary compatibility with Red Hat Enterprise Linux. 
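Before following a version-specific walkthrough like the one below, it helps to confirm what you are actually running. A small sketch (the redhat-release file exists only on Red Hat-family systems, hence the fallback):

```shell
# Print the distribution release string and the kernel version.
# /etc/redhat-release exists only on RHEL/CentOS and their relatives.
cat /etc/redhat-release 2>/dev/null || echo "not a Red Hat-family system"
uname -r
```

On the system this tutorial targets, the two lines would report RedHat Enterprise Linux 6.0 ( Santiago ) and kernel 2.6.32-71.el6.x86_64.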
Red Hat Official Website www.redhat.com Part 1 System and software preparation System Version RedHat Enterprise Linux 6.0 ( Santiago ) Kernel version 2.6.32-71.el6.x86_64 Part 2 Set up the YUM repository Red Hat's yum online updates are a paid service: without a registered subscription they cannot be used, so software cannot be installed online. 2.1 Delete the original Red Hat yum source # rpm -aq | grep yum|xargs rpm -e --nodeps 2.2 Download the new yum installation packages ( using the CentOS YUM source ) # wget http://mirrors.163.com/centos/6/os/x86_64/Packages/yum-3.2.29-40.el6.centos.noarch.rpm # wget http://mirrors.163.com/centos/6/os/x86_64/Packages/yum-metadata-parser-1.1.2-16.el6.x86_64.rpm # wget http://mirrors.163.com/centos/6/os/x86_64/Packages/yum-plugin-fastestmirror-1.1.30-14.el6.noarch.rpm # wget http://mirrors.163.com/centos/6/os/x86_64/Packages/python-iniparse-0.3.1-2.1.el6.noarch.rpm 2.3 Install the YUM packages # rpm -ivh python-iniparse-0.3.1-2.1.el6.noarch.rpm # rpm -ivh yum-metadata-parser-1.1.2-16.el6.x86_64.rpm # rpm -ivh yum-3.2.29-40.el6.centos.noarch.rpm yum-plugin-fastestmirror-1.1.30-14.el6.noarch.rpm Note: The last two packages must be installed together; otherwise rpm will report that they depend on each other and the installation will fail. 2.4 Change the yum source ( we use www.163.com's CentOS mirror ) # cd /etc/yum.repos.d/ # wget http://mirrors.163.com/.help/CentOS6-Base-163.repo # vim CentOS6-Base-163.repo Edit the file, replacing every $releasever inside it with the version number (6), and save. Alternatively, copy the following (already modified) content directly into the CentOS6-Base-163.repo file: ######################################################################### # CentOS-Base.repo # # The mirror system uses the connecting IP address of the client and the # update status of each mirror to pick mirrors that are updated to and # geographically close to the client. You should use this for CentOS updates # unless you are manually picking other mirrors. 
# # If the mirrorlist= does not work for you, as a fall back you can try the # remarked out baseurl= line instead. # # [base] name=CentOS-6 - Base - 163.com baseurl=http://mirrors.163.com/centos/6/os/$basearch/ #mirrorlist=http://mirrorlist.centos.org/?release=6&arch=$basearch&repo=os gpgcheck=1 gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6 #released updates [updates] name=CentOS-6 - Updates - 163.com baseurl=http://mirrors.163.com/centos/6/updates/$basearch/ #mirrorlist=http://mirrorlist.centos.org/?release=6&arch=$basearch&repo=updates gpgcheck=1 gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6 #additional packages that may be useful [extras] name=CentOS-6 - Extras - 163.com baseurl=http://mirrors.163.com/centos/6/extras/$basearch/ #mirrorlist=http://mirrorlist.centos.org/?release=6&arch=$basearch&repo=extras gpgcheck=1 gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6 #additional packages that extend functionality of existing packages [centosplus] name=CentOS-6 - Plus - 163.com baseurl=http://mirrors.163.com/centos/6/centosplus/$basearch/ #mirrorlist=http://mirrorlist.centos.org/?release=6&arch=$basearch&repo=centosplus gpgcheck=1 enabled=0 gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6 #contrib - packages by Centos Users [contrib] name=CentOS-6 - Contrib - 163.com baseurl=http://mirrors.163.com/centos/6/contrib/$basearch/ #mirrorlist=http://mirrorlist.centos.org/?release=6&arch=$basearch&repo=contrib gpgcheck=1 enabled=0 gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6 ######################################################################### 2.5 Clean the yum cache # yum clean all # yum makecache Part 3 Installation LAMP 3.1 Installation Apache, MySQL, PHP yum -y install httpd php mysql mysql-server php-mysql 3.2 Install MySQL extension yum -y install mysql-connector-odbc mysql-devel libdbi-dbd-mysql 3.3 Install PHP extension yum -y install php-gd php-xml php-mbstring php-ldap php-pear php-xmlrpc 3.4 
Install Apache extensions yum -y install httpd-manual mod_ssl mod_perl mod_auth_mysql 3.5 Configure services # /sbin/chkconfig httpd on ( set the Apache httpd service to start at boot ) # /sbin/chkconfig --add mysqld ( add the mysql service to the list of services ) # /sbin/chkconfig mysqld on ( set the MySQL service to start at boot ) # /sbin/service httpd start ( start the Apache httpd service ) # /sbin/service mysqld start ( start the MySQL service ) 3.6 Set the MySQL root account password ( put the password inside the quotation marks ) mysqladmin -u root password 'newpassword' Part 4 4.1 Create a file named index.php in the /var/www/html directory, write the following code into it, and save. <?php phpinfo(); ?> 4.2 Visit http://localhost/ in the browser. If you get the following information, LAMP ( Apache, MySQL, PHP ) has been built successfully. ### WAMP Environment Zend Framework Configuration Zend Framework is an open source framework for developing web applications and services with PHP 5. Zend Framework is implemented using 100% object-oriented code. The component structure of Zend Framework is somewhat unique; each component is designed with few dependencies on other components. This loosely coupled architecture allows developers to use components individually. We often call this a "use-at-will" design. While they can be used separately, Zend Framework components in the standard library form a powerful and extensible web application framework when combined. Zend Framework offers a robust, high performance MVC implementation, a database abstraction that is simple to use, and a forms component that implements HTML form rendering, validation, and filtering so that developers can consolidate all of these operations using one easy-to-use, object oriented interface. Other components, such as Zend_Auth and Zend_Acl, provide user authentication and authorization against all common credential stores. 
Still others implement client libraries to simplify access to the most popular web services available. Whatever your application needs are, you're likely to find a Zend Framework component that can be used to dramatically reduce development time with a thoroughly tested foundation. The principal sponsor of the project 'Zend Framework' is Zend Technologies, but many companies have contributed components or significant features to the framework. Companies such as Google, Microsoft, and StrikeIron have partnered with Zend to provide interfaces to web services and other technologies that they wish to make available to Zend Framework developers. First, refer to Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows to set up the development environment. Zend Framework Installation Go to the official website to download the latest version of the installer Zend Framework Official Website framework.zend.com Download Page framework.zend.com/download/latest Development Tools Zend Studio (Eclipse-based development environment for PHP) Zend Official Website www.zend.com/en Download Page www.zend.com/en/downloads Unzip the Zend Framework package to E:\server\ZendFramework-1.12.3 Rewrite Rule Configuration Edit the Apache httpd.conf file, find # LoadModule rewrite_module modules/mod_rewrite.so and change it to LoadModule rewrite_module modules/mod_rewrite.so PHP Library Configuration Edit PHP's php.ini file, find ; Windows: "/path1;/path2" ; include_path = ".;c:/php/includes" and change it to ; Windows: "\path1;\path2" include_path = ".;E:/server/ZendFramework-1.12.3/library" Restart the Apache service. Quickstart From the Windows Start menu click Run, type CMD to open a DOS command window, switch the current path to the ZendFramework bin directory, and run the commands E: cd E:\server\ZendFramework-1.12.3\bin\ zf.bat create project quickstart E:\htdocs\test1 Project Testing Open a browser and visit http://localhost/test1/public/; you should see the Welcome to the Zend Framework! page. 
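For orientation, the zf.bat command above scaffolds the standard ZF1 quickstart project layout. The sketch below recreates that skeleton locally so the structure is easy to see; the directory names follow the ZF1 quickstart guide and are illustrative, since exact contents vary by Zend Framework version:

```shell
# Recreate the skeleton that "zf create project" generates (illustrative;
# directory names from the ZF1 quickstart guide, contents vary by version).
mkdir -p test1/application/configs test1/application/controllers \
         test1/application/models test1/application/views/helpers \
         test1/application/views/scripts test1/docs test1/library \
         test1/public test1/tests
touch test1/application/Bootstrap.php test1/public/index.php
find test1 | sort    # list the generated tree
```

Note that public/ is the web root, which is why the project test above points the browser at http://localhost/test1/public/ rather than the project root.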
Zend Studio 10.1.0.201307161205 Activation Download the com.zend.php.core_10.1.0.201307161205.jar file, copy it to the Zend/Zend Studio 10.0.1/plugins directory, and overwrite the file with the same name. License key 15BD3D9635621E3AD4AE0F243B9E76CCFF6383B0E254EF646FD47B1424832332148D73E944AC95FC2A22EAF9965FE00D0BBACAD80E4B73BCFD40AADBEBD606EA4C33A74AEBB140CFB189E545E351774B7A020CDBE4FD159CDFF00F3C6A08C6AAC1A4F811EDDCF59CAFD09D46C849CB780BF6EF5983D3AFC372FC2021714E6B0B Warning For learning and exchange only, not for commercial use; if you like it, please support the developers by buying a genuine copy, and delete the download within 24 hours. ### Installation LNMP ( Nginx+MySQL+PHP ) on Ubuntu Update $ sudo apt-get update Installing Nginx $ sudo apt-get install nginx After installation, the file structure looks like this: All configuration files are in /etc/nginx, and each virtual host is arranged under /etc/nginx/sites-available; The program file is /usr/sbin/nginx; Log files are in /var/log/nginx; A startup script has been created at /etc/init.d/nginx; The default virtual host directory is set to /var/www/nginx-default; Start nginx (make sure no other service is using port 80) $ sudo /etc/init.d/nginx start or $ sudo service nginx start Test the Nginx configuration file syntax $ sudo nginx -t Restart Nginx $ sudo service nginx restart Open your browser and visit http://localhost or http://127.0.0.1; if you see Welcome to nginx!, the server started successfully. Install the latest version of Nginx on Ubuntu In general, you should deploy the Nginx mainline branch at all times. You may wish to use stable if you are concerned about possible impacts of new features, such as incompatibility with third-party modules or the introduction of bugs in new features. 
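The repository entries added in the next step embed the Ubuntu release codename, normally obtained with `lsb_release -cs`. As a sketch of what ends up in /etc/apt/sources.list.d/nginx.list (codename hard-coded to trusty purely for illustration):

```shell
# Compose the two apt source lines for the nginx mainline repository.
# On a real system use: codename=$(lsb_release -cs)
codename=trusty
printf 'deb http://nginx.org/packages/mainline/ubuntu/ %s nginx\n' "$codename"
printf 'deb-src http://nginx.org/packages/mainline/ubuntu/ %s nginx\n' "$codename"
```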
First, to avoid a missing PGP key during installation and upgrades, install the Nginx team’s package signing key: $ curl http://nginx.org/keys/nginx_signing.key | sudo apt-key add - Add the repo to your apt sources (the redirection must run with root privileges, so pipe to sudo tee rather than redirecting after sudo echo): $ echo -e "deb http://nginx.org/packages/mainline/ubuntu/ `lsb_release -cs` nginx\ndeb-src http://nginx.org/packages/mainline/ubuntu/ `lsb_release -cs` nginx" | sudo tee /etc/apt/sources.list.d/nginx.list Resynchronize the package index files from their sources: $ sudo apt-get update If installing Nginx: $ sudo apt-get install nginx or if upgrading: $ sudo apt-get dist-upgrade Done. Installing PHP $ sudo apt-get install php5-fpm Configuration Configure PHP We need to make one small change in the PHP configuration. Open up /etc/php5/fpm/php.ini, find the line cgi.fix_pathinfo=1, and change the 1 to 0. cgi.fix_pathinfo=0 If this value is kept as 1, the php interpreter will do its best to process a file that is as near to the requested file as possible, which is a possible security risk. If it is set to 0, conversely, the interpreter will only process the exact file path, a much safer alternative. Save and exit. We need to make another small change in the php5-fpm configuration. Open up www.conf: $ sudo vim /etc/php5/fpm/pool.d/www.conf Find the line listen = 127.0.0.1:9000, and change 127.0.0.1:9000 to /var/run/php5-fpm.sock. listen = /var/run/php5-fpm.sock Save and exit. 
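Both edits above (cgi.fix_pathinfo in php.ini and the listen directive in www.conf) can also be applied non-interactively with sed. The sketch below operates on throwaway copies in /tmp so it can be tried safely; on a real server, point the paths at /etc/php5/fpm/php.ini and /etc/php5/fpm/pool.d/www.conf:

```shell
# Stand-in copies of the two config files (real paths: /etc/php5/fpm/...).
printf 'cgi.fix_pathinfo=1\n' > /tmp/php.ini.demo
printf 'listen = 127.0.0.1:9000\n' > /tmp/www.conf.demo

# Force cgi.fix_pathinfo to 0, also uncommenting a leading ';' if present.
sed -i 's/^;\{0,1\}cgi\.fix_pathinfo=.*/cgi.fix_pathinfo=0/' /tmp/php.ini.demo

# Switch php5-fpm from the TCP port to the Unix socket.
sed -i 's|^listen = 127\.0\.0\.1:9000$|listen = /var/run/php5-fpm.sock|' /tmp/www.conf.demo

cat /tmp/php.ini.demo /tmp/www.conf.demo
```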
Restart php-fpm: $ sudo service php5-fpm restart Configure Nginx Add the following line to your nginx fastcgi config file /etc/nginx/fastcgi_params: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; Open up the default virtual host file $ sudo vim /etc/nginx/sites-available/default The configuration should include the changes below (the details of the changes are under the config information): server { listen 80; root /usr/share/nginx/www; index index.php index.html index.htm; server_name example.com; location / { try_files $uri $uri/ /index.html; } error_page 404 /404.html; error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/www; } # pass the PHP scripts to FastCGI server listening on the php-fpm socket location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } Here are the details of the changes: Add index.php to the index line. Change the server_name from localhost to your domain name or IP address (replace the example.com in the configuration). Update the lines in the location ~ \.php$ section. Save and exit. Installing MySQL $ sudo apt-get install mysql-server mysql-client Create a PHP Info Page Create a file named info.php in the /var/www/ directory, write the following code into it, and save. <?php phpinfo(); If you see the nginx and php-fpm configuration details by visiting http://localhost/info.php, LNMP (Nginx, MySQL, PHP) has been built successfully. See more Enabling SPDY and HSTS on Nginx Installation LAMP ( Apache+MySQL+PHP ) on Ubuntu ### Installation WNMP ( Nginx, MySQL, PHP ) on Microsoft Windows Nginx is an open source web server and a reverse proxy server for HTTP, SMTP, POP3, and IMAP protocols, with a strong focus on high concurrency, performance and low memory usage. 
It is licensed under a BSD-like license and it runs on Unix, Linux, BSD variants, Mac OS X, Solaris, AIX, HP-UX, and Microsoft Windows. Part 1 Preparation before installation 1.1 First, go to the official websites to download the latest version of each installer: Nginx Official Website nginx.org PHP Official Website php.net Oracle Official Website www.oracle.com Download the nginx-1.5.1.zip file from nginx.org/en/download.html. Download php-5.4.16-nts-Win32-VC9-x86.zip from windows.php.net/downloads/releases/archives. Download MySQL Community Server 5.6.10.1 from www.mysql.com/downloads/mysql That completes downloading the three installation packages. 1.2 Install MySQL Community Server. You can refer to Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows  1.3 Unzip Nginx and PHP into your own installation locations. For example: Nginx directory C:\server\nginx PHP directory C:\server\php Part 2 Configure Nginx 2.1 Open the C:\server\nginx directory and run the nginx.exe in that folder. 2.2 Test whether nginx started. Open your browser and visit http://localhost or http://127.0.0.1; if you see "Welcome to nginx!", the server started successfully. If you cannot access localhost, refer to WampServer local installation WordPress for solutions. Note: the site directory is C:\server\nginx\html 2.3 Locate the Nginx configuration file nginx.conf (in the conf folder of the installation directory) and open it with a text editor. 
2.4 Find the following code (about lines 43 to 45) location / { root html; index index.html index.htm; } and change it to location / { root C:/server/www; # modify the site file path index index.php index.html index.htm; # add index.php as a default page autoindex on; } 2.5 Find the following code (about lines 63 to 71) # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # # location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; # } and change it to # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { root C:/server/www; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Part 3 Configure PHP 3.1 Copy the php.ini-development file, rename the copy php.ini, and modify it as follows. Around line 730, find ;extension_dir = "ext" and change it to extension_dir = "C:\server\php\ext" Around line 919, find ;date.timezone = and change it to date.timezone = Asia/Shanghai Around line 736, find enable_dl = Off and change it to enable_dl = On Around line 743, find ;cgi.force_redirect = 1 and change it to cgi.force_redirect = 1 Around line 771, find ;fastcgi.impersonate = 1 and change it to fastcgi.impersonate = 1 Around line 783, find ;cgi.rfc2616_headers = 0 and change it to cgi.rfc2616_headers = 1 Around lines 880 and 881, remove the leading ; from extension = php_mysql.dll and extension = php_mysqli.dll (MySQL database support). 3.2 Download RunHiddenConsole.zip and extract it to C:\server\nginx\RunHiddenConsole.exe 3.3 Create a start bat file @echo off echo Starting PHP FastCGI... RunHiddenConsole.exe c:/server/php/php-cgi.exe -b 127.0.0.1:9000 -c C:/server/php/php.ini echo Starting nginx... C:/server/nginx/nginx.exe 3.4 Create a stop bat file @echo off echo Stopping nginx... taskkill /F /IM nginx.exe > nul echo Stopping PHP FastCGI... 
taskkill /F /IM php-cgi.exe > nul exit The resulting directory structure. Part 4 4.1 Start nginx by running start_php.bat 4.2 Create a file named index.php in the C:\server\www directory, write the following code into it, and save. <?php phpinfo(); ?> 4.3 Visit http://localhost/ in the browser. If you get the following information, WNMP ( Nginx, MySQL, PHP ) has been built successfully. ### Google Chrome Browser Offline Installation Official download addresses for the Google Chrome Browser offline installation package, which can be updated automatically just like an online installation. Download from Chrome Release Channels www.chromium.org/getting-involved/dev-channel Download Google Chrome Offline Installer for Windows User www.google.com/chrome/eula.html?hl=en-US&standalone=1 ### VMware ESXi 5 Installation and Basic Setup VMware ESXi Introduction VMware ESX is an enterprise-level computer virtualization product offered by VMware, Inc. ESX is a component of VMware's larger offering, VMware Infrastructure, and adds management and reliability services to the core server product. VMware is replacing the original ESX with ESXi. VMware ESX and VMware ESXi are bare metal embedded hypervisors that are VMware's enterprise software hypervisors for guest virtual servers that run directly on host server hardware without requiring an additional underlying operating system. The basic server requires some form of persistent storage (typically an array of hard disk drives) that stores the hypervisor and support files. A smaller footprint variant, ESXi, does away with the first requirement by permitting placement of the hypervisor on a dedicated compact storage device. Both variants support the services offered by VMware Infrastructure. VMware ESXi 5 Installation Before you install Before starting the installation, please go to the official website to download the latest program. 
VMware Official Website www.vmware.com Burn the downloaded ISO file to a CD and set the machine to boot from it. Install VMware ESXi When the ESXi-5.0.0.1-469512-standard boot menu (the ESXi 5 boot menu) appears, select the ESXi-5.0.0.1-469512-standard Installer to install. The installer then loads the contents of the disc; this takes a little time, so please be patient while it finishes loading data. At the ESXi 5 installation welcome screen, press Enter. Press F11 to agree to the terms of use. Scanning for devices takes a little time; please wait. The scan then lists the detected hard drives; select the appropriate drive on which to install ESXi 5 and press Enter. If the selected hard drive already contains data, a warning appears with information about the drive: if you choose to continue, the data on this hard drive will be overwritten. If that is acceptable, press Enter. Next, select the preferred keyboard layout; the default US Default is fine. Please enter the login password for ESXi. The installer then processes the relevant information; please wait. You are reminded once more that the hard drive will be partitioned. If there is no problem, press F11 to start the installation. The installation progress is then shown. After installation is complete you will be prompted to reboot. VMware ESXi Basic Settings After rebooting, the start-up screen shows that the installation is complete. To log in and configure the settings, press F2. After you press F2, enter your username and password. Account: root. Password: the password you set during installation. Next, set the IP; the VMware vSphere Client will need it later for its connection settings. Please choose "Configure Management Network". Please select "IP Configuration". You can then choose between a dynamic IP and a static IP. 
To set a fixed IP, please select "Set static IP address and network configuration". Then set the DNS and hostname: select "DNS Configuration", enter the first and second DNS servers in Primary DNS Server and Alternate DNS Server, and set the Hostname accordingly. Then configure SSH: select "Troubleshooting Options" and toggle the SSH service setting, which makes troubleshooting future SSH connection problems easier. VMware vSphere Client Connection Download VMware vSphere Client and complete the installation. Enter the IP address / name, the user name, and the password to connect. Now VMware ESXi 5 installation and basic setup is complete. ### Wiener Philharmoniker Summer Night Concert Schönbrunn 2013 Date: 2013-05-30, 21:00 Location: Schönbrunn Palace Gardens (Vienna, Austria) Conductor: Lorin Maazel Michael Schade, Tenor Program: Giuseppe Verdi: Triumphal March from "Aida" Richard Wagner: Prelude to "Die Meistersinger von Nürnberg" Giuseppe Verdi: "La mia letizia infondere" from "I Lombardi" Giuseppe Verdi: Dances from "Otello" Richard Wagner: Prelude and Liebestod from "Tristan und Isolde" Giuseppe Verdi: Overture to "Luisa Miller" Richard Wagner: Grail Narrative from "Lohengrin" Giuseppe Verdi: Overture to "La Forza del Destino" Richard Wagner: Prelude to Act III of "Die Walküre" (Ride of the Valkyries) Download Vienna Philharmonic Summer Night Concert Schönbrunn 2013 1080P _________________________________________________________________________ WIENER PHILHARMONIKER : Sommernachtskonzert Schönbrunn , 05.30.2013 ________________________________________________________________________ Files..........: Part I & II Size...........: 4.28 GiB Source.........: HDTV 1080i [Mpeg2 video @ 37.4 Mbps] Container......: mkv Resolution.....: 1920 x 1080 progressive Codec..........: h264 @ 7000 Kbps Format.profile.: High at L4.0 Audio1.Codec...: AC3 Stereo @ 380 kbps Frame Rate.....: 25 fps Runtime........: 1:23:00 
_________________________________________________________________________
Part I
Giuseppe Verdi: Triumphal March from the opera "Aida"
Richard Wagner: Prelude to the opera "Die Meistersinger von Nürnberg", WWV 96
Part II
Richard Wagner: Prelude and Isolde's Liebestod from the opera "Tristan und Isolde", WWV 90
Giuseppe Verdi: Overture to the opera "Luisa Miller"
Richard Wagner: "In einem fernen Land", Lohengrin's scene from "Lohengrin" (Grail Narrative)
Giuseppe Verdi: Overture to the opera "La forza del destino"
Richard Wagner: Ride of the Valkyries from the opera "Die Walküre"
_________________________________________________________________________
Ripped from an HD 1080 satellite feed
Encoded and capped by MME (Me, Myself and Eye)
Enjoy MAURICELUGHER
_________________________________________________________________________

### Save screen brightness for the Ubuntu 12.04 laptop

Open a terminal and enter the command:

$ sudo gedit /etc/default/grub

Find the line:

GRUB_CMDLINE_LINUX=""

and change it to:

GRUB_CMDLINE_LINUX="acpi_backlight=vendor"

Update GRUB:

$ sudo update-grub

Set the initial brightness value:

$ sudo gedit /etc/rc.local

Add this line before exit 0:

echo 0 > /sys/class/backlight/acpi_video0/brightness

The value ranges from 0 to 10, corresponding to 0 to 100%; echo 5 means half brightness. Save the file, exit, and restart your computer.

### Integration Proteus and Keil

Proteus PCB design combines the ISIS schematic capture and ARES PCB layout programs to provide a powerful, integrated and easy to use suite of tools for professional PCB design. All Proteus PCB design products include an integrated shape-based autorouter and basic SPICE simulation capability as standard. More advanced routing modes are included in Proteus PCB Design Level 2 and higher, while simulation capabilities can be enhanced by purchasing the Advanced Simulation option and/or microcontroller simulation capabilities.
If you are learning to use microcontrollers, then besides using Protel flexibly to draw schematics and PCB layouts, you should also master Keil C51, which we use to write and debug microcontroller programs. Keil C51 is one of the best microcontroller application development tools: it combines editing, compiling, and simulation in one package, supports assembly, PL/M, and C programming, and is user-friendly and easy to use. Three topics are introduced below.

Proteus Get Started

Double-click the ISIS 7 Professional icon on the desktop, or click "Start" → "Programs" → "Proteus 7 Professional" → "ISIS 7 Professional" at the bottom left of the screen, to enter the Proteus ISIS integrated environment.

Working Interface

The Proteus ISIS working interface is a standard Windows interface, including: the title bar, the main menu, the standard toolbar, the drawing toolbar, the status bar, the object selection button, the object orientation control buttons, the simulation control buttons, the preview window, the object selector window, and the graphics editing window.

Example Design

Add the required components to the object selector window. To pick components for the schematic, click the object selector button. The "Pick Devices" page pops up; enter AT89C in the "Keywords" field, and the system searches the object library and displays the matches in the "Results" list. In the "Results" list, double-click "AT89C52" to add it to the object selector window. Then re-enter LED in the "Keywords" field and double-click "LED-BLUE" to add the LED-BLUE (LED digital tube) to the object selector window. Using the same method, add the 10WATT470R resistor to the object selector window.
After the above operations, the object selector window contains three component objects: AT89C52, LED-BLUE, and 10WATT470R. If you click AT89C52, the preview window shows the physical outline of the AT89C52; clicking the other two devices lets you browse their outlines as well. At this point, note that the component button on the drawing toolbar is selected.

Placing Components onto the Schematic

In the object selector window, select AT89C52, move the mouse into the graphics editing window to the location where you want to place it, and click the left mouse button; the object is placed. Similarly, place LED-BLUE and 10WATT470R into the graphics editing window. If you need to move an object, hover over it and click the right mouse button; the object's color changes to red, indicating it is selected. Press the left mouse button, drag the object to its new location, and release the mouse to complete the move.

Place a bus in the graphics editing window: click the bus button on the drawing toolbar so that it is selected. Move the mouse into the graphics editing window and click the left mouse button to set the starting point of the bus; as you move the mouse, a thin pink line appears on the screen. Find the end point of the bus, click the left mouse button, and then click the right mouse button to confirm and exit the bus-drawing operation. The thin pink line is then replaced by a thick blue line.

Wiring Up Components on the Schematic

Proteus can intelligently detect when you want to draw a wire. Below, we connect the upper end of resistor R1 to the lower end of the digital tube D1.
When the mouse pointer approaches the connection point at the top of R1, an "×" appears next to the pointer, indicating that a connection point on R1 has been found. Click the left mouse button, then move the mouse (without dragging) until the pointer approaches the connection point at the lower end of D1; an "×" appears again, indicating that a connection point on D1 has been found, and a pink connection line appears on the screen. Click the left mouse button; the pink line turns dark green, and the connection is complete. Proteus has a wire auto-router (WAR): when two connection points are selected, WAR chooses an appropriate path for the connection. WAR can be turned on or off with the "WAR" command button on the standard toolbar; the same command can also be found under "Tools" in the menu bar. Other connections can be completed in the same way. At any time during this process, you can press the ESC key or click the right mouse button to abandon the wire being drawn.

Connecting components to the bus: click the Wire Labels button on the drawing toolbar so that it is selected. Place the cursor on one end of a component in the graphics editing window, move the mouse to connect the wire to the bus, then move the mouse along the wire connecting the device to the bus until an "×" appears, indicating that the wire to be labeled has been found. Click the left mouse button to bring up the Edit Wire Label window. Enter the label name in the "String" field and click "OK"; the wire label is set. Label the other wires in the same way to complete the connections. Note that, when labeling the wires, wires that belong together must be given the same label name: wires with the same label are electrically connected, which is especially evident when drawing schematics in Protel. At this point, the entire schematic is complete.
Keil

Double-click the Keil uVision2 icon on the desktop, or click "Start" → "Programs" → "Keil uVision2" at the bottom left of the screen, to enter the Keil uVision2 integrated environment.

Working Interface

The Keil uVision2 working interface is a standard Windows interface, including the title bar, the main menu, the standard toolbar, the code window, etc.

Keil & Proteus Debug

If Keil C51 and Proteus are both installed in the C:\Program Files directory, copy C:\Program Files\Labcenter Electronics\Proteus 8 Professional\MODELS\VDM51.dll to the C:\Program Files\keilC\C51\BIN directory. If there is no "VDM51.dll" file, you can download it from here. Use Notepad to open C:\Program Files\keilC\C51\TOOLS.INI and add the following to the [C51] section:

TDRV5=BIN\VDM51.DLL ("Proteus VSM Monitor-51 Driver")

In "TDRVX", choose the "X" to match your actual configuration; do not duplicate an existing entry.

Keil Configuration

Click "Project" → "Options for Target", or click the "Options for Target" button on the toolbar; in the pop-up window, click the "Debug" tab. In the drop-down menu in the upper right column, select "Proteus VSM Monitor-51 Driver", and also click "Use" so that the radio button in front of it is selected. Then click the "Settings" button to set the communication interface: enter "127.0.0.1" in the "Host" field. If Keil and Proteus are not running on the same computer, enter the other computer's IP address here instead (Proteus must also be installed on that computer). Enter "8000" in the "Port" field. Once this is set, click the "OK" button. Finally, compile the project into the debug state and run it. After any configuration change, re-compile and link to regenerate the executable file.

Proteus Configuration

In Proteus ISIS, click the "Debug" menu and select "Use Remote Debug Monitor"; after that, Keil can connect to Proteus for debugging.
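For reference, after the TOOLS.INI edit described above, the [C51] section might look something like the sketch below. The PATH, VERSION, and TDRV0 entries are illustrative examples of what is typically already in the file and will differ on your machine; only the TDRV5 line is the one you add, and its index must not clash with existing TDRVn entries.

```ini
[C51]
; existing entries (illustrative -- your file will differ)
PATH="C:\Program Files\keilC\C51"
VERSION=V9.00
TDRV0=BIN\MON51.DLL ("Keil Monitor-51 Driver")
; added line: registers the Proteus VSM debug driver
TDRV5=BIN\VDM51.DLL ("Proteus VSM Monitor-51 Driver")
```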
Proteus official website: www.labcenter.com
Keil official website: www.keil.com

Download Proteus V8.0 SP1
Download Keil C51 V9.00

File: Keil_C51_V9.00_Installer.zip
Size: 30683859 Byte
MD5: 34A200FDBA8B834DD788E82A22D2DD13
SHA1: 9C2911B29F8C9D46BE5CCBD95C08ECC6A773FA83
CRC32: 12241756

### How To change Adobe Photoshop CS6 Simplified Chinese version to the English version

Find the file tw10428.dat; for example, my file path is:

C:\Program Files\Adobe\Adobe Photoshop CS6\Locales\zh_CN\Support Files\tw10428.dat

Rename tw10428.dat to tw10428.dat.bak.

### Solve WordPress correct password can't login the control panel

If you can't log in to the WordPress control panel with the correct password and are told the password is wrong, try the following. Use a text editor to open /wp-includes/pluggable.php in the WordPress install directory and change:

setcookie($auth_cookie_name, $auth_cookie, $expire, ADMIN_COOKIE_PATH, COOKIE_DOMAIN, $secure, true);

to

setcookie($auth_cookie_name, $auth_cookie, $expire, SITECOOKIEPATH, COOKIE_DOMAIN, $secure, true);

Save and close the file, then log in again. Please back up the file before changing it.

### Upgrade to a Newer Version of Ubuntu

Before you begin, back up your important files. Before starting any upgrade, it is also recommended to disable all third-party software repositories. When everything is ready, use the following steps.

For the desktop version

Open a terminal and run:

$ sudo apt-get update && sudo apt-get dist-upgrade

Press the Alt+F2 key combination to bring up the run command box and enter:

$ update-manager -d

The "Software Update" window will appear.
Check for updates; you will see the prompt "Ubuntu 13.04 is now available". Click Upgrade.

If the "System program problem detected" dialog pops up regularly after the upgrade, open a terminal and remove all old crash reports that might exist in the crash-report directory:

$ sudo rm /var/crash/*

After a reboot the popups are gone.

For the server version

Execute the following commands:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install update-manager-core
$ sudo do-release-upgrade -d

If you are using an LTS version, execute the following commands:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install update-manager-core
$ sudo vim /etc/update-manager/release-upgrades

In that file, change the line Prompt=lts to Prompt=normal. Then execute:

$ sudo do-release-upgrade -d

### WordPress Install Plug-ins Prompted to Enter the FTP Account Information

If you are prompted to enter FTP account information when installing a plug-in from the WordPress dashboard, the problem is that Apache/Nginx is not running as the owner of the files. If you are using a stand-alone server or VPS and the WordPress installation directory is /var/www/wordpress, modify the permissions of the site directory:

$ sudo chmod -R 755 /var/www/wordpress

Then change the owner of all files in the site directory to www:

$ sudo chown -R www /var/www/wordpress

If you are using a virtual host, add the following code to wp-config.php:

define("FS_METHOD", "direct");
define("FS_CHMOD_DIR", 0777);
define("FS_CHMOD_FILE", 0777);

### WordPress Database Clean-up Optimization Plugin

WP Clean Up can help clean up the WordPress database by removing "revision", "draft", "auto draft", "moderated comments", "spam comments", "trash comments", "orphan postmeta", "orphan commentmeta", "orphan relationships", and "dashboard transient feed" entries. It lets you optimize your WordPress database without phpMyAdmin. It adds a settings page at "Dashboard" → "Settings" → "WP Clean Up" where you can clean up the database.
It allows you to remove the redundant data within a few clicks. It is an effective tool to keep your database clean and will surely save a lot of space in your MySQL database.

Screenshots
Official Website

### Share a GIMP Chinese Tutorial

GIMP (GNU Image Manipulation Program) is an image retouching and editing tool released under the GPLv3 license as free and open-source software. There are versions tailored for most operating systems, including Microsoft Windows, OS X and Linux. GIMP has tools for image retouching and editing, free-form drawing, resizing, cropping, photo montages, converting between different image formats, and more specialised tasks. Animated images such as GIF and MPEG files can be created using an animation plugin. The developers and maintainers of GIMP strive to create a high-end free software graphics application for the editing and creation of original images, photos, icons, graphical elements of web pages, and art for user interface elements. Click here to download.

### Sublime Text 2/3 GBK Encoding Support and Input Chinese with ibus on Ubuntu

Sublime Text 3 GBK Encoding Support

Press Ctrl+~ to open the console, then enter the following code:

import urllib.request,os,hashlib; h = '2915d1851351e5ee549c20394736b442' + '8bc59f460fa1548d1514676163dafc88'; pf = 'Package Control.sublime-package'; ipp = sublime.installed_packages_path(); urllib.request.install_opener( urllib.request.build_opener( urllib.request.ProxyHandler()) ); by = urllib.request.urlopen( 'http://packagecontrol.io/' + pf.replace(' ', '%20')).read(); dh = hashlib.sha256(by).hexdigest(); print('Error validating download (got %s instead of %s), please try manual install' % (dh, h)) if dh != h else open(os.path.join( ipp, pf), 'wb' ).write(by)

After it runs, restart Sublime Text 3 before proceeding to the next step.
Install ConvertToUTF8

Press Command+Shift+P (on OS X) or Ctrl+Shift+P (on Linux/Windows) to open the command palette, type "install package", and press Enter; a list of all packages will load. When the list appears, type ConvertToUTF8 and press Enter; the package will be downloaded and installed.

Sublime Text 2 GBK Encoding Support

Open the Preferences menu and select Browse Packages; the system opens the Sublime Text 2 Packages folder. Return to the parent folder and open the Installed Packages folder. Download Package Control.sublime-package and copy it into the Installed Packages folder. Restart Sublime Text 2. Press Command+Shift+P (on OS X) or Ctrl+Shift+P (on Linux/Windows) to open the command palette and type Install Package to search for packages. With ConvertToUTF8 and GBK Encoding Support installed, CJK files can be read and written properly. If you are prompted "Encoding GBK is not supported", please install the Codecs26 package.

Input Chinese with ibus on Ubuntu

Installation: InputHelper is a small plugin for Chinese input in Sublime Text 2. It opens a new GTK window with a text input field that supports input methods like ibus, then inserts the text back into Sublime Text 2 when you finish typing.

Manual install (run in a terminal):

$ cd ~/.config/sublime-text-2/Packages
$ git clone https://github.com/xgenvn/InputHelper.git

Usage

Make sure your ibus/scim works normally in other GTK applications. Default key combination: Ctrl+Shift+Z → type text into the text field → press Enter or Ctrl+Enter to place the text → press an arrow key or End to move the cursor to the last position. Ctrl+Enter inserts a new line after the input text. This plugin only works on Linux! On Windows or Mac you shouldn't have the above issues, so you won't need it.

### Ubuntu can not be shut down solutions

Step 1

In the terminal, enter:

$ sudo gedit /etc/modules

Add the following line at the end of the file, then save it.
apm power_off=1

Step 2

In the terminal, enter:

$ sudo gedit /etc/rc0.d/S90halt

Find these two sections of code:

hddown="-h"
if grep -qs '^md.*active' /proc/mdstat
then
hddown=""
fi

and

netdown="-i"
if [ "$NETDOWN" = "no" ]; then
netdown=""
fi

Comment both of them out:

#hddown="-h"
#if grep -qs '^md.*active' /proc/mdstat
#then
# hddown=""
#fi

#netdown="-i"
#if [ "$NETDOWN" = "no" ]; then
# netdown=""
#fi

Step 3

In the terminal, enter:

$ sudo gedit /etc/default/grub

Find:

GRUB_CMDLINE_LINUX=""

and modify it to:

GRUB_CMDLINE_LINUX="noacpi acpi=off acpi=force apm power_off=1"

Step 4

$ sudo update-grub

Reboot your system.

### Install Ubuntu dual graphics laptop computer how to disable discrete graphics

The laptop I use is a DELL Inspiron 5520 15RD-2518.
Integrated graphics: Intel HD Graphics 4000
Discrete graphics: AMD Radeon HD 7670M

Ubuntu's support for ATI graphics cards is not very good, and the discrete card is rarely needed, so turning it off reduces the system temperature and power consumption. Execute the following command in a terminal to view the graphics state:

$ sudo cat /sys/kernel/debug/vgaswitcheroo/switch

If it displays something similar to the following:

0:IGD:+:Pwr:0000:00:02.0
1:DIS: :Pwr:0000:01:00.0

Pwr means the card is powered on; this shows that both graphics cards are turned on. You can turn off the discrete graphics by modifying the /etc/rc.local file. Open rc.local:

$ sudo gedit /etc/rc.local

Add the following statements before the exit 0 line in /etc/rc.local:

echo IGD > /sys/kernel/debug/vgaswitcheroo/switch   # Switch to integrated graphics
echo OFF > /sys/kernel/debug/vgaswitcheroo/switch   # Power off the disconnected graphics card

After editing, save and exit the text editor. Restart the computer and then view the graphics state again.
$ sudo cat /sys/kernel/debug/vgaswitcheroo/switch

If it displays something similar to the following, the discrete graphics card has been turned off:

0:IGD:+:Pwr:0000:00:02.0
1:DIS: :Off:0000:01:00.0

### Participate in performances today

Today, our school's College Military Band took part in a performance, playing "Pirates of the Caribbean".

### Ubuntu Software Installation Reference

Here is some of the software I use on Ubuntu. If you would like to recommend software to me, please leave a message.

Input Method Platform

IBus
The Intelligent Input Bus (IBus) is an input method framework for multilingual input in Unix-like operating systems. It's called a "Bus" because it has a bus-like architecture. You can use IBus to type in your own language in most GUI applications.
Official Website code.google.com/p/ibus

Network Applications

OpenFetion
OpenFetion is a Fetion client for Linux based on GTK+ 2.0, implementing Fetion Protocol Version 4. It supports most features of Fetion. What's more, it's lightweight and efficient with an intuitive interface. OpenFetion is the main branch of the ofetion project, which also contains pidgin-openfetion, libofetion and cliofetion.

Skype
With more than 500 million registered users, Skype is the most popular voice communication service in the world. The Skype service comes with an application, called a softphone, which is why many people think of Skype as being only software, overlooking the great service behind it.
Official Website www.skype.com

aria2
aria2 is a lightweight multi-protocol & multi-source command-line download utility. It supports HTTP/HTTPS, FTP, SFTP, BitTorrent and Metalink. aria2 can be manipulated via built-in JSON-RPC and XML-RPC interfaces.
Official Website aria2.sourceforge.net

Web Browsers

Chrome
Google Chrome is a browser that combines a minimal design with sophisticated technology to make the web faster, safer, and easier.
Official Website www.google.com/intl/en/chrome/browser

Mozilla Firefox
Mozilla Firefox is one of the best browsers on the market, and it's free. Through the unique development methods of open source, the Mozilla project is able to make a product with impressive speed and fewer bugs than programs developed by traditional methods. Mozilla Firefox has a number of unique features and is overall a good product.
Official Website www.mozilla.org

FTP Client

FileZilla
FileZilla is a free and open source FTP client for Windows, Mac and Linux. It is developed and maintained by Tim Kosse and the FileZilla team. Development started in 2001 and it has evolved to become one of the most popular FTP clients in use today.
Official Website filezilla-project.org

E-mail Client

Thunderbird
Thunderbird is a free, open-source, cross-platform application for managing email and news feeds. It is a local (rather than web-based) email application that is powerful yet easy to use.

Office and Text Processing

LibreOffice
LibreOffice is a free and open source office suite, developed by The Document Foundation. It is descended from OpenOffice.org, from which it was forked in 2010. The LibreOffice suite comprises programs for word processing, spreadsheets, slideshows, diagrams, databases, and math formulae.
Official Website www.libreoffice.org

Programmer's Text Editors

Vim
Vim is a highly configurable text editor built to enable efficient text editing. It is an improved version of the vi editor distributed with most UNIX systems. Vim is often called a "programmer's editor," and is so useful for programming that many consider it an entire IDE. It's not just for programmers, though: Vim is perfect for all kinds of text editing, from composing email to editing configuration files. Vim can also be configured to work in a very simple (Notepad-like) way, called evim or Easy Vim.
Official Website www.vim.org

gedit
gedit is a text editor for the GNOME desktop environment, Mac OS X and Microsoft Windows. Designed as a general purpose text editor, gedit emphasizes simplicity and ease of use. It includes tools for editing source code and structured text such as markup languages. It is designed to have a clean, simple graphical user interface according to the philosophy of the GNOME project, and it is the default text editor for GNOME.
Official Website projects.gnome.org/gedit

Sublime Text
Sublime Text is a cross-platform text and source code editor with a Python API. Its chrome was inspired by Vim, and its functionality is extendable with sublime-packages. Sublime Text is neither open source nor free software, but some of the extension packages have free-software licenses and are community-built and maintained.
Official Website www.sublimetext.com

Document Readers

ChmSee
A CHM reader with Chinese-language support.

Images and Audio-Visual

Shotwell
Both Ubuntu and Fedora ship with an application called Shotwell for photo management (it has basically replaced F-Spot, which did a similar task). It functions much like the Organizer in Adobe Photoshop Elements or similar applications bundled with modern digital cameras.
Official Website www.yorba.org/projects/shotwell

Image Modification / Processing

XnView
XnView is a cross-platform image viewer used for viewing, converting, organising and editing graphical & video files. It is free of charge for private, educational and non-profit organisations. For commercial use and distribution, the user must register the program. It provides features normally found only in commercial image viewers.
Official Website www.xnview.com/en/index.php

GIMP
GIMP is an acronym for GNU Image Manipulation Program. It is a freely distributed program for tasks such as photo retouching, image composition and image authoring. It has many capabilities.
It can be used as a simple paint program, an expert-quality photo retouching program, an online batch processing system, a mass production image renderer, an image format converter, and more. GIMP is expandable and extensible: it is designed to be augmented with plug-ins and extensions to do just about anything. The advanced scripting interface allows everything from the simplest task to the most complex image manipulation procedures to be easily scripted. GIMP is written and developed under X11 on UNIX platforms, but basically the same code also runs on MS Windows and Mac OS X.
Official Website www.gimp.org

Music Players

Audacious
Audacious is an open source audio player. A descendant of XMMS, Audacious plays your music how you want it, without stealing away your computer's resources from other tasks. Drag and drop folders and individual song files, search for artists and albums in your entire music library, or create and edit your own custom playlists. Listen to CDs or stream music from the Internet. Tweak the sound with the graphical equalizer or experiment with LADSPA effects. Enjoy the modern GTK-themed interface or change things up with Winamp Classic skins. Use the plugins included with Audacious to fetch lyrics for your music, to set an alarm in the morning, and more.
Official Website audacious-media-player.org

Rhythmbox
Rhythmbox is an integrated music management application, originally inspired by Apple's iTunes. It is free software, designed to work well under the GNOME desktop, and based on the powerful GStreamer media framework.
Official Website projects.gnome.org/rhythmbox

Video Player

VLC
VLC media player (informally just VLC) is a highly portable free and open-source cross-platform media player and streaming media server written by the VideoLAN project. VLC media player supports many audio and video compression methods and file formats, including DVD-Video, video CD and streaming protocols.
It is able to stream over computer networks and to transcode multimedia files.
Official Website www.videolan.org/vlc

Audio & Video Recording

RecordMyDesktop
recordMyDesktop is a free and open source desktop screencasting application for GNU/Linux. The program is separated into two parts: a command line tool that performs the tasks of capturing and encoding, and an interface that exposes the program's functionality graphically. There are two front-ends written in Python, with PyGTK (gtk-recordMyDesktop) and PyQt4 (qt-recordMyDesktop). recordMyDesktop can also record audio through ALSA, OSS or the JACK audio server. It only outputs to Ogg, using Theora for video and Vorbis for audio.

Audio Creation

Audacity
Audacity is a free, easy-to-use and multilingual audio editor and recorder for Windows, Mac OS X, GNU/Linux and other operating systems.

NtEd
A musical score editor for Linux (only).
Official Website vsr.informatik.tu-chemnitz.de/~jan/nted/nted.xhtml

Non-linear Video Editing

OpenShot Video Editor
OpenShot Video Editor is a free and open-source video editing package for GNU/Linux, built with Python, GTK, and the MLT Framework. The project was started in August 2008 by Jonathan Thomas, with the objective of providing a stable, free, and friendly-to-use video editor.

Web Server

Apache
The Apache HTTP Server, commonly referred to as Apache, is a web server program notable for playing a key role in the initial growth of the World Wide Web. In 2009 it became the first web server software to surpass the 100 million website milestone. Apache was the first viable alternative to the Netscape Communications Corporation web server (currently named Oracle iPlanet Web Server). Typically Apache is run on a Unix-like operating system, and it was developed for use on Linux. Apache is developed and maintained by an open community of developers under the auspices of the Apache Software Foundation.
The application is available for a wide variety of operating systems, including Unix, FreeBSD, Linux, Solaris, Novell NetWare, OS X, Microsoft Windows, OS/2, TPF, and eComStation. Released under the Apache License, Apache is open-source software. Apache was originally based on NCSA HTTPd code, though the NCSA code has since been removed due to a rewrite. Since April 1996 Apache has been the most popular HTTP server software in use. As of December 2012 Apache was estimated to serve 63.7% of all active websites and 58.49% of the top servers across all domains.
Official Website www.apache.org

Database Server

MySQL
MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open source web application software stack (and other "AMP" stacks). LAMP is an acronym for "Linux, Apache, MySQL, Perl/PHP/Python". Free and open-source software projects that require a full-featured database management system often use MySQL.
Official Website www.oracle.com/index.html

IDE

Geany
Geany is a lightweight cross-platform GTK+ text editor based on Scintilla, including basic Integrated Development Environment (IDE) features. It is designed to have short load times, with limited dependency on separate packages or external libraries. It is available for a wide range of operating systems, such as BSD, Linux, Mac OS X, Solaris and Windows. Among the supported programming and markup languages are C, C++, C#, Java, JavaScript, PHP, HTML, LaTeX, CSS, Python, Perl, Ruby, Pascal, Haskell, Erlang, Vala and many others.
Official Website www.geany.org

OpenJDK Java
OpenJDK (Open Java Development Kit) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). It is the result of an effort Sun Microsystems began in 2006. The implementation is licensed under the GNU General Public License (GNU GPL) with a linking exception.
The linking exception exempts components making use of the Java class library from the GPL licensing terms. OpenJDK is the official Java SE 7 reference implementation.

System Setup and Management

Synaptic Package Manager
Synaptic is a GTK+ graphical user interface front-end to the Advanced Packaging Tool for the Debian package management system. Synaptic is usually used on systems based on deb packages but can also be used on systems based on RPM packages. It can be used to install, remove and upgrade software packages and to add repositories.
Official Website www.nongnu.org/synaptic

PulseAudio
PulseAudio is a sound server for POSIX and Win32 systems. A sound server is basically a proxy for your sound applications: it allows you to do advanced operations on your sound data as it passes between your application and your hardware. Things like transferring the audio to a different machine, changing the sample format or channel count, and mixing several sounds into one are easily achieved using a sound server.

Gpick
gpick is an advanced color picker used to pick colors from anywhere on the screen, mix them to get new colors, generate shades and tints, and export palettes to common file formats or simply copy them to the clipboard.
Official Website code.google.com/p/gpick

Ubuntu One
Ubuntu One is the personal cloud that brings your digital life together, so you can enjoy your content, your way, wherever you are. With its suite of cloud services including storage, sync, sharing and streaming, you have immediate access to your music collection, favorite photos, videos, important documents and more, at any time and from any device.
Official Website one.ubuntu.com

VMware Workstation
VMware Workstation is a hypervisor that runs on x64 computers; it enables users to set up multiple virtual machines (VMs) and use them simultaneously along with the actual machine.
Each virtual machine can execute its own operating system, such as Microsoft Windows, Linux or BSD variants. As such, VMware Workstation allows one physical machine to run multiple operating systems simultaneously. Workstation is developed and sold by VMware, Inc., a division of EMC Corporation. Official Website www.vmware.com Furius ISO Furius ISO Mount is a free GNU/Linux program to mount (without burning to disk) and burn disk image files (ISO, IMG, BIN, MDF and NRG) using a GTK+ visual interface. Ubuntu Tweak Ubuntu Tweak is an application that makes configuring Ubuntu easier for everyone. It provides many useful desktop and system options that the default desktop environment doesn't provide. With its help, you will enjoy the Ubuntu experience! Official Website ubuntu-tweak.com Ark Ark is an archiving tool for the KDE Software Compilation, included in the kdeutils package. Ark does not intrinsically understand any archive format, but instead acts as a frontend to command-line archivers. It can work with many backends, including 7z, tar, rar, zip, gzip, bzip2, lha, zoo, and ar. Official Website utils.kde.org/projects/ark VNC VNC (an abbreviation for Virtual Network Computing) is a great client/server software package allowing remote network access to graphical desktops. With VNC, you can access your machine from everywhere provided that your machine is connected to the Internet. VNC is free (released under the GNU General Public License) and it's available on most platforms. Official Website www.realvnc.com In addition to the software I am using on Ubuntu, there is plenty of other useful software; see the Ubuntu Software reference for more. ### Airdroid - Free Remote Into Your Android From Web Browser Just wanted to post this for everyone; it also works with port forwarding on your router and MyDNS for remote access when you are out and about or in another city. Links are provided below.
AirDroid is a fast, free app that lets you wirelessly manage & control your Android device from a web browser over-the-air. Enjoy the features below after installation, from the comfort of your favorite web browser on desktop or other devices: Files Transfer files between Android devices and computers. Cut, copy, paste, search, rename or delete files on the SD card. SMS Read, send, forward or delete SMS messages. Applications Install, uninstall, backup, search apps and do batch processing. Photos Preview, delete, import, export, set as wallpaper and photos slide show from desktop. Contacts Group, search, create contacts, check and delete call logs. Ringtones Search, preview, import, export, delete and customize ringtones for phone calls, notifications and alarms. Music Play, search, import, export, delete, or set as phone call, notification and alarm ringtones. AirDroid also comes with the following inbuilt features: Device Status Real-time ROM, SD Card, Battery, CPU and RAM status report & monitor with charts to demonstrate available/used/total resources, and one-tap memory boost. Tasks Manager Kill or uninstall running apps, batch operation supported. Apps Manager Uninstall, share or check details of user and system apps. Files Control Cut, copy, rename, send/share, delete, sort and create visible or hidden folders and files. Key words: web desktop, browser server, web sharing, remote desktop control, web remote desktop, wifi connect, wireless transfer, apps manager, tasks manager, contacts manager, SMS text message, call log delete, photos preview Official Website www.airdroid.com ### Install Sublime Text 2 on Ubuntu 12.04 (Unity) Sublime Text is an awesome text editor. If you've never heard of it, you should check it out right now. I've made this tutorial because there's no installer for the Linux versions of Sublime Text. While that's not a real problem, I feel there is a cleaner way to go about this.
Also, this post will show you how to integrate Sublime Text into Unity (which, I'm glad to report, has now matured into a fully functional user interface). So let's get on with this. Here is my "how to install Sublime Text on Ubuntu" tutorial. This tutorial has been updated following feedback from AskUbuntu. Also, you need to do these steps using your login. Do not start by typing sudo -s. Step 1 Download the tarfile that suits you best and extract it. Here's the command to extract tar.bz2 files: $ tar xf Sublime\ Text\ 2.0.1\ x64.tar.bz2 You'll notice that I got the 64-bit version. The reason is that it's lightning fast. So, go for that if you can! Step 2 You'll get a Sublime Text 2 folder after extraction. This folder contains all the files that Sublime Text will need. So we have to move that folder somewhere more appropriate, like the /opt/ folder: $ sudo mv Sublime\ Text\ 2 /opt/ Step 3 At some point you'd want to be able to call Sublime Text from the Terminal by just typing sublime. To do that, we'll just create a symbolic link in /usr/bin like this: $ sudo ln -s /opt/Sublime\ Text\ 2/sublime_text /usr/bin/sublime Step 4 Now that our files are in the right place, we need to create a launcher in Unity. To do this, we're going to create a .desktop file in /usr/share/applications: $ sudo sublime /usr/share/applications/sublime.desktop And paste the following content: [Desktop Entry] Version=1.0 Name=Sublime Text 2 # Only KDE 4 seems to use GenericName, so we reuse the KDE strings. # From Ubuntu's language-pack-kde-XX-base packages, version 9.04-20090413. GenericName=Text Editor Exec=sublime Terminal=false Icon=/opt/Sublime Text 2/Icon/48x48/sublime_text.png Type=Application Categories=TextEditor;IDE;Development X-Ayatana-Desktop-Shortcuts=NewWindow [NewWindow Shortcut Group] Name=New Window Exec=sublime -n TargetEnvironment=Unity As you can see, these lines are quite straightforward. Go ahead and experiment a bit with them.
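Steps 1-4 above can be condensed into a single script. The sketch below is my own consolidation, sandboxed under /tmp so it can be dry-run without root; for a real install, change the three directories to /opt, /usr/bin and /usr/share/applications and run the mv/ln steps with sudo. The launcher file is the Step 4 content, abridged:

```shell
# Sandboxed sketch of Steps 2-4; the /tmp paths are assumptions for a dry run.
set -e
rm -rf /tmp/sublime-demo
cd /tmp
PREFIX=/tmp/sublime-demo/opt                      # stands in for /opt
BINDIR=/tmp/sublime-demo/usr/bin                  # stands in for /usr/bin
APPDIR=/tmp/sublime-demo/usr/share/applications   # stands in for /usr/share/applications
mkdir -p "$PREFIX" "$BINDIR" "$APPDIR"
mkdir -p "Sublime Text 2"                         # stand-in for the extracted tarball
touch "Sublime Text 2/sublime_text"
mv "Sublime Text 2" "$PREFIX/"                                   # Step 2: move into place
ln -sf "$PREFIX/Sublime Text 2/sublime_text" "$BINDIR/sublime"   # Step 3: symlink
cat > "$APPDIR/sublime.desktop" <<EOF
[Desktop Entry]
Version=1.0
Name=Sublime Text 2
GenericName=Text Editor
Exec=sublime
Terminal=false
Icon=$PREFIX/Sublime Text 2/Icon/48x48/sublime_text.png
Type=Application
Categories=TextEditor;IDE;Development
EOF
echo "launcher written to $APPDIR/sublime.desktop"
```

Running it once leaves a symlink and a launcher file under the sandbox, which is a cheap way to check the quoting of the space-containing paths before touching the system directories.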
Step 5 Now you would probably want to open all text files with Sublime Text 2. The easiest way to do that is to open up the file associations list: $ sudo sublime /usr/share/applications/defaults.list And replace all occurrences of gedit.desktop with sublime.desktop. That's it. You now have Sublime Text 2 installed on Unity on Ubuntu 12.04, like a pro. ### Install Ubuntu 12.04 and Windows 8 dual system Note This method does not support booting from a UEFI BIOS with a GUID partition table. Before you install 1. Download the installation image from releases.ubuntu.com/precise; 2. Make sure the free hard disk space is greater than 10GB; 3. Burn the image to a USB drive of at least 1GB or a blank CD-RW of that size. If your computer supports booting from USB devices you can install from the USB drive; otherwise, use the CD to install; 4. Install EasyBCD in the Windows operating system. Because Ubuntu and Windows boot differently, it is not recommended to use the Ubuntu boot loader to start Windows. Start the installation 1. Free up hard disk space for the Ubuntu partitions. Unless you are using a Windows Home edition, you don't need a partitioning tool: Start Menu → right-click My Computer → Manage → Storage → Disk Management (Local). If your D drive has 120GB free and you want to give Ubuntu 60GB, right-click the D drive → Shrink Volume → enter 60000 in the "Enter the amount of space to shrink" box → Next until the end; 2. Burn the Ubuntu image to your USB drive or CD-RW; you can use the system's built-in tool or Nero to make a DVD.
Use UltraISO to make a bootable USB drive; 3. Restart the computer and boot from the CD or USB drive. When prompted, press the space bar, then select a language and choose Install Ubuntu. Select an installation language, select Install this third-party software, and choose whether to connect to the network. The next step is very important: please choose Something else. In the partitioning screen you will see, as free space, the disk space you shrank out in Windows Disk Management; create the Ubuntu partitions in this free space. Ubuntu reference partition configuration: /boot 100MB-250MB; / (the operating system and applications) 10GB-20GB; swap no mount point, used as virtual memory, generally not more than the physical memory; /home the personal file partition. For each one: choose the free space → Add → fill in the new partition size → fill in the mount point → OK. Create /boot, / and /home like this; for swap, set "Use as" to "swap area". When the partition setup is complete, note the device number of the /boot partition you created; mine, for example, was /dev/sdb1 (your number may differ). In the "Device for boot loader installation" drop-down box, select that device (e.g. /dev/sdb1) → Install Now. Next come the time zone, user name, and password settings (a Linux operating system password must be set). After the installation is complete, remove the boot media (CD or USB drive) and restart; the computer goes directly into the Windows operating system, because we put the Ubuntu boot loader in /boot, so we need EasyBCD to create a boot entry for it. Open EasyBCD → Add New Entry → Linux/BSD → fill in a custom item name → for Drive, choose the /boot partition we just created → Add Entry. So far, the Ubuntu 12.04 and Windows 8 dual-system installation is complete. When you are no longer using Ubuntu, delete the partitions where Ubuntu resides in Windows Disk Management, then remove the Ubuntu boot entry in EasyBCD; this will not affect the Windows operating system.
### Solve Skype Can't Input Chinese on Ubuntu (ibus input method) Execute the following commands in a terminal: $ sudo apt-get install ibus ibus-qt4 ibus-gtk $ sudo vim /etc/X11/xinit/xinput.d/default Edit the file so that it has the following content: # # This configuration provides default IM setting (user edittable) # See im-switch(8) and /usr/share/doc/im-switch/README.Debian . # # Define IM for traditional X application with XIM # # XIM server name used for XMODIFIERS="@im=$XIM" # XIM program /path/filename # XIM program command line arguments # # These were traditional setting before uim and scim for CJK languages # Language LC_CTYPE XIM server XMODIFIERS Start key # Japanese ja_JP* kinput2 "@im=kinput2" Shift-Space # Korean ko_KR* ami "@im=Ami" Shift-Space # Chinese(T) zh_TW.Big5 xcin "@im=xcin-zh_TW.big5" Ctrl-Space # Chinese(S) zh_CN.GB2312 xcin "@im=xcin-zh_CN.GB2312" Ctrl-Space # XIM=ibus XIM_PROGRAM=/usr/bin/ibus-daemon XIM_ARGS="--xim" XIM_PROGRAM_XTRA= # Set following variable to non-zero string if program set itself as deamon XIM_PROGRAM_SETS_ITSELF_AS_DAEMON= # # Define GTK and QT IM module # They may or may not be using xim as the IM. # GTK_IM_MODULE=ibus QT_IM_MODULE=ibus # # Define lists of packages neded for above IM to function # DEPENDS="ibus, ibus-gtk, ibus-qt4" # # Define X start up hook script to update IM environment # Registered users of Skype are identified by a unique Skype Name, and may be listed in the Skype directory. Skype allows these registered users to communicate through both instant messaging and voice chat. Voice chat allows telephone calls between pairs of users and conference calling, and uses a proprietary audio codec. Skype's text chat client allows group chats, emoticons, storing chat history and editing of previous messages. Offline messages were implemented in a beta of version 5, but removed after a few weeks without notification.
The usual features familiar to instant messaging users — user profiles, online status indicators, and so on — are also included. Official Website www.skype.com Download Windows International Edition download.skype.com/SkypeSetup.exe You can install the latest Skype in Ubuntu by just adding the Canonical Partner repository to your system. Open your terminal (Ctrl+Alt+T), copy and paste the command below for your release, and enter the user's password when prompted. Skype For Ubuntu 13.04 $ sudo sh -c "echo 'deb http://archive.canonical.com/ubuntu/ raring partner' >> /etc/apt/sources.list.d/canonical_partner.list" Skype For Ubuntu 12.10 $ sudo sh -c "echo 'deb http://archive.canonical.com/ubuntu/ quantal partner' >> /etc/apt/sources.list.d/canonical_partner.list" Skype For Ubuntu 12.04 $ sudo sh -c "echo 'deb http://archive.canonical.com/ubuntu/ precise partner' >> /etc/apt/sources.list.d/canonical_partner.list" Skype for Ubuntu 11.10 $ sudo sh -c "echo 'deb http://archive.canonical.com/ubuntu/ oneiric partner' >> /etc/apt/sources.list.d/canonical_partner.list" Skype for Ubuntu 11.04 $ sudo sh -c "echo 'deb http://archive.canonical.com/ubuntu/ natty partner' >> /etc/apt/sources.list.d/canonical_partner.list" After adding the repository, update your package lists by using the command below in the terminal. $ sudo apt-get update Once the update is finished, use the following command to install the latest Skype in Ubuntu. $ sudo apt-get install skype Press Y when you are prompted, and that's it. ### Modify the Hosts on Ubuntu To modify the hosts on Ubuntu you need to edit the /etc/hosts file (and, on some setups, the host.conf file), and restart the network after the modification is complete.
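For reference, each record in the hosts file is just an IP address followed by one or more host names separated by whitespace. A minimal sketch of a scripted edit, shown against a throwaway copy because changing the real /etc/hosts requires root (the IP and name below are placeholders, not a real mapping):

```shell
# Demonstrate a hosts-file edit on a copy; for the real thing, target /etc/hosts with sudo.
HOSTS=/tmp/hosts-demo
printf '127.0.0.1\tlocalhost\n' > "$HOSTS"           # minimal starting file
printf '93.184.216.34\texample.test\n' >> "$HOSTS"   # append a resolution record
grep 'example.test' "$HOSTS"                          # shows the appended record
```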
The specific process is as follows. Modify the hosts file by executing this command in a terminal: $ sudo gedit /etc/hosts Add the resolution records, save, and restart the network: $ sudo /etc/init.d/networking restart Google Code SmartHosts Project Home: code.google.com/p/smarthosts ### Modify the hosts to upload video to YouTube Modify the hosts file by adding the following rules at its end # Youtube Upload 203.208.46.181 youtube.com 203.208.46.181 www.youtube.com 203.208.46.181 gdata.youtube.com 203.208.46.181 m.youtube.com 203.208.46.181 help.youtube.com 203.208.46.181 upload.youtube.com 203.208.46.181 accounts.youtube.com 203.208.46.181 insight.youtube.com 203.208.46.181 apiblog.youtube.com 203.208.46.181 clients1.youtube.com 203.208.46.181 s.youtube.com 203.208.46.181 s2.youtube.com 203.208.46.181 i.ytimg.com 203.208.46.181 i1.ytimg.com 203.208.46.181 i2.ytimg.com 203.208.46.181 i3.ytimg.com 203.208.46.181 i4.ytimg.com 203.208.46.181 s.ytimg.com 203.208.46.181 ytimg.l.google.com ### Add Search Engine on Google Chrome Settings → Search → Manage search engines Add a new search engine: google Keyword: google.com URL with %s in place of query: https://www.google.com/search?hl=en&{google:RLZ}{google:acceptedSuggestion}{google:originalQueryForSuggestion}sourceid=chrome&ie={inputEncoding}&q=%s ### Installation LAMP ( Apache+MySQL+PHP ) on Ubuntu Introduction A "LAMP" stack is a group of open source software that is typically installed together to enable a server to host dynamic websites and web apps. The term is actually an acronym representing the Linux operating system with the Apache web server; the site data is stored in a MySQL database, and dynamic content is processed by PHP. Environment Versions Operating System : Ubuntu 16.10 Yakkety Yak 64-bit Server Edition Install Tasksel The tasksel package provides a simple interface for users who want to configure their system to perform a specific task.
$ sudo apt-get update && sudo apt-get install tasksel Install LAMP Server Run the following command in the terminal (the trailing caret tells apt to install the lamp-server task rather than a single package): $ sudo apt-get install lamp-server^ Test LAMP Stack Congratulations, your LAMP setup is complete. To verify, open your web browser and access http://localhost/ or You should see It works!, showing that Apache is working fine. Now create a phpinfo file /var/www/html/phpinfo.php using the following code: <?php phpinfo(); Access this file in the browser and you will get the PHP info. Install phpMyAdmin Run the following command in the terminal: $ sudo apt-get install phpmyadmin It needs to be configured during installation: Apache → YES → input the MySQL root password → set up the phpMyAdmin password → confirm the password. Installation of LAMP ( Apache + MySQL + PHP ) on Ubuntu is finished. ### Installing GoAgent on Ubuntu If you want to use GoAgent and you are on the Ubuntu operating system, try these steps (if you are using the OS X operating system, please refer to Installing GoAgent on OS X). Get GoAgent Download GoAgent 3.2.3.zip from Google Drive. Filename : GoAgent 3.2.3.zip Version : 3.2.3 MD5 : 8e7a9e7472843a731eb2675598cb4627 SHA : 4d6a9d8aae96ea666778c011f90d90e4711c967c Install GoAgent (make sure you have Python installed on your Ubuntu) Install gevent: $ sudo apt-get install python-gevent Upload your appid: $ python /path/to/goagent/server/uploader.zip If you don't have an appid, please check the Install Guide page: code.google.com/p/goagent/wiki/InstallGuide Add an extension to your browser Add the Proxy SwitchyOmega addon in Chrome or FoxyProxy in Firefox.
Solve the "the site's security certificate is not trusted" error when trying to visit HTTPS sites $ sudo apt-get install libnss3-tools Import the CA into the system: $ certutil -d sql:~/.pki/nssdb -A -t TC -n "goagent" -i /path/to/goagent/local/CA.crt Run GoAgent Just use the python command to start it: $ python /path/to/goagent/local/proxy.py Do everything you want :) ### Wubi Installation Ubuntu 12.04.2 LTS on Microsoft Windows Ubuntu is a computer operating system based on the Debian Linux distribution and distributed as free and open source software, using its own desktop environment. It is named after the Southern African philosophy of ubuntu, which can be translated as "humanity towards others" or "the belief in a universal bond of sharing that connects all humanity". As of 2012, according to online surveys, Ubuntu is the most popular Linux-based operating system on desktop/laptop personal computers, and most Ubuntu coverage focuses on its use in that market. However, it is also popular on servers and for cloud computing. Development of Ubuntu is led by Canonical Ltd., a UK-based company owned by South African entrepreneur Mark Shuttleworth. Canonical generates revenue through the sale of technical support and services related to Ubuntu. According to Canonical, the Ubuntu project is committed to the principles of open source development; people are encouraged to use free software, improve it, and distribute it. It is recommended to install an Ubuntu LTS ( Long Term Support ) version; during installation, select a partition size of 30GB. Before Install Visit the Ubuntu official website and download the wubi.exe and ubuntu-12.04.2-wubi-amd64.tar.xz files ( be careful that the downloaded versions are consistent ) Ubuntu Official Website www.ubuntu.com Download the installation image releases.ubuntu.com/precise Copy the wubi.exe and ubuntu-12.04.2-wubi-amd64.tar.xz files to the root of the C drive.
Run ( Press Windows + R ) → cmd, then run the following commands to have Wubi install Ubuntu: cd C:\ wubi.exe --dimagepath=c:\ubuntu-12.04.2-wubi-amd64.tar.xz Boot menu missing solution After Ubuntu is installed, the selected partition will contain an ubuntu folder; first move this folder to another directory or rename it. Re-install Ubuntu using wubi (note: when configuring, select the same configuration as the first installation). When the installation ends you will be prompted to "Restart the computer"; select "I plan to restart later.", then delete the newly created ubuntu folder on the selected installation partition and move the original ubuntu folder back to its original position. Restart the computer to solve the problem of the missing Ubuntu boot menu. ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows WAMPs are packages of independently created programs installed on computers that use a Microsoft Windows operating system. WAMP is an acronym formed from the initials of the operating system Microsoft Windows and the principal components of the package: Apache, MySQL and one of PHP, Perl or Python. Apache is a web server. MySQL is an open-source database. PHP, Perl and Python are scripting languages that can manipulate information held in a database and generate web pages dynamically each time content is requested by a browser. Other programs may also be included in a package, such as phpMyAdmin which provides a graphical user interface for the MySQL database manager. I use the Windows 8 Professional with Media Center 64-bit OS; this setup has also been tested on Windows 7, Windows XP Professional and Windows Server 2003, and should in theory work on all Windows operating systems above Windows NT 4.0, including 2000, XP, 2003, Vista, 7 and 2008.
Preparation before installation First, go to the official websites to download the latest versions of the installers: Apache HTTP Server 2.2.22 Apache Official Website www.apache.org Download Page www.apache.org/dist/httpd/binaries/win32 PHP 5.3.5 (VC6 x86 Thread Safe) PHP Official Website php.net Download Page windows.php.net/downloads/releases/archives MySQL Community Server 5.6.10.1 Oracle Official Website www.oracle.com Download Page www.mysql.com/downloads/mysql Apache HTTP Server We download the No SSL version (we do not need OpenSSL's encrypted web transmission), as the latest MSI installation package. PHP On the download page we go to the Windows version. Since we are using an Apache 2 environment (instead of IIS), we choose the latest VC6 MSI installation package. MySQL Community Server Download the latest 32-bit or 64-bit MSI installation package according to your computer's CPU. With the three installation packages downloaded: 1. Install Apache HTTP Server. For Network Domain and Server Name fill in localhost; for Administrator's Email Address fill in the administrator's e-mail address. For Destination Folder, fill in a custom Apache HTTP Server installation path. This is my installation path. 2. Install PHP. For Destination Folder, fill in a custom PHP installation path. This is my installation path. For Web Server Setup, choose "Apache 2.2.x Module" so PHP mounts onto the Apache HTTP Server ( the php5apache2_2.dll file will be installed under the PHP installation directory ). In "Apache Configuration Directory" choose the correct path to the Apache configuration file. For "Extensions", choose to install all the components. 3. Install MySQL Community Server. Select Install MySQL Products → Skip the check for update (not recommended) → Next. Click the "Custom" installation type. For Location fill in a custom MySQL Community Server installation path. This is my installation path.
Click Finish, then enter the MySQL Configuration Wizard. Choose "Developer Machine" - for a machine used for development; "Server Machine" - for use on a server; "Dedicated MySQL Server Machine" - dedicates all of the computer's resources to running the MySQL service. MySQL Services Security Options: in "root password" fill in a custom administrator password. Set up the MySQL Service in Windows, and uncheck "Launch the MySQL Server automatically" - meaning the MySQL service will not start automatically at boot. Ready to Apply Settings, click "Finish". At this point, MySQL Community Server is installed. 4. Configure the WAMP integrated environment. Find the Apache HTTP Server configuration file httpd.conf (in the conf folder of the installation directory) and open it with a text editor (such as Notepad; I use Sublime Text). Find the line beginning with DocumentRoot; the path within the quotation marks is the Web root directory, so modify it to your custom directory path. Find the line beginning with <Directory and likewise modify the path within the quotation marks to the same custom Web directory path. Find the area beginning with LoadModule and add the following two lines of code, where E:/Program Files/PHP is my PHP installation path; change it to correspond to your own. LoadModule php5_module "E:/Program Files/PHP/php5apache2_2.dll" PHPIniDir "E:/Program Files/PHP" Find the area beginning with AddType, which the site uses to identify file types, and add the following two lines of code. AddType application/x-httpd-php .php AddType application/x-httpd-php .html Find the DirectoryIndex line; DirectoryIndex lists the default page file names, so add index.php and default.php before index.html, separated by spaces. Open the "Run" dialog box ( Press Windows + R ), run services.msc. In the "Services" list, find "Apache2.2" and "MySQL", open the Properties dialog of each, set the "Startup Type" to "Manual" and click "OK".
Create command files containing the following four lines of command, which stop and start the "Apache2.2" and "MySQL" services; in the future you can switch the WAMP environment on and off from the command line through these files. Thus, the WAMP web environment has been installed successfully with the basic configuration complete. 5. Further configuration before the WAMP environment is officially put into use. The following settings do not affect basic PHP + MySQL development; if you need to support more advanced features, please follow the steps below. Turn on PHP error display Function: when the server generates an error while processing a PHP script file, detailed error information is sent in the HTTP output. Steps: open the PHP installation directory, use a text editor to open the php.ini configuration file, find display_errors = Off, revise it to display_errors = On and save. Turn on short_open_tag Function: enables the abbreviated form of the PHP code tags. Steps: open the PHP installation directory, use a text editor to open the php.ini configuration file, find short_open_tag = Off, revise it to short_open_tag = On and save. Modify the PHP default timezone Function: without a default time zone set for the server PHP runs on, functions such as date() and gmdate() will raise errors. Steps: open the PHP installation directory, use a text editor to open the php.ini configuration file, find ; date.timezone =, revise it to date.timezone = UTC and save. Modify the PHP file upload size limit Steps: open the PHP installation directory, use a text editor to open the php.ini configuration file, change the 2M in upload_max_filesize = 2M to the desired number and save (a value that is too large or invalid will not take effect, and the default 2M will be used automatically). Thus, the WAMP web environment can now be officially put into use.
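Step 4 above mentions a command file with "four lines of command" but does not list them. Assuming the services are registered under the names Apache2.2 and MySQL, as set in services.msc, the files would presumably look like this (Windows cmd; the service names are an assumption, check the Services list on your machine):

```
:: stop-wamp.bat - stop both services (service names assumed from the post)
net stop Apache2.2
net stop MySQL

:: start-wamp.bat - start them again
net start Apache2.2
net start MySQL
```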
Attached: common WAMP environment configuration errors and their solutions PHP: Unable to load dynamic library Description: an individual dynamic-link library (DLL) is missing or fails to load correctly, leading to a fatal error during startup; php_oci8.dll is a common example. Solution: install Zend Core and restart the Apache service. (Zend Core download address: www.zend.com) MySQL service startup error: Can't connect to MySQL server on 'localhost' (10061) Description: due to limited compatibility with the Windows environment, localhost cannot be resolved, causing a fatal error when the service starts. Solution: open the MySQL installation directory, use a text editor to open the my.ini configuration file, find the [mysqld] group, add a bind-address = 127.0.0.1 line, save, and restart the MySQL service. Attached: Install phpMyAdmin Download Page www.phpmyadmin.net/home_page/downloads.php Download the multi-language version of the ZIP archive ( pictured: "phpMyAdmin-3.3.9-all-languages.zip" ). After downloading, unzip it into the Web root directory and rename it to phpmyadmin. Enter the folder, copy the file config.sample.inc.php to config.inc.php, open it with a text editor, and in the area beginning with $cfg['Servers'] add the following two lines of code with your environment's MySQL user name and password. $cfg['Servers'][$i]['user'] = 'root'; $cfg['Servers'][$i]['password'] = 'root'; Change the 'cookie' in $cfg['Servers'][$i]['auth_type'] = 'cookie'; to 'config', meaning phpMyAdmin reads the login user name and password from the configuration file. Now the phpMyAdmin database management program is installed. ### WampServer Local Installation WordPress WampServer is a Windows web development environment. It allows you to create web applications with Apache2, PHP and a MySQL database. Alongside, phpMyAdmin allows you to easily manage your databases.
WampServer Official Website www.wampserver.com WordPress is a free and open source blogging tool and a content management system (CMS) based on PHP and MySQL which runs on a Web hosting service. It has many features including a plug-in architecture and a template system. WordPress is used by over 14.7% of Alexa Internet's "top 1 million" websites and as of August 2011 manages 22% of all new websites. WordPress is currently the most popular blogging system in use on the Web, powering over 60 million websites worldwide. It was first released on May 27, 2003, by founders Matt Mullenweg and Mike Little as a fork of b2/cafelog. As of April 2013, version 3.5 had been downloaded over 18 million times. WordPress Official Website wordpress.org WordPress China Website cn.wordpress.org 1. Install WampServer, the integrated Windows Apache MySQL PHP environment. 2. Install WordPress locally Unzip the WordPress installation files into wamp's www directory. 3. Create a database. Open the WampServer menu and choose phpMyAdmin; a web browser will open, choose Databases and create a database. Solution if a WampServer 403 error occurs (1) Apache part: Path: /wamp/bin/apache/Apache2.2.21/conf/httpd.conf Within the <Directory> segments, modify the deny-all conditions to allow from all. For example <Directory "c:/wamp/apps/phpmyadmin3.3.9/"> Options Indexes FollowSymLinks MultiViews AllowOverride all Order Deny,Allow Deny from all #Allow from 127.0.0.1 Allow from all (2) phpMyAdmin part: Path: /wamp/alias/phpmyadmin.conf Similarly, within the <Directory> segment modify the deny-all conditions to allow from all. Finally restart all services. 4.
Local installation of WordPress In the browser, enter localhost, click Create a Configuration File → Start now → fill in the information for the database you created: Database Name - the name of the database created with phpMyAdmin; User Name - usually root; Password - empty by default; Database Host - localhost by default; Table Prefix - wp_ by default → Submit → fill in the WordPress title, username, password and email address → Next ... → Log in → complete. Solutions when you can't access localhost If you can't access localhost, or the WampServer system tray icon shows orange instead of green, the reason may be that system port 80 is occupied by another program. Solution Press the Windows + R keys to open Run, run cmd, and input netstat -ano; you can see the PID of the program occupying local port 80. Then find the process corresponding to that PID in Task Manager (if Task Manager has no PID column, select View → Select Columns), and end the program or process to free the port it occupies. WampServer local installation of WordPress is finished. ### Microsoft Windows and Office - MSDN Official Version and Activation Tools Microsoft Windows Microsoft Windows (or simply Windows) is a meta-family of graphical operating systems developed, marketed, and sold by Microsoft. It consists of several families of operating systems, each of which caters to a certain sector of the computing industry. Microsoft Office Microsoft Office is an office suite of applications, servers, and services developed by Microsoft. It was first announced by Bill Gates on August 1, 1988, at COMDEX in Las Vegas. Initially a marketing term for a bundled set of applications, the first version of Office contained Microsoft Word, Microsoft Excel, and Microsoft PowerPoint. Over the years, Office applications have grown substantially closer with shared features such as a common spell checker, OLE data integration and the Visual Basic for Applications scripting language.
Microsoft also positions Office as a development platform for line-of-business software under the Office Business Applications brand. Download Microsoft Windows and Microsoft Office from MSDN I Tell You msdn.itellyou.cn MTKV 2.6.4 Last updated: May 5, 2018 magnet:?xt=urn:btih:1FB324E0158C8D23FF42922C92731CE1A8D08BF0 Original Source Microsoft Toolkit - Official KMS Solution for Microsoft Products ### C Programming Language Learning Materials [Chinese Simplified] Humorous and approachable, explained in simple terms from a student's point of view and suitable for beginners, this series can be studied alongside Tan Haoqiang's "C Programming" and is a classic introductory video course for learning the C language. Tutorial downloads 郝斌 – C语言程序设计视频教程 1-10.rar 郝斌 – C语言程序设计视频教程 11-20.rar 郝斌 – C语言程序设计视频教程 21-30.rar 郝斌 – C语言程序设计视频教程 31-39.rar 郝斌 – C语言程序设计视频教程 40-49.rar 郝斌 – C语言程序设计视频教程 50-60.rar 郝斌 – C语言程序设计视频教程 61-70.rar 郝斌 – C语言程序设计视频教程 71-80.rar 郝斌 – C语言程序设计视频教程 81-89.rar 郝斌 – C语言程序设计视频教程 90-100.rar 郝斌 – C语言程序设计视频教程 101-120.rar 郝斌 – C语言程序设计视频教程 121-130.rar 郝斌 – C语言程序设计视频教程 131-150.rar 郝斌 – C语言程序设计视频教程 151-180.rar 郝斌 – C语言程序设计视频教程 补充课程.zip 《C语言程序设计大纲》.pdf 郝斌 – C语言程序设计视频教程 [源代码].zip Microsoft Visual C++ 6.0 Chinese edition ( compatible with Windows 8 ) Microsoft Visual C++ 6.0 English edition ( compatible with Windows 8 ) 郝斌 – 数据结构视频教程 1-60.rar 郝斌 – 数据结构视频教程 61-78.rar 《数据结构(C语言版)》[编著]严蔚敏 吴伟民(扫描版).pdf ### Avast! – Genuine Excellent Free Anti-Virus Software Avast! is an antivirus program, available both as freeware and in a paid version, with a user interface available in 41 languages, for Microsoft Windows, OS X and Linux users. The name Avast is an acronym of "Anti-Virus – Advanced Set". The official and current logo of Avast! is a white orb with the letter 'a' on it and an orange circle around it, sticking out in four directions. Its developer, AVAST Software a.s. (formerly known as ALWIL Software a.s.), is headquartered in Prague, Czech Republic, with offices in Linz, Austria; Friedrichshafen, Germany; and San Mateo, California. It has been awarded the VB100 Award by Virus Bulletin multiple times for 100% detection of "in-the-wild" viruses, and also won the Secure Computing Readers' Trust Award.
The central scanning engine has been certified by ICSA Labs and by West Coast Labs' Checkmark process. Avast! competes in the antivirus industry against Avira, AVG Technologies, Bitdefender, F-Secure, Frisk, Kaspersky, McAfee, Symantec, and Trend Micro, among others.

Avast! Official Website www.avast.com

### LibreOffice – Free · Cross-platform · Open-source Office Software

LibreOffice is a free and open-source office suite, developed by The Document Foundation. It is descended from OpenOffice.org, from which it was forked in 2010. The LibreOffice suite comprises programs for word processing, spreadsheets, slideshows, diagrams, databases, and math formulae. It is designed to be compatible with other major office suites, including Microsoft Office, though some Microsoft Office layout features and formatting attributes are handled differently or are unsupported. LibreOffice is available in over 30 languages and for a variety of computing platforms, including Microsoft Windows, Mac OS X 10.4 Tiger or newer, and X11-running Linux with GTK 2 version 2.10.4 or newer. It is the default office suite of popular Linux distributions such as Debian, Fedora, openSUSE, and Ubuntu. Ports for FreeBSD, NetBSD, and OpenBSD are maintained by contributors to those projects. Between January 2011 (its first stable release) and October 2011, LibreOffice was downloaded approximately 7.5 million times.
LibreOffice Official Website www.libreoffice.org LibreOffice Chinese Website zh-cn.libreoffice.org

### Site Construction [Chinese Simplified]

Front-end development

1. Overview

With the spread of the Web 2.0 concept and the advocacy of the W3C, the influence of website refactoring has been growing at an astonishing speed. XHTML + CSS layout, DHTML, and Ajax have swept in like a whirlwind, and IT companies of every size, including Sina, Sohu, NetEase, Tencent, and Taobao, have refactored their websites.

Reasons for refactoring a website: Optimization - after refactoring to the W3C standards, front-end code is better organized, site performance and access speed improve noticeably, maintainability improves, and the site becomes friendlier to search engines; Speed - a refactored site offers a better user experience, and pages re-laid out with XHTML + CSS are smaller and download faster; Purpose - the essence of refactoring is to build a flexible front-end MVC framework, with HTML as the information model (Model), CSS controlling style (View), and JavaScript dispatching data and implementing presentation logic (Controller). At the same time, the code must be reusable and maintainable, which is the basis of efficient, high-quality, and collaborative development.

DHTML makes user interactions flashier and more eye-catching; Ajax enables data exchange without page refreshes, making interactions smoother. To an ordinary user there is no obvious difference between a server side built on the powerful J2EE + Oracle combination and one built on the simple ASP + Access combination; it is the front-end experience that leaves a direct impression. Web front-end development involves three elements: HTML, CSS, and JavaScript, but with the popularity of RIA, Flash/Flex, Silverlight, XML, and server-side languages should be mastered as well.

2. Summary

Technologies and languages used in front-end development: basic web front-end technologies include CSS, HTML, DOM, Ajax, etc.; while mastering them, you must also clearly understand their compatibility across browsers, their rendering principles, and their known bugs. JavaScript, jQuery, ExtJS, and other page-effect frameworks are the core techniques for building page effects; once you are proficient in JavaScript, back-end languages such as .NET, PHP, and Ruby are easy to master, because the principles are the same. HTML5 and Sencha are front-end technologies for mobile devices. Auxiliary development tools include IE Tester for handling compatibility problems in the IE browser family, and Firebug, Flash Firebug, JavaScript Debugger, etc. for debugging in Firefox; Dreamweaver, for building web pages; using and modifying open-source code such as ThinkPHP and the Zend framework; using and modifying template systems such as WordPress; simple use of image editors such as Adobe Photoshop and Illustrator; Flash, for dynamic Flash albums and ActionScript 3 programming; optimization techniques, including site performance optimization, SEO, and server-side fundamentals; and theoretical knowledge, which beyond the technical level covers code maintainability, component usability, layered semantic templates, graded browser support, and so on.

3. Characteristics of the main front-end languages

HTML: Simplicity - HTML versions are upgraded as supersets, which keeps the language flexible and convenient; Extensibility - HTML's wide use brings demands for stronger features and more identifiers, which HTML meets through sub-elements, guaranteeing system extension; Platform independence - although the PC dominates, plenty of people use Macs and other machines, and HTML can be used on a wide range of platforms, another reason for its popularity.

CSS: it simplifies page format design and improves page maintainability; it strengthens the expressive power of pages, since CSS style properties offer more formatting features than HTML - for example, CSS can remove the underline from hyperlinks in a page, or add shadow and rollover effects to text; and it improves the consistency of a site's formatting, since defining styles in a stylesheet file and applying that file across multiple pages guarantees a uniform format, and the stylesheet can be updated at any time to automatically update the formatting of every page, greatly reducing site development and maintenance work.

JavaScript: JavaScript is an object-based (Object), event-driven (Event Driven) scripting language with security features. Its purpose is to work together with HTML and Java applets to link multiple objects within a web page and interact with the web user, enabling client-side applications; it is embedded in, or loaded into, standard HTML. It makes up for HTML's shortcomings and is a compromise between Java and HTML, with the following basic characteristics: it is a scripting language - JavaScript programs are written as small program segments, and like other scripting languages it is interpreted, providing an easy development process; its basic structure closely resembles C, C++, VB, and Delphi, but unlike those languages it needs no prior compilation and is interpreted line by line as the program runs, combined with HTML tags for convenient use. It is object-based - and can also be regarded as object-oriented, meaning it can make use of objects it has created, so much of its functionality comes from the interaction between scripts and the methods of objects in the scripting environment. Simplicity - it is a simple, compact design based on Java-style basic statements and control flow, making it a very good stepping stone toward learning Java, and its variables are weakly typed rather than strictly typed. Security - JavaScript is not allowed to access the local disk, cannot store data on the server, and may not modify or delete network documents; it can only browse information or interact dynamically through the browser, effectively preventing data loss. Dynamism - JavaScript can respond directly to user or client input without going through a web server, using an event-driven model: an action performed in a page (Home Page), such as pressing the mouse, moving a window, or selecting a menu, is called an "event" (Event), and when an event occurs it may trigger a corresponding event response. Cross-platform operation - JavaScript depends only on the browser itself, independent of the operating environment, so any computer that can run a JavaScript-capable browser executes it correctly, realizing the dream of "write once, run anywhere". JavaScript's most outstanding quality is that very small programs can do a great deal: no high-performance computer is needed, the software amounts to a text editor and a browser, and everything can be done on your own computer without a web-server channel.

In summary, JavaScript is a descriptive language that can be embedded in HTML files. It can respond to user events (such as form input) without any network round trip, so when a user enters data, it can be processed directly by the client-side (Client) application instead of being passed to the server (Server) and back.

Back-end development

Back-end languages: ASP, ASP.NET, JSP, Java, PHP, .NET, Ruby, Python, etc.

ASP - Advantages: no compilation required; easy to produce; browser-independent; object-oriented; compatible with any ActiveX Scripting language; source code is not exposed. Disadvantages: all of Windows' own problems are inherited unchanged, so security, stability, and cross-platform capability all suffer from the binding to NT; ASP becomes very powerful through its use of COM components, but because of the original design problems of Windows NT this power raises a great many security issues, and one careless component or operation can let an external attacker gain very high privileges and crash the site or destroy data; because ASP is still a scripting language, there is no way to raise its efficiency other than heavy use of components, and it must face the time cost of on-the-fly compilation while the state of the components behind it remains unknown; it cannot run across operating systems - Microsoft argues that only this lets ASP perform at its best, but arguably it is precisely Windows that limits ASP, whose concept was designed for systems running in large, diverse environments; and some enterprise-level features, such as full clustering and load balancing, are still unavailable.

ASP.NET - Advantages: clean design and implementation; a flexible language with support for complex object-oriented features; a good development environment. Disadvantage: database connections are complex.

JSP - Advantages: write once, run anywhere; support for multiple platforms; strong scalability; diverse and powerful development-tool support. Disadvantages: as with ASP, some of Java's strengths are exactly where its fatal problems lie; systematic documentation is lacking; and it is very slow.

PHP - Characteristics and technical advantages: Cross-platform - PHP runs on Linux, FreeBSD, OpenBSD, Solaris, Unix, AIX, Darwin, Windows, and many other operating systems. Database support - PHP supports many mainstream and niche databases, such as MySQL, SQL Server, Oracle, and PostgreSQL. Security - PHP 4 implements complete encryption support through the full mcrypt library, and PHP 4.0 supports hash functions; Blowfish, TripleDES, MD5, and SHA1 are among the usable encryption algorithms; running PHP on the Linux, FreeBSD, OpenBSD, or Solaris/Unix operating systems requires no antivirus software or patches and is safe and reliable. Extensibility - PHP 4.0 provides an extended API for interface modules that is significantly faster than the old API version, and most of PHP's existing and most commonly used module interfaces have been converted to it; in addition, PHP is open-source software whose code is fully public, and any programmer can easily add features to PHP, giving it excellent room for growth and extension. Fast execution - PHP is a powerful CGI scripting language whose syntax mixes C, Java, Perl, and PHP's own new syntax, and it executes web pages faster than CGI, Perl, and ASP, its first outstanding characteristic; as for memory, PHP 4.0 uses memory more effectively and consumes less, thanks mainly to its implementation of reference counting. Portability - back-end CGI programs written in PHP can easily be ported to different operating systems; for example, a site built on Linux can quickly be moved to a Sun workstation when the system load becomes too high, without recompiling the CGI programs; facing the rapidly growing Internet, this is the best choice for long-term planning. Functional completeness - PHP covers graphics processing, encoding and decoding, compressed-file handling, XML parsing, HTTP authentication, cookies, POP3, SNMP, and more, and can connect to most databases, including Oracle, Microsoft Access, and MySQL; from object-oriented design, structured features, database handling, and network interfaces to secure coding mechanisms, PHP covers almost everything a website needs. Scalability - traditionally, web interactivity was achieved through CGI, whose scalability is not ideal because a separate process is opened for every running CGI program; the solution is to compile the interpreter of the language frequently used for CGI programs into the web server (as with mod_perl or JSP), and PHP can be installed in this way, so embedded PHP has much better scalability. To bring out PHP's advantages, Linux + Apache + MySQL + PHP is an excellent combination.

Site management tools

Relationship: website front end → database → website back end. The back end updates the front end.

Management back ends: by application focus, back-end systems can be divided into site back-end management systems that emphasize back-office management, those that emphasize style design, and those that emphasize front-end publishing. The developers of such systems start from the goal of giving users unfamiliar with websites an intuitive way of working, and of letting users of every web programming language develop personalized sites in a simple way - so that different users, knowing only a little HTML, can quickly build a professional corporate website that is both individually styled and highly functional.

PHP site back-end management systems: Joomla!, Drupal, Mambo, DEDECMS, phpcms, Empire CMS, php168, HBcms, SupSite, WordPress, etc.

Databases

Oracle runs on all major platforms (including Windows) and fully supports all industry standards, so customers can use a great many third-party applications, tools, gateways, and administration utilities. Oracle adopts an open strategy, letting customers choose the solution best suited to their particular needs. SQL Server runs only on Windows; the goal of Microsoft's proprietary strategy is to lock customers into the Windows environment. Because SQL Server is tightly bound to the Windows platform, its openness and reliability can improve further only as the reliability, openness, and performance of Windows improve.

Advantages: Oracle's stability is better than SQL Server's; Oracle's data-loading tool sqlload.exe is more powerful than SQL Server's bcp and can import text-file data according to conditions; Oracle's security mechanism is better than SQL Server's; and Oracle is more stable when handling large volumes of data. SQL Server is stronger at data export and somewhat faster in processing speed, which is related to the two products' protocols. Disadvantage: Oracle is expensive.

MySQL's characteristics: Openness - the language it uses, SQL, is based on ANSI SQL2, and the database engine runs on many platforms, including OS X, Linux, FreeBSD, Solaris, and Windows; if no binary is available for your platform, you can compile the source code on it yourself. Application support - MySQL provides an API for almost every programming language. Cross-database joins - tables from different databases can be joined to build a MySQL query. Outer-join support - MySQL supports ANSI and ODBC left and right outer joins. And MySQL is cheap and fast.

### Ubuntu Remote Connect Troubleshooting

If remote-management software (such as SecureCRT, PuTTY, etc.) can't connect to an Ubuntu virtual machine, you can try the following solutions.

1. First confirm that you can ping the remote Ubuntu machine or virtual machine;
2. If you can ping it but can't log in, the most common reason is that the SSH server service is not actually running. Ubuntu comes with the OpenSSH client already installed; if the server is not installed, install it with the following command: $ sudo apt-get install openssh-server
3. Confirm whether the SSH server service has started: $ ps -e | grep ssh If only ssh-agent appears and not sshd, the server has not yet started; run the following command to start it: $ sudo /etc/init.d/ssh start
4. The SSH server configuration file is located at /etc/ssh/sshd_config. There you can define the SSH service port (the default is 22); after defining a different port number, such as 222, restart the SSH service: $ sudo /etc/init.d/ssh restart

### Windows 8 built-in administrator account cannot be used to open Metro program solutions

Open the Run app by pressing the Windows + R keys and enter gpedit.msc to start the Group Policy Editor. Expand: Computer Configuration → Windows Settings → Security Settings → Local Policies → Security Options. In the right pane, find the policy User Account Control: Only elevate executables that are signed and validated, double-click to open its properties, set it to Enabled, and click OK to exit. Log out and log in again, and the problem of the Windows 8 built-in administrator account being unable to open Metro programs is solved.

### Senior High

Time flies: more than half a year of college life has already passed, but it is my three years of senior high school that I miss.
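As a small sketch of the SSH-port step in the Ubuntu troubleshooting section above: the configured port can be read out of an sshd_config-style file, falling back to the default of 22 when no Port directive is present. The helper name `ssh_port` is my own illustration, not part of OpenSSH.

```shell
# Hypothetical helper (illustration only): print the Port directive from an
# sshd_config-style file given as $1, defaulting to 22 when absent.
ssh_port() {
  awk '$1 == "Port" { print $2; found = 1; exit }
       END { if (!found) print 22 }' "$1"
}

# Usage:
#   ssh_port /etc/ssh/sshd_config
```

After changing the port this way (e.g. to 222), remember to restart the SSH service as shown in the section above.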
This is the web work I made for participating in the National Primary and Secondary Computer Production Activities of China in my second year of senior high. [Chinese Simplified]

### Wiener Philharmoniker Vienna New Year's Concert 2013

It has long been a Philharmonic tradition at the New Year to present a program consisting of the lively and at the same time nostalgic music from the vast repertoire of the Johann Strauss family and its contemporaries. These concerts not only delight the audiences in the Musikverein in Vienna, but also enjoy great international popularity through the worldwide television broadcast, which now reaches over 80 countries. Originating during a dark period of Austria's history, these concerts were initially conceived for a local audience as a reminder of better times and a source of hope for the future. Today millions of people throughout the world are similarly encouraged by the light-hearted yet subtly profound character of this music, and draw joy and optimism for the New Year ahead. It is the desire of the Philharmonic not only to provide musically definitive interpretations of the masterworks of this genre, but at the same time, as musical ambassadors of Austria, to send people all over the world a New Year's greeting in the spirit of hope, friendship and peace.

Conductor: Franz Welser-Möst

Program Josef Strauss: The Soubrette, Fast Polka, op. 109 Johann Strauss, Jr.: Kiss Waltz, op. 400 Josef Strauss: Theater Quadrille, op. 213 Johann Strauss, Jr.: From the Mountains, Waltz, op. 292 Franz von Suppé: Overture to the Operetta "Light Cavalry" Intermission Josef Strauss: Music of the Spheres, Waltz, op. 235 Josef Strauss: The Spinstress, Polka française, op. 192 Richard Wagner: Prelude to Act III of the Romantic Opera "Lohengrin", WWV 75 Joseph Hellmesberger, Jr.: In Confidence, Polka mazur, op. 15 Josef Strauss: Hesperus' Paths, Waltz, op. 279 Josef Strauss: The Runners, Fast Polka, op. 237 Joseph Lanner: Styrian Dances, op.
165 Johann Strauss, Jr.: Melodies Quadrille, op. 112 Giuseppe Verdi: Prestissimo from the Ballet Music in Act III of the Opera "Don Carlo" Johann Strauss, Jr.: Where the Lemon Trees Bloom, Waltz, op. 364 Johann Strauss, Sr.: Memories of Ernst or The Carnival of Venice, Fantasy, op. 126

Download Franz Welser Moest-Wiener Philharmoniker-Vienna New Year's Concert 2013.TS[720P]

### Install Windows on PCs with Built-in Product Keys

Many PCs on the market today come preinstalled with the Chinese edition of Microsoft Windows 8. If you want to upgrade to a higher edition of Windows 8, you can enter a new product key to add the upgraded functionality. But if you want to reinstall or upgrade from the installation CD or a CD image, you may get a product-key error message at the key-entry step of the installation; a few days ago, while upgrading the system for a friend, I ran into exactly this situation.

Analyzing the reasons: on computers preinstalled with the Chinese edition of Windows 8, the product installation key has been written into the computer's BIOS, so when you upgrade from the installation CD or installation image, the installer reads the installation key from the BIOS. That key does not apply to higher editions of Windows 8, hence the installation-key error message. Once you understand this principle, you can solve the problem by modifying the installation ISO image.

Solution: use Notepad to create an EI.CFG file (note the file extension is CFG, not TXT) with the following content:

[EditionID]
Professional
[Channel]
Volume
[VL]
1

Save the file and close Notepad, then place the EI.CFG file into the sources folder of the Microsoft Windows installation CD directory and repackage the ISO; the installation will then skip the product-key entry step.

### Blogroll

Ri Xu Jun 23, 2017 freemindworld.com wgliang.github.io lambdae.github.io

### Blog Statement

This blog is owned by an individual and has no commercial purpose.
It complies with the laws and regulations of the PRC and with the basic moral ethics of the Chinese nation and of the network. It respects freedom of expression and freedom of thought, but opposes radical confrontation, destruction, and irresponsible entertainment, advertising, or speculative words and deeds.

All reproduced articles and pictures are meant for illustrative purposes only. Where required or appropriate, the source will be identified and credited. Works that clearly state "do not reproduce" are not reproduced here; if a work of yours has been reproduced improperly, please notify me promptly. For copyright-free works, this blog reserves the right to revise and disseminate them; once substantive changes are involved, the revised version enjoys the corresponding copyright. When reproducing such works, please also confirm the original authors' rights with them.

All original works on this blog, including text, data, pictures, and page formats, belong to the author; when reproducing them, please indicate the source. With an ECA permit they may be used for profitable purposes.

The intellectual property of this blog is protected by Chinese laws and regulations, which safeguard knowledge sharing; any conduct aimed at destruction or personal gain is opposed. This statement is issued in English; there are no other language versions.

Site content is licensed under a Creative Commons Attribution 4.0 International License, an open license. Ri Xu Apr 29, 2014

### Privacy Policy

Your privacy is very important to me; there are a few fundamental principles that I follow on this website: I don't ask you for personal information unless I truly need it. (I can't stand services that ask you for things like your gender or income level for no apparent reason.) I don't share your personal information with anyone except to comply with the law, develop my products, or protect my rights. I don't store personal information on my servers unless required for the on-going operation of my site.
Website Visitors

Like most website operators, this website collects non-personally-identifying information of the sort that web browsers and servers typically make available, such as the browser type, language preference, referring site, and the date and time of each visitor request. This website's purpose in collecting non-personally-identifying information is to better understand how its visitors use the website. From time to time, this website may release non-personally-identifying information in the aggregate, e.g., by publishing a report on trends in the usage of the website.

This website also collects potentially personally-identifying information like Internet Protocol (IP) addresses for logged-in users and for users leaving comments on my blogs. This website only discloses logged-in user and commenter IP addresses under the same circumstances that it uses and discloses personally-identifying information as described below, except that blog commenter IP addresses are visible and disclosed to the administrators of the blog where the comment was left.

Gathering of Personally-Identifying Information

Certain visitors to this website choose to interact with it in ways that require it to gather personally-identifying information. The amount and type of information gathered depend on the nature of the interaction. For example, visitors who comment on my blog are asked to provide a username and email address, and from those who wish to receive updates from this website via email, email addresses are collected. In each case, this website collects such information only insofar as is necessary or appropriate to fulfill the purpose of the visitor's interaction with this website. This website does not disclose personally-identifying information other than as described below.
Visitors can always refuse to supply personally-identifying information, with the caveat that it may prevent them from engaging in certain website-related activities.

Aggregated Statistics

This website may collect statistics about the behavior of its visitors. For instance, it may monitor the most popular pages on the xuri.me site or analyze visitor data with the Google Analytics service. This website may display this information publicly or provide it to others. However, this website does not disclose personally-identifying information other than as described below.

Cookies

A cookie is a string of information that a website stores on a visitor's computer, and that the visitor's browser provides to the website each time the visitor returns. This site uses cookies to help identify and track visitors, their usage of this website, and their website access preferences. Visitors who do not wish to have cookies placed on their computers should set their browsers to refuse cookies before using this website, with the drawback that certain features of this website may not function properly without the aid of cookies.

Comments

Comments and other content submitted to the Disqus anti-spam service are not saved on my servers unless they were marked as false positives, in which case they are stored long enough to use them to improve the service and avoid future false positives.

Privacy Policy Changes

Although most changes are likely to be minor, this site may change its Privacy Policy from time to time, at its sole discretion. This site encourages visitors to frequently check this page for any changes to its Privacy Policy. Your continued use of this site after any change in this Privacy Policy will constitute your acceptance of such change. Ri Xu Feb 1, 2013

### About

Hello world, this is Ri Xu (续日), currently working as a software engineer at Microsoft.
I enjoy actively contributing to open source projects, and I'm the primary developer of excelize, aurora, TOML-to-Go, xgen, etc. In my spare time, I like listening to classical music, surfing the internet, playing the clarinet, and taking a leisurely walk outdoors. Thanks for coming here.

E-mail: xuri.megmail.com 0xba5e5bb1 | 7F19 AF57 1FE0 1935 98BC 46AD BA5E 5BB1 C948 EDF7
Installation MAMP ( Apache+MySQL+PHP ) on a Mac with OS X 10.8 + ### Installation MAMP ( Apache+MySQL+PHP ) on a Mac with OS X 10.8 + ### Installation MAMP ( Apache+MySQL+PHP ) on a Mac with OS X 10.8 + ### Installation MAMP ( Apache+MySQL+PHP ) on a Mac with OS X 10.8 + ### Installation MAMP ( Apache+MySQL+PHP ) on a Mac with OS X 10.8 + ### Installation MAMP ( Apache+MySQL+PHP ) on a Mac with OS X 10.8 + ### Disabling Windows Server 2003 Ctrl+ALT+DEL Logon Prompt ### Disabling Windows Server 2003 Ctrl+ALT+DEL Logon Prompt ### Disabling Windows Server 2003 Ctrl+ALT+DEL Logon Prompt ### Solved Nginx reverse proxy solution caused WordPress dashboard articles list pagination button link error ### Configure IIS to Support .htaccess ### Configure IIS to Support .htaccess ### Install PHP on IIS 6 for Windows Server 2003 ### Install PHP on IIS 6 for Windows Server 2003 ### Install PHP on IIS 6 for Windows Server 2003 ### Install PHP on IIS 6 for Windows Server 2003 ### Install PHP on IIS 6 for Windows Server 2003 ### Install PHP on IIS 6 for Windows Server 2003 ### Install PHP on IIS 6 for Windows Server 2003 ### Install PHP on IIS 6 for Windows Server 2003 ### install-php-on-iis-6-for-windows-server-4 ### Install PHP on IIS 6 for Windows Server 2003 ### Install PHP on IIS 6 for Windows Server 2003 ### Install PHP on IIS 6 for Windows Server 2003 ### Install Hyper-V Server 2012 in VMware Workstation ### Install Hyper-V Server 2012 in VMware Workstation ### Install Hyper-V Server 2012 in VMware Workstation ### Install Hyper-V Server 2012 in VMware Workstation ### Install Hyper-V Server 2012 in VMware Workstation ### Windows 8 built-in administrator account cannot be used to open Metro program solutions ### Ubuntu Install openssh-server ### Disable the Shutdown Event Tracker in Windows Server 2003 R2 ### Disable the Shutdown Event Tracker in Windows Server 2003 R2 ### Blackmagic Disk Speed Test ### SSD Information ### Macbook Pro SSD Upgrade ### Macbook Pro SSD Upgrade ### 
Macbook Pro SSD Upgrade ### Macbook Pro SSD Upgrade ### Macbook Pro SSD Upgrade ### Macbook Pro SSD Upgrade ### Macbook Pro SSD Upgrade ### Leicke Adapter Converter for MacBook/MacBook Pro/MacBook Air ### Samsung 840 EVO 120GB SSD ### Fix OS X Launchpad Pink Icon ### Make a DSDT and SSDT ### Make a DSDT and SSDT ### Make a DSDT and SSDT ### Make a DSDT and SSDT ### Install OS X Mavericks 10.9 on AMD PC ### DELL Inspiron 546s Desktop ### Img2icns - Create Icons for OS X ### OS X Get File Checksum ### OS X Get File Checksum ### macbook-pro-mc374-2 ### MacBook Pro MC374 LL/A ### photoshop-mac-7 ### photoshop-mac-6 ### Crack Adobe PhotoShop CC for Mac ### Crack Adobe PhotoShop CC for Mac ### Crack Adobe PhotoShop CC for Mac ### Crack Adobe PhotoShop CC for Mac ### Crack Adobe PhotoShop CC for Mac ### Install OS X Mavericks 10.9 on DELL Inspiron 5520 ### Install OS X Mavericks 10.9 on DELL Inspiron 5520 ### Install OS X Mavericks 10.9 on DELL Inspiron 5520 ### Install OS X Mavericks 10.9 on DELL Inspiron 5520 ### Install OS X Mavericks 10.9 on DELL Inspiron 5520 ### Install OS X Mavericks 10.9 on DELL Inspiron 5520 ### Install OS X Mavericks 10.9 on DELL Inspiron 5520 ### Install OS X Mavericks 10.9 on DELL Inspiron 5520 ### Install OS X Mavericks 10.9 on DELL Inspiron 5520 ### Install OS X Mavericks 10.9 on DELL Inspiron 5520 ### Install OS X Mavericks 10.9 on DELL Inspiron 5520 ### Install OS X Mavericks 10.9 on DELL Inspiron 5520 ### DELL Inspiron 5520 Laptop ### Install OS X Mavericks in VMware Virtual Machine ### install-os-x-mavericks-in-vmware-virtual-machine-5 ### install-os-x-mavericks-in-vmware-virtual-machine-4 ### Install OS X Mavericks in VMware Virtual Machine ### Install OS X Mavericks in VMware Virtual Machine ### Install OS X Mavericks in VMware Virtual Machine ### Use Solarized Color Scheme in Mac OS X Terminal ### Solarized Palette ### OS X to read and write NTFS Partition ### Installing GoAgent on OS X ### MacBook Air MB543CH/A ### MacBook Air 
MB543/A ### MacBook Air MB543/A ### MacBook Air MB543/A ### Install OS X Mountain Lion in VMware Virtual Machine ### install-os-x-mountain-lion-in-vmware-vitrual-machine-14 ### Install OS X Mountain Lion in VMware Virtual Machine ### install-os-x-mountain-lion-in-vmware-vitrual-machine-12 ### install-os-x-mountain-lion-in-vmware-vitrual-machine-11 ### install-os-x-mountain-lion-in-vmware-vitrual-machine-10 ### Install OS X Mountain Lion in VMware Virtual Machine ### Install OS X Mountain Lion in VMware Virtual Machine ### Install OS X Mountain Lion in VMware Virtual Machine ### Install OS X Mountain Lion in VMware Virtual Machine ### Install OS X Mountain Lion in VMware Virtual Machine ### Install OS X Mountain Lion in VMware Virtual Machine ### Install OS X Mountain Lion in VMware Virtual Machine ### Install OS X Mountain Lion in VMware Virtual Machine ### Install OS X Mountain Lion Apple System in VMware Virtual Machine ### OS X PAC Global Automatic Proxy Settings ### centos-13 ### centos-12 ### centos-11 ### centos-10 ### centos-9 ### centos-8 ### centos-7 ### centos-6 ### centos-5 ### centos-4 ### centos-3 ### centos-2 ### Install CentOS 6.4 in Text Mode ### Install Node.js on Windows ### Install Node.js on Windows ### nodejs-windows-4 ### nodejs-windows-3 ### nodejs-windows-2 ### Install Node.js on Windows ### install-software-raid-10-on-ubuntu-12-04-lts-server-79 ### install-software-raid-10-on-ubuntu-12-04-lts-server-78 ### install-software-raid-10-on-ubuntu-12-04-lts-server-77 ### install-software-raid-10-on-ubuntu-12-04-lts-server-76 ### install-software-raid-10-on-ubuntu-12-04-lts-server-75 ### install-software-raid-10-on-ubuntu-12-04-lts-server-74 ### install-software-raid-10-on-ubuntu-12-04-lts-server-73 ### install-software-raid-10-on-ubuntu-12-04-lts-server-72 ### install-software-raid-10-on-ubuntu-12-04-lts-server-71 ### install-software-raid-10-on-ubuntu-12-04-lts-server-70 ### install-software-raid-10-on-ubuntu-12-04-lts-server-69 ### 
install-software-raid-10-on-ubuntu-12-04-lts-server-68 ### install-software-raid-10-on-ubuntu-12-04-lts-server-67 ### install-software-raid-10-on-ubuntu-12-04-lts-server-66 ### install-software-raid-10-on-ubuntu-12-04-lts-server-65 ### install-software-raid-10-on-ubuntu-12-04-lts-server-64 ### install-software-raid-10-on-ubuntu-12-04-lts-server-63 ### install-software-raid-10-on-ubuntu-12-04-lts-server-62 ### install-software-raid-10-on-ubuntu-12-04-lts-server-61 ### install-software-raid-10-on-ubuntu-12-04-lts-server-60 ### install-software-raid-10-on-ubuntu-12-04-lts-server-59 ### install-software-raid-10-on-ubuntu-12-04-lts-server-58 ### install-software-raid-10-on-ubuntu-12-04-lts-server-57 ### install-software-raid-10-on-ubuntu-12-04-lts-server-56 ### install-software-raid-10-on-ubuntu-12-04-lts-server-55 ### install-software-raid-10-on-ubuntu-12-04-lts-server-54 ### install-software-raid-10-on-ubuntu-12-04-lts-server-53 ### install-software-raid-10-on-ubuntu-12-04-lts-server-52 ### Install Software RAID10 on Ubuntu 12.04 LTS Server ### Install Software RAID10 on Ubuntu 12.04 LTS Server ### install-software-raid-10-on-ubuntu-12-04-lts-server-43 ### install-software-raid-10-on-ubuntu-12-04-lts-server-42 ### install-software-raid-10-on-ubuntu-12-04-lts-server-41-47 ### install-software-raid-10-on-ubuntu-12-04-lts-server-40-46 ### install-software-raid-10-on-ubuntu-12-04-lts-server-39-45 ### install-software-raid-10-on-ubuntu-12-04-lts-server-38-44-50 ### install-software-raid-10-on-ubuntu-12-04-lts-server-37 ### install-software-raid-10-on-ubuntu-12-04-lts-server-36 ### Install Software RAID10 on Ubuntu 12.04 LTS Server ### install-software-raid-10-on-ubuntu-12-04-lts-server-33 ### install-software-raid-10-on-ubuntu-12-04-lts-server-32 ### install-software-raid-10-on-ubuntu-12-04-lts-server-31 ### install-software-raid-10-on-ubuntu-12-04-lts-server-30 ### install-software-raid-10-on-ubuntu-12-04-lts-server-29 ### 
install-software-raid-10-on-ubuntu-12-04-lts-server-28 ### install-software-raid-10-on-ubuntu-12-04-lts-server-27-34 ### install-software-raid-10-on-ubuntu-12-04-lts-server-26 ### install-software-raid-10-on-ubuntu-12-04-lts-server-25 ### install-software-raid-10-on-ubuntu-12-04-lts-server-24 ### install-software-raid-10-on-ubuntu-12-04-lts-server-23 ### install-software-raid-10-on-ubuntu-12-04-lts-server-22 ### install-software-raid-10-on-ubuntu-12-04-lts-server-21 ### install-software-raid-10-on-ubuntu-12-04-lts-server-20 ### install-software-raid-10-on-ubuntu-12-04-lts-server-19 ### install-software-raid-10-on-ubuntu-12-04-lts-server-18 ### install-software-raid-10-on-ubuntu-12-04-lts-server-17 ### install-software-raid-10-on-ubuntu-12-04-lts-server-16 ### install-software-raid-10-on-ubuntu-12-04-lts-server-15 ### install-software-raid-10-on-ubuntu-12-04-lts-server-14 ### install-software-raid-10-on-ubuntu-12-04-lts-server-13 ### install-software-raid-10-on-ubuntu-12-04-lts-server-12 ### install-software-raid-10-on-ubuntu-12-04-lts-server-11 ### Install Software RAID10 on Ubuntu 12.04 LTS Server ### install-software-raid-10-on-ubuntu-12-04-lts-server-9 ### install-software-raid-10-on-ubuntu-12-04-lts-server-8 ### install-software-raid-10-on-ubuntu-12-04-lts-server-7 ### install-software-raid-10-on-ubuntu-12-04-lts-server-6 ### install-software-raid-10-on-ubuntu-12-04-lts-server-5 ### install-software-raid-10-on-ubuntu-12-04-lts-server-4 ### install-software-raid-10-on-ubuntu-12-04-lts-server-3 ### install-software-raid-10-on-ubuntu-12-04-lts-server-2 ### Install Software RAID10 on Ubuntu 12.04 LTS Server ### RAID Troubleshooting ### RAID Troubleshooting ### RAID Troubleshooting ### RAID Troubleshooting ### RAID Troubleshooting ### CSS: The Definitive Guide ### CSS Mastery: Advanced Web Standards Solutions ### Beginning CSS Web Development From Novice to Professional ### HTML, XHTML, and CSS ### 浪潮之巅 ### Learning the vi and Vim Editors ### Cacti 0.8 Network 
Monitoring ### Ubuntu Server 最佳方案 ### Ubuntu权威指南 ### 鸟哥的Linux私房菜 基础学习篇 ### When Search Meets Web Usability ### OS X Mountain Lion 高手进阶 ### PHP Objects, Patterns and Practice ### Pro PHP Application Performance: Tuning PHP Web Projects for Maximum Performance ### Pro PHP and jQuery ### MySQL Developer's Library ### Beginning PHP and MySQL From Novice to Professional ### PHP for the Web: Visual QuickStart Guide ### Install Ubuntu 12.04 and Windows 8 dual system ### Install Ubuntu 12.04 and Windows 8 dual system ### Install Ubuntu 12.04 and Windows 8 dual system ### Install Ubuntu 12.04 and Windows 8 dual system ### Install Ubuntu 12.04 and Windows 8 dual system ### Install Ubuntu 12.04 and Windows 8 dual system ### Install Ubuntu 12.04 and Windows 8 dual system ### Install Ubuntu 12.04 and Windows 8 dual system ### Install Ubuntu 12.04 and Windows 8 dual system ### Install Ubuntu 12.04 and Windows 8 dual system ### Install Ubuntu 12.04 and Windows 8 dual system ### Install Ubuntu 12.04 and Windows 8 dual system ### Install Ubuntu 12.04 and Windows 8 dual system ### Install Ubuntu 12.04 and Windows 8 dual system ### Creating a GitHub Pages Blog With Octopress on Ubuntu ### WAMP environment Zend Framework Configuration ### WAMP environment Zend Framework Configuration ### Installation WNMP ( Nginx, MySQL, PHP ) on Microsoft Windows ### Installation WNMP ( Nginx, MySQL, PHP ) on Microsoft Windows ### Installation WNMP ( Nginx, MySQL, PHP ) on Microsoft Windows ### Configure virtual machines in VMware vSphere boot start automatically ### Configure virtual machines in VMware vSphere boot start automatically ### Configure virtual machines in VMware vSphere boot start automatically ### Install VMware Workstation on Ubuntu ### Install VMware vSphere Client on Windows 8 ### Install VMware vSphere Client on Windows 8 ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 
Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### VMware ESXi 5 Installation and Basic Setup ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP 
( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### Installation WAMP ( Apache, MySQL, PHP ) on Microsoft Windows ### LibreOffice – Free · Cross-platform · Open-source Office Software ### C Programming Language Learning Materials [Chinese Simplified] ### Windows 8.1 · Office 2013 - Microsoft MSDN Official Version and Activation Tools ### Avast! 
– Genuine excellent free anti-virus software ### Crack TuneUp Utilities ### WampServer Local Installation WordPress ### Diskeeper Professional Edition 12 ### VMware ESXi 5 Installation and Basic Setup ### Adobe Creative Cloud ### Creating a GitHub Pages Blog With Octopress on Ubuntu ### Install VMware Workstation on Ubuntu ### Install Node.js on Windows ### IBM eServer xSeries 335/336 Server Create and Delete RAID Disk Array ### Install OS X Mountain Lion in VMware Virtual Machine ### Install Hyper-V Server 2012 in VMware Workstation ### Setup Hosting Git Repositories on Ubuntu Server ### Install OpenVZ on CentOS ### Install MySQL Cluster on Ubuntu 12.04 LTS ### Install Xen on Ubuntu Desktop 12.04 LTS and Using Virtual Manage Create a Virtual Machine ### Install KVM on Ubuntu Server 12.04 LTS and Create a Virtual Machine ### Install OS X Mavericks in VMware Virtual Machine ### High Performance MySQL, 3rd Edition ### 鸟哥的Linux私房菜:服务器架设篇(第3版) ### Img2icns - Create Icons for OS X ### Final Cut Pro X - By Apple ### Motion - By Apple ### Compressor - By Apple ### Logic Pro X - By Apple ### MainStage - By Apple ### Luxurioust ### Laravel E-Commerce with Alipay Dualfun Develope ### ShadowSocks Setup Guide ### ShadowSocks Setup Guide ### ShadowSocks Setup Guide ### Set Up Apache with a Free Signed SSL Certificate on a VPS ### Set Up Apache with a Free Signed SSL Certificate on a VPS ### Set Up Apache with a Free Signed SSL Certificate on a VPS ### Set Up Apache with a Free Signed SSL Certificate on a VPS ### Set Up Apache with a Free Signed SSL Certificate on a VPS ### Set Up Apache with a Free Signed SSL Certificate on a VPS ### SSL ### The Internet's Own Boy: The Story of Aaron Swartz ### VMWare Fusion Pro 7 Core Keygen Crack ### Laravel 4 Real Time Chat ### Configure Git and Github ### Configure Git and Github ### Configure Git and Github ### Configure Git and Github ### Configure Git and Github ### Configure Git and Github ### Configure Git and Github ### Configure Git 
and Github ### Configure Git and Github ### Configure Git and Github ### Configure Git and Github ### Configure Git and Github ### Configure Git and Github ### Configure Git and Github ### Creative Commons ### Automatically Start MySQL Server on OS X Yosemite ### Remove and Rebuild Macintosh Recovery HD Partition ### Fix ERROR InnoDB: The InnoDB memory heap is disabled ### Install Google Play Store on Android OS ### Install Google Play Store on Android OS ### Install Google Play Store on Android OS ### Install Google Play Store on Android OS ### Using Dnsmasq Configure Wildcard DNS Record on Mac ### avoid-vmware-esxi-loading-module-ipmi_si_drv ### Tumblr-icon ### Linkedin-icon ### Pinterest-icon ### Twitter-icon ### Stumbleupon-icon ### Reddit-icon ### Google-plus-icon ### Facebook-icon ### Digg-icon ### Ri Xu