Blog

  • Brave-Fox

    Important

    Yea… this isn’t getting updated anytime soon.


    Brave-Fox Banner

    Brave-Fox

    Brave-Fox is a Firefox Theme that brings Brave’s design elements into Firefox.

    Versions

    There are two versions of Brave-Fox: Overflow & Non-Overflow.

    Overflow vs Non-Overflow

Chromium-based browsers do this thing where every tab gets smaller and smaller as you open new ones, until you open enough tabs that the newest ones stop displaying entirely. Firefox said “nah”, and instead added a scroll wheel to the tab bar.

Adding the Remove Overflow.css file to the Brave-Fox folder will disable Firefox’s tab scrolling and enable Chromium-like tab behaviour.
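Rules of this kind usually work by letting tabs shrink below Firefox’s built-in minimum width. Here is a minimal sketch of what such a rule can look like (a hypothetical example, not the actual contents of Remove Overflow.css):

/* hypothetical sketch - see Remove Overflow.css for the real rules */
.tabbrowser-tab:not([pinned]) {
  min-width: 0 !important;
}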

    Explanation

I highly recommend reading the documentation I spent hours on: it explains every line of code in each file, and some files also include before-and-after pictures to show off the differences.

    Installation

1. Download the pack you’d like to use (including the Non-Overflow version if you want it), leaving out ReadMe.md.
2. Move all files to your profile’s chrome folder.
    3. Add the following code to your userChrome.css & userContent.css files:
    /*--------------------------------------------- Brave Fox --------------------------------------------*/
    @import url("Brave-Fox/Import.css");
    /*----------------------------------------------------------------------------------------------------*/
    1. Save & restart Firefox.

    Extras

    Fluent Reveal Tabs

This adds Chromium’s “flashlight” hover effect on tabs, just like Brave & Chrome have. Be warned, though: this is a JS script and needs a script manager.

    Visit original content creator repository https://github.com/Soft-Bred/Brave-Fox
  • sample-unity3D-nft-metaverse-template

    NFT-Unity3D-Metaverse-Template

Adv. Playground: Gallery – Customizable Unity Template



Advanced Playground showcases a number of zones providing templates and demos for reading, writing/minting, and updating NFT data in the Unity game engine.

    Advanced Playground Mint zone:

Explore an NFT minting shop template made in Unity3D for creating cross-chain web3 games/metaverses, including on Solana, Ethereum, Polygon, and more. In this showcase we generate a custom NFT at runtime according to user input (procedural metadata, a 3D object, and an NFT image), host it on IPFS, and finally mint it from our deployed collection to the connected player’s wallet.


The Advanced Playground gallery template/demo for the NFTPort Unity SDK showcases NFT data fetching, asset downloads, wallet connection, and custom minting of 3D assets to the player’s wallet, with more to be added.

    Gallery

A fully composable and ready-to-use gallery. Gallery frames can be reduced or increased indefinitely and arranged in any shape or formation according to your #metaverse needs.




    Visit original content creator repository https://github.com/nftport/sample-unity3D-nft-metaverse-template
  • roguelike

    Phaser 3 Webpack Project Template

    A Phaser 3 project template with ES6 support via Babel 7 and Webpack 4 that includes hot-reloading for development and production-ready builds.

This has been updated for Phaser version 3.50.0 and above.

    Loading images via JavaScript module import is also supported, although not recommended.
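As a quick illustration, the import pattern looks roughly like this (a sketch assuming an image exists at src/assets/logo.png):

// import the image as a module; webpack bundles the file and returns its URL
import logoImg from './assets/logo.png';

// then inside a Scene's preload():
this.load.image('logo', logoImg);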

    Requirements

    Node.js is required to install dependencies and run scripts via npm.

    Available Commands

Command          Description
npm install      Install project dependencies
npm start        Build project and open web server running project
npm run build    Builds code bundle with production settings (minification, uglification, etc.)

    Writing Code

    After cloning the repo, run npm install from your project directory. Then, you can start the local development server by running npm start.

    After starting the development server with npm start, you can edit any files in the src folder and webpack will automatically recompile and reload your server (available at http://localhost:8080 by default).

    Customizing the Template

    Babel

    You can write modern ES6+ JavaScript and Babel will transpile it to a version of JavaScript that you want your project to support. The targeted browsers are set in the .babelrc file and the default currently targets all browsers with total usage over “0.25%” but excludes IE11 and Opera Mini.

    "browsers": [
     ">0.25%",
     "not ie 11",
     "not op_mini all"
    ]
    

    Webpack

If you want to customize your build, such as adding a new webpack loader or plugin (e.g. for loading CSS or fonts), you can modify the webpack/base.js file for cross-project changes, or you can modify and/or create new configuration files and target them in specific npm tasks inside of package.json.
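For example, a minimal sketch of adding CSS support (assuming the style-loader and css-loader packages are installed) could look like this:

// webpack/base.js (excerpt) - hypothetical addition
module.exports = {
  // ...existing settings...
  module: {
    rules: [
      {
        test: /\.css$/,                      // apply to .css imports
        use: ['style-loader', 'css-loader']  // resolve CSS and inject it into the page
      }
    ]
  }
};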

    Deploying Code

After you run the npm run build command, your code will be built into a single bundle located at dist/bundle.min.js along with any other assets your project depends on.

    If you put the contents of the dist folder in a publicly-accessible location (say something like http://mycoolserver.com), you should be able to open http://mycoolserver.com/index.html and play your game.


    Visit original content creator repository
    https://github.com/Paaaaak/roguelike

  • circuitpython-sync

    CircuitPython Sync


Node module that synchronizes the files on a connected CircuitPython device to a local project folder. It provides a one-way sync from the CircuitPython device to the local project folder. Technically it does a copy rather than a sync, but if I included copy in the name, it would be cpcopy or cp-copy, which looks like a merger of the Linux copy command cp and the DOS copy command copy, and that would be confusing.

    When you work with a CircuitPython device, you generally read and write executable Python files directly from/to the device; there’s even a Python editor called Mu built just for this use case.

More experienced developers often work with a local project folder, then transfer the source code to a connected device, as you do when working with Arduino and other platforms. This module allows you to do both:

    1. Read/write source code from/to a connected Circuit Python device using Mu or other editors (even Visual Studio Code plugins).
2. Automatically copy source files from the device to a local project folder whenever they change on the device.

    Here’s how it works:

    1. Create a local Python project with all the other files you need (like a readme.md file, or a .gitignore).
    2. Connect a CircuitPython device to your computer.
    3. Open a terminal window or command prompt and execute the module specifying the CircuitPython drive path and local project path as command line arguments.
    4. The module copies all of the files from the connected CircuitPython device to the specified project folder.
    5. Open any editor you want and edit the Python source code files (and any other file) on the connected device.
    6. When you save any modified files on the connected CircuitPython device, the module automatically copies the modified file(s) to the project folder.

    See the module in action on YouTube

    Installation

    To install globally, open a command prompt or terminal window and execute the following command:

    npm install -g cpsync

You’ll want to install globally, since CircuitPython projects don’t generally use Node modules (like this one), so a package.json file and node_modules folder would look weird in your project folder.

    Usage

    To start the sync process, in a terminal window execute the following command:

    cpsync <device_path> <sync_path> [-d | --debug] [-i | --ignore]

    Arguments:

    • <device_path> is the drive path for a connected CircuitPython device
    • <sync_path> is the local project folder where you want the module to copy the files from the connected CircuitPython device

Both command arguments are required (indicated by angle brackets < and >). Square brackets ([ and ]) indicate optional parameters.

    Options:

    • -d or --debug enables debug mode which writes additional information to the console as the module executes
    • -i or --ignore instructs the module to ignore the internal files typically found on a CircuitPython device.

    A CircuitPython device hosts several internal use or housekeeping files that you don’t need copied into your local project. When you enable ignore mode (by passing the -i option on the command line), the module ignores the following when synchronizing files from the CircuitPython device to your local project folder:

    const ignoreFiles = [
      'boot_out.txt',
      'BOOTEX.LOG',
      '.DS_Store',
      '.metadata_never_index',
      'System Volume Information',
      'test_results.txt',
      '.Trashes'
    ] as const;
    
    const ignoreFolders = [
      '.fseventsd',
      'System Volume Information',
      '.Trashes'
    ] as const;

    If you find other device-side housekeeping files, let me know and I’ll update the ignore arrays in the module.

    Examples

    If you don’t want to install the module globally, you can execute the module on the fly instead using:

    npx cpsync <device_path> <sync_path>

On Windows, the device appears as a drive with a drive letter assignment. So, assuming it’s drive H (your experience may vary, but that’s how it shows up on my Windows system), start the module with the following command:

    cpsync h: c:\dev\mycoolproject

    Assuming you’ll launch the module from your project folder, use a . for the current folder as shown in the following example:

    cpsync h: .

On macOS, the device mounts as a drive and you can access it via the /Volumes folder. On my system, the device mounts as CIRCUITPY, so start the sync process using:

    cpsync /Volumes/CIRCUITPY .

    On Windows I like to execute the module from the terminal prompt in Visual Studio Code, but keep the terminal available to execute other commands, so I start the module using the following:

    start cpsync <device_path> <sync_path>

    This starts the module in a new/separate terminal window, leaving the Visual Studio terminal available to me to execute additional commands.

    For example, if I execute the following command:

    start cpsync h: . -i

A new window opens, as shown in the following figure.

    Windows Terminal Example

    The CircuitPython device shows up as drive H: and the . tells the module to copy the files to the current folder.

    Every time you change the file contents on the device, the module copies the modified files to the local project folder.

    Getting Help Or Making Changes

    Use GitHub Issues to get help with this module.

Pull Requests gladly accepted, but only with complete documentation of what the change is, why you made it, and why you think it’s important to have in the module.

    If this code helps you: Buy Me A Coffee

    Visit original content creator repository https://github.com/johnwargo/circuitpython-sync
  • zotero-pdf-custom-rename

    Zotero PDF Rename


    This is a Zotero plugin that allows you to rename PDF files in your Zotero library using custom rules.

    Note: This plugin only works on Zotero 7.0 and above.

    Usage

    Select one or more items in your Zotero library and right click to open the context menu. Select Rename PDF attachments from the menu.


Then the PDF files will be renamed according to the custom rules you set in the plugin preferences (not implemented yet).

    Default rules

    This plugin will read the journal name and year from the metadata of the item and rename the PDF file as follows:

    {short journal name}_{year}_{short title}.pdf
    

    For example, the PDF file of the item below will be renamed as TPAMI_2016_Go-ICP.pdf.

    The short title is read from the Short Title field of the item. If the Short Title field is empty, the plugin will use the Title field instead.

    Journal tags

    The short journal name is generated by selecting the first capital letter of each word in the journal name. For example, IEEE Transactions on Pattern Analysis and Machine Intelligence will be converted to TPAMI, while IEEE will be ignored.
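In code terms, the rule behaves roughly like this hypothetical helper (an illustration, not the plugin’s actual implementation):

// sketch of the abbreviation rule described above
function shortJournalName(full: string): string {
  return full
    .split(/\s+/)
    .filter((w) => !/^[A-Z]{2,}$/.test(w)) // drop all-caps acronyms such as IEEE or ACM
    .map((w) => w[0])
    .filter((c) => /[A-Z]/.test(c))        // keep only words that start with a capital letter
    .join("");
}

// shortJournalName("IEEE Transactions on Pattern Analysis and Machine Intelligence") returns "TPAMI"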

However, ACM Transactions on Graphics will be converted to TG rather than TOG in the current version. This is because the word on is ignored in the conversion. A better method is to manually add the short name of the journal to the Tags of the item.

    For example, you can add Jab/#TOG to the Tags of the item, and the plugin will use TOG as the short name of the journal.

    Note: the plugin will first read the Jab/# tag in the Tags as the short name. If there is no Jab/# tag, the plugin will automatically extract the short name from the full name of the journal.

    PS: It is recommended to install the plugin MuiseDestiny/zotero-style for a better experience.

    Xnip2023-06-22_21-14-44

Shortcut

Now, we can use control+D to rename the PDF files. Moreover, we can customize the shortcut in the Preferences of Zotero.

The custom shortcut can be a combination of the modifier keys and another key. The modifier keys can be alt, control, meta, and accel, while the other key can be any key on the keyboard.

    The following table shows the corresponding modifier keys on Windows and Mac.

modifier   Windows         Mac
alt        Alt             ⌥ Option
control    Ctrl            ⌃ Control
meta       Not supported   ⌘ Command
accel      Ctrl            ⌘ Command

    Future work

• Add a shortcut for the renaming function
    • Preferences panel to allow users to customize the rules.
    • Better way to extract the short name of the journal.
    Visit original content creator repository https://github.com/Theigrams/zotero-pdf-custom-rename
  • proto2gql

    proto2gql

    The project has been migrated to https://github.com/EGT-Ukraine/go2gql.

A tool that generates a graphql-go schema for a .proto file.

    Installation

    $ go get github.com/saturn4er/proto2gql/cmd/proto2gql

    Usage

To generate GraphQL fields from a .proto file:

    $ ./proto2gql
    

    Generation process


    Config example

    paths:                         # path, where parser will search for imports
      - "${GOPATH}/src/"     
generate_tracer: true          # if true, generated code will trace all function calls
    
    output_package: "graphql"      # Common Golang package for generated files 
    output_path: "./out"           # Path, where generator will put generated files
    
    imports:                       # .proto files imports settings
      output_package: "imports"    # Golang package name for generated imports
      output_path: "./out/imports" # Path, where generator will put generated imports files
      aliases:                     # Global aliases for imports. 
        google/protobuf/timestamp.proto:  "github.com/gogo/protobuf/protobuf/google/protobuf/timestamp.proto"
      settings:
        "${GOPATH}src/github.com/gogo/protobuf/protobuf/google/protobuf/timestamp.proto":
          go_package: "github.com/gogo/protobuf/types"   # golang package, of generated .proto file
          gql_enums_prefix: "TS"                         # prefix, which will be added to all generated GraphQL Enums
          gql_messages_prefix: "TS"                      # prefix, which will be added to all generated GraphQL Messages(including maps)
           
    
    protos:
      - proto_path: "./example/example.proto"           # path to .proto file              
        output_path: "./schema/example"                 # path, where generator will put generated file
        output_package: "example"                       # Golang package for generated file
        paths:                                          # path, where parser will search for imports.  
          - "${GOPATH}/src/github.com/saturn4er/proto2gql/example/"
        gql_messages_prefix: "Example"                  # prefix, which will be added to all generated GraphQL Messages(including maps)
        gql_enums_prefix: "Example"                     # prefix, which will be added to all generated GraphQL Enums
        imports_aliases:                                # imports aliases
          google/protobuf/timestamp.proto:  "github.com/google/protobuf/google/protobuf/timestamp.proto"
        services:             
          ServiceExample:
            alias: "NonServiceExample"                  # service name alias
            methods:  
              queryMethod:                              
                alias: "newQueryMethod"                 # method name alias
                request_type: "QUERY"                   # GraphQL query type (QUERY|MUTATION)
        messages:
          MessageName:
            error_field: "errors"                       # recognize this field as payload error. You can access it in interceptors
            fields:
              message_field: {context_key: "ctx_field_key"}  # Resolver, will try to fetch this field from context instead of fetching it from arguments
              
    schemas:  
      - name: "SomeSchema"                  # Schema name
        output_path: "./out/schema.go"      # Where generator will put fabric for this schema
        output_package: "test_schema"       # Go package name for schema file
        queries:
          type: "SERVICE"                   
          proto: "Example"
          service: "ServiceExample"
          filter_fields:
            - "MsgsWithEpmty"
          exclude_fields:
            - "excludedField"
    
        mutations:
          type: "OBJECT"
          fields:
            - field: "nested_example_mutation"
              type: "OBJECT"
              object_name: "NestedExampleMutation"
              fields:
                - field: "ExampleService"
                  type: "SERVICE"
                  object_name: "ServiceExampleMutations"
                  proto: "Example"
                  service: "ServiceExample"
                  filter_fields:
                    - "MsgsWithEpmty"
     

    Interceptors

There are two types of interceptors. The first can run custom logic while GraphQL arguments are parsed into the request message, and the second intercepts the gRPC call. Here’s an example of how to work with them:

    package main
    
    import (
    	"fmt"
    	
    	"google.golang.org/grpc"
    	"github.com/saturn4er/proto2gql/api/interceptors"
    )
    	
    
    func main(){
        ih := interceptors.InterceptorHandler{}
        ih.OnResolveArgs(func(ctx *interceptors.Context, next interceptors.ResolveArgsInvoker) (result interface{}, err error) {
        	fmt.Println("Before resolving request message")
        	req, err := next()
        	fmt.Println("After resolving request message")
        	return req, err
        })
        ih.OnCall(func(ctx *interceptors.Context, req interface{}, next interceptors.CallMethodInvoker, opts ...grpc.CallOption) (result interface{}, err error) {
            fmt.Println("Before GRPC Call")
            res, err := next(req, opts...)
            fmt.Println("After GRPC Call")
            return res, err
        })
        // queriesFields := GetSomeServiceQueriesFields(someClient, ih)
        // create other schema...
    }

    How generated code works

    workflow

    Todo

    • fields generation
    • schema generation
    • bytes fields
    • test resolvers
    • other languages support ???
    Visit original content creator repository https://github.com/saturn4er/proto2gql
  • fm–time-tracking-dashboard

    Frontend Mentor – Time tracking dashboard

    Design preview for the Time tracking dashboard coding challenge

    Welcome! 👋

    Thanks for checking out this front-end coding challenge.

    Frontend Mentor challenges help you improve your coding skills by building realistic projects.

    To do this challenge, you need a basic understanding of HTML, CSS and JavaScript.

    The challenge

    Your challenge is to build out this dashboard and get it looking as close to the design as possible.

    You can use any tools you like to help you complete the challenge. So if you’ve got something you’d like to practice, feel free to give it a go.

    If you would like to practice working with JSON data, we provide a local data.json file for the activities. This means you’ll be able to pull the data from there instead of using the content in the .html file.
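The field names below are a hypothetical sketch of what one activity entry might look like (check the actual data.json in the starter code):

[
  {
    "title": "Work",
    "timeframes": {
      "daily":   { "current": 5,   "previous": 7 },
      "weekly":  { "current": 32,  "previous": 36 },
      "monthly": { "current": 103, "previous": 128 }
    }
  }
]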

    Your users should be able to:

    • View the optimal layout for the site depending on their device’s screen size
    • See hover states for all interactive elements on the page
    • Switch between viewing Daily, Weekly, and Monthly stats

    Want some support on the challenge? Join our community and ask questions in the #help channel.

    Expected behaviour

• The text for the previous period’s time should change based on the active timeframe. For Daily, it should read “Yesterday”, e.g. “Yesterday – 2hrs”. For Weekly, it should read “Last Week”, e.g. “Last Week – 32hrs”. For Monthly, it should read “Last Month”, e.g. “Last Month – 19hrs”.

    Where to find everything

    Your task is to build out the project to the designs inside the /design folder. You will find both a mobile and a desktop version of the design.

    The designs are in JPG static format. Using JPGs will mean that you’ll need to use your best judgment for styles such as font-size, padding and margin.

    If you would like the design files (we provide Sketch & Figma versions) to inspect the design in more detail, you can subscribe as a PRO member.

    You will find all the required assets in the /images folder. The assets are already optimized.

    There is also a style-guide.md file containing the information you’ll need, such as color palette and fonts.

    Building your project

    Feel free to use any workflow that you feel comfortable with. Below is a suggested process, but do not feel like you need to follow these steps:

    1. Initialize your project as a public repository on GitHub. Creating a repo will make it easier to share your code with the community if you need help. If you’re not sure how to do this, have a read-through of this Try Git resource.
    2. Configure your repository to publish your code to a web address. This will also be useful if you need some help during a challenge as you can share the URL for your project with your repo URL. There are a number of ways to do this, and we provide some recommendations below.
    3. Look through the designs to start planning out how you’ll tackle the project. This step is crucial to help you think ahead for CSS classes to create reusable styles.
    4. Before adding any styles, structure your content with HTML. Writing your HTML first can help focus your attention on creating well-structured content.
    5. Write out the base styles for your project, including general content styles, such as font-family and font-size.
    6. Start adding styles to the top of the page and work down. Only move on to the next section once you’re happy you’ve completed the area you’re working on.

    Deploying your project

As mentioned above, there are many ways to host your project for free. You can host your site using any of our recommended and trusted providers. Read more about our recommended and trusted hosts.

    Create a custom README.md

    We strongly recommend overwriting this README.md with a custom one. We’ve provided a template inside the README-template.md file in this starter code.

    The template provides a guide for what to add. A custom README will help you explain your project and reflect on your learnings. Please feel free to edit our template as much as you like.

    Once you’ve added your information to the template, delete this file and rename the README-template.md file to README.md. That will make it show up as your repository’s README file.

    Submitting your solution

    Submit your solution on the platform for the rest of the community to see. Follow our “Complete guide to submitting solutions” for tips on how to do this.

    Remember, if you’re looking for feedback on your solution, be sure to ask questions when submitting it. The more specific and detailed you are with your questions, the higher the chance you’ll get valuable feedback from the community.

    Sharing your solution

    There are multiple places you can share your solution:

    1. Share your solution page in the #finished-projects channel of the community.
2. Tweet @frontendmentor, including the repo and live URLs in the tweet. We’d love to take a look at what you’ve built and help share it around.
    3. Share your solution on other social channels like LinkedIn.
    4. Blog about your experience building your project. Writing about your workflow, technical choices, and talking through your code is a brilliant way to reinforce what you’ve learned. Great platforms to write on are dev.to, Hashnode, and CodeNewbie.

    We provide templates to help you share your solution once you’ve submitted it on the platform. Please do edit them and include specific questions when you’re looking for feedback.

    The more specific you are with your questions the more likely it is that another member of the community will give you feedback.

    Got feedback for us?

    We love receiving feedback! We’re always looking to improve our challenges and our platform. So if you have anything you’d like to mention, please email hi[at]frontendmentor[dot]io.

    This challenge is completely free. Please share it with anyone who will find it useful for practice.

    Have fun building! 🚀

    Visit original content creator repository https://github.com/sutaC/Time-tracking-dashboard
  • functional-input-GP

    Functional-Input Gaussian Processes with Applications to Inverse Scattering Problems (Reproducibility)

    Chih-Li Sung December 1, 2022

These instructions aim to reproduce the results in the paper “Functional-Input Gaussian Processes with Applications to Inverse Scattering Problems” by Sung et al. (link). Hereafter, functional-input Gaussian process is abbreviated as FIGP.

    The following results are reproduced in this file

    • The sample path plots in Section S8 (Figures S1 and S2)
    • The prediction results in Section 4 (Table 1, Tables S1 and S2)
    • The plots and prediction results in Section 5 (Figures 2, S3 and S4 and Table 2)
    Step 0.1: load functions and packages
    library(randtoolbox)
    library(R.matlab)
    library(cubature)
    library(plgp)
    source("FIGP.R")                # FIGP 
    source("matern.kernel.R")       # matern kernel computation
    source("FIGP.kernel.R")         # kernels for FIGP
    source("loocv.R")               # LOOCV for FIGP
    source("KL.expan.R")            # KL expansion for comparison
    source("GP.R")                  # conventional GP
    Step 0.2: setting
    set.seed(1) #set a random seed for reproducing
    eps <- sqrt(.Machine$double.eps) #small nugget for numeric stability

    Reproducing Section S8: Sample Path

    Set up the kernel functions introduced in Section 3. kernel.linear is the linear kernel in Section 3.1, while kernel.nonlinear is the non-linear kernel in Section 3.2.

    kernel.linear <- function(nu, theta, rnd=5000){
      x <- seq(0,2*pi,length.out = rnd)
      R <- sqrt(distance(x*theta))
      Phi <- matern.kernel(R, nu=nu)
      a <- seq(0,1,0.01)
      n <- length(a)
      A <- matrix(0,ncol=n,nrow=rnd)
      for(i in 1:n)  A[,i] <- sin(a[i]*x)
      K <- t(A) %*% Phi %*% A / rnd^2
      return(K)
    }
    kernel.nonlinear <- function(nu, theta, rnd=5000){
      x <- seq(0,2*pi,length.out = rnd)
      a <- seq(0,1,0.01)
      n <- length(a)
      A <- matrix(0,ncol=n,nrow=rnd)
      for(i in 1:n)  A[,i] <- sin(a[i]*x)
      R <- sqrt(distance(t(A)*theta)/rnd)
      
      K <- matern.kernel(R, nu=nu)
      return(K)
    }
    Reproducing Figure S1

    Consider a linear kernel with various choices of parameter settings, including nu, theta, s2.

    • First row: Set theta=1 and s2=1 and set different values for nu, which are 0.5, 3, and 10.
    • Second row: Set nu=2.5 and s2=1 and set different values for theta, which are 0.01, 1, and 100.
• Third row: Set nu=2.5 and theta=1 and set different values for s2, which are 0.1, 1, and 100.
    theta <- 1
    s2 <- 1
    nu <- c(0.5,3,10)
    K1 <- kernel.linear(nu=nu[1], theta=theta)
    K2 <- kernel.linear(nu=nu[2], theta=theta) 
    K3 <- kernel.linear(nu=nu[3], theta=theta) 
    
    par(mfrow=c(3,3), mar = c(4, 4, 2, 1))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2*K1)), type="l", col=1, lty=1, 
            xlab=expression(alpha), ylab="y", main=expression(nu==1/2))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2*K2)), type="l", col=2, lty=2, 
            xlab=expression(alpha), ylab="y", main=expression(nu==3))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2*K3)), type="l", col=3, lty=3, xlab=expression(alpha), 
            ylab="y", main=expression(nu==10))
    
    nu <- 2.5
    theta <- c(0.01,1,100)
    s2 <- 1
    K1 <- kernel.linear(nu=nu, theta=theta[1])
    K2 <- kernel.linear(nu=nu, theta=theta[2]) 
    K3 <- kernel.linear(nu=nu, theta=theta[3])
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2*K1)), type="l", col=1, lty=1, 
            xlab=expression(alpha), ylab="y", main=expression(theta==0.01))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2*K2)), type="l", col=2, lty=2, 
            xlab=expression(alpha), ylab="y", main=expression(theta==1))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2*K3)), type="l", col=3, lty=3, xlab=expression(alpha), 
            ylab="y", main=expression(theta==100))
    
    nu <- 2.5
    theta <- 1
    s2 <- c(0.1,1,100)
    K1 <- kernel.linear(nu=nu, theta=theta)
    K2 <- kernel.linear(nu=nu, theta=theta) 
    K3 <- kernel.linear(nu=nu, theta=theta) 
    
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2[1]*K1)), type="l", col=1, lty=1, 
            xlab=expression(alpha), ylab="y", main=expression(sigma^2==0.1))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2[2]*K2)), type="l", col=2, lty=2, 
            xlab=expression(alpha), ylab="y", main=expression(sigma^2==1))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2[3]*K3)), type="l", col=3, lty=3, xlab=expression(alpha), 
            ylab="y", main=expression(sigma^2==100))

    Reproducing Figure S2

    Consider a non-linear kernel with various choices of parameter settings, including nu, gamma, s2.

    • First row: Set gamma=1 and s2=1 and set different values for nu, which are 0.5, 2, and 10.
    • Second row: Set nu=2.5 and s2=1 and set different values for gamma, which are 0.1, 1, and 10.
    • Third row: Set nu=2.5 and gamma=1 and set different values for s2, which are 0.1, 1, and 100.
    gamma <- 1
    s2 <- 1
    nu <- c(0.5,2,10)
    K1 <- kernel.nonlinear(nu=nu[1], theta=gamma)
    K2 <- kernel.nonlinear(nu=nu[2], theta=gamma) 
    K3 <- kernel.nonlinear(nu=nu[3], theta=gamma) 
    
    par(mfrow=c(3,3), mar = c(4, 4, 2, 1))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2*K1)), type="l", col=1, lty=1, 
            xlab=expression(alpha), ylab="y", main=expression(nu==1/2))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2*K2)), type="l", col=2, lty=2, 
            xlab=expression(alpha), ylab="y", main=expression(nu==2))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2*K3)), type="l", col=3, lty=3, xlab=expression(alpha), 
            ylab="y", main=expression(nu==10))
    
    nu <- 2.5
    gamma <- c(0.1,1,10)
    s2 <- 1
    K1 <- kernel.nonlinear(nu=nu, theta=gamma[1])
    K2 <- kernel.nonlinear(nu=nu, theta=gamma[2]) 
    K3 <- kernel.nonlinear(nu=nu, theta=gamma[3])
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2*K1)), type="l", col=1, lty=1, 
            xlab=expression(alpha), ylab="y", main=expression(gamma==0.1))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2*K2)), type="l", col=2, lty=2, 
            xlab=expression(alpha), ylab="y", main=expression(gamma==1))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2*K3)), type="l", col=3, lty=3, xlab=expression(alpha), 
            ylab="y", main=expression(gamma==10))
    
    nu <- 2.5
    gamma <- 1
    s2 <- c(0.1,1,100)
    K1 <- kernel.nonlinear(nu=nu, theta=gamma)
    K2 <- kernel.nonlinear(nu=nu, theta=gamma) 
    K3 <- kernel.nonlinear(nu=nu, theta=gamma) 
    
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2[1]*K1)), type="l", col=1, lty=1, 
            xlab=expression(alpha), ylab="y", main=expression(sigma^2==0.1))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2[2]*K2)), type="l", col=2, lty=2, 
            xlab=expression(alpha), ylab="y", main=expression(sigma^2==1))
    matplot(seq(0,1,0.01), t(rmvnorm(8,sigma=s2[3]*K3)), type="l", col=3, lty=3, xlab=expression(alpha), 
            ylab="y", main=expression(sigma^2==100))

    Reproducing Section 4: Prediction Performance

    Three different test functions are considered:

    • $f_1(g)=\int\int g$
    • $f_2(g)=\int\int g^3$
    • $f_3(g)=\int\int \sin(g^2)$

    Eight training functional inputs are

    • $g(x_1,x_2)=x_1+x_2$
    • $g(x_1,x_2)=x_1^2$
    • $g(x_1,x_2)=x_2^2$
    • $g(x_1,x_2)=1+x_1$
    • $g(x_1,x_2)=1+x_2$
    • $g(x_1,x_2)=1+x_1x_2$
    • $g(x_1,x_2)=\sin(x_1)$
    • $g(x_1,x_2)=\cos(x_1+x_2)$

    The domain space of $x$ is $[0,1]^2$.

    Test functional inputs are

    • $g(x_1,x_2)=\sin(\alpha_1x_1+\alpha_2x_2)$
    • $g(x_1,x_2)=\beta +x_1^2+x_2^3$
    • $g(x_1,x_2)=\exp(-\kappa x_1x_2)$

    with random $\alpha_1,\alpha_2, \beta$ and $\kappa$ from $[0,1]$.

    # training functional inputs (G)
    G <- list(function(x) x[1]+x[2],
              function(x) x[1]^2,
              function(x) x[2]^2,
              function(x) 1+x[1],
              function(x) 1+x[2],
              function(x) 1+x[1]*x[2],
              function(x) sin(x[1]),
              function(x) cos(x[1]+x[2]))
    n <- length(G)
    # y1: integrate g function from 0 to 1
    y1 <- rep(0, n) 
    for(i in 1:n) y1[i] <- hcubature(G[[i]], lower=c(0, 0),upper=c(1,1))$integral
    
    # y2: integrate g^3 function from 0 to 1
    G.cubic <- list(function(x) (x[1]+x[2])^3,
                     function(x) (x[1]^2)^3,
                     function(x) (x[2]^2)^3,
                     function(x) (1+x[1])^3,
                     function(x) (1+x[2])^3,
                     function(x) (1+x[1]*x[2])^3,
                     function(x) (sin(x[1]))^3,
                     function(x) (cos(x[1]+x[2]))^3)
    y2 <- rep(0, n) 
    for(i in 1:n) y2[i] <- hcubature(G.cubic[[i]], lower=c(0, 0),upper=c(1,1))$integral
    
    # y3: integrate sin(g^2) function from 0 to 1
    G.sin <- list(function(x) sin((x[1]+x[2])^2),
                  function(x) sin((x[1]^2)^2),
                  function(x) sin((x[2]^2)^2),
                  function(x) sin((1+x[1])^2),
                  function(x) sin((1+x[2])^2),
                  function(x) sin((1+x[1]*x[2])^2),
                  function(x) sin((sin(x[1]))^2),
                  function(x) sin((cos(x[1]+x[2]))^2))
    y3 <- rep(0, n) 
    for(i in 1:n) y3[i] <- hcubature(G.sin[[i]], lower=c(0, 0),upper=c(1,1))$integral
    Reproducing Table S1
    Y <- cbind(y1,y2,y3)
    knitr::kable(round(t(Y),2))
    y1 1.00 0.33 0.33 1.50 1.50 1.25 0.46 0.50
    y2 1.50 0.14 0.14 3.75 3.75 2.15 0.18 0.26
    y3 0.62 0.19 0.19 0.49 0.49 0.84 0.26 0.33

Now we are ready to fit a FIGP model. In each iteration of the for loop, we fit a FIGP for each of y1, y2, and y3, and compute the LOOCV errors via the loocv function.

    loocv.l <- loocv.nl <- rep(0,3)
    gp.fit <- gpnl.fit <- vector("list", 3)
    set.seed(1)
    for(i in 1:3){
      # fit FIGP with a linear kernel
      gp.fit[[i]] <- FIGP(G, d=2, Y[,i], nu=2.5, nug=eps, kernel="linear")
      loocv.l[i] <- loocv(gp.fit[[i]])
      
      # fit FIGP with a nonlinear kernel
      gpnl.fit[[i]] <- FIGP(G, d=2, Y[,i], nu=2.5, nug=eps, kernel="nonlinear")
      loocv.nl[i] <- loocv(gpnl.fit[[i]])
    }

    As a comparison, we consider two basis expansion approaches. The first method is KL expansion.

    # for comparison: basis expansion approach
    # KL expansion that explains 99% of the variance
    set.seed(1)
    KL.out <- KL.expan(d=2, G, fraction=0.99, rnd=1e3)
    B <- KL.out$B
      
    KL.fit <- vector("list", 3)
    # fit a conventional GP on the scores
    for(i in 1:3) KL.fit[[i]] <- sepGP(B, Y[,i], nu=2.5, nug=eps)

    The second method is Taylor expansion with degree 3.

    # for comparison: basis expansion approach
    # Taylor expansion coefficients for each functional input
    taylor.coef <- matrix(c(0,1,1,rep(0,7),
                            rep(0,4),1,rep(0,5),
                            rep(0,5),1,rep(0,4),
                            rep(1,2),rep(0,8),
                            1,0,1,rep(0,7),
                            1,0,0,1,rep(0,6),
                            0,1,rep(0,6),-1/6,0,
                            1,0,0,-1,-1/2,-1/2,rep(0,4)),ncol=10,byrow=TRUE)
    
    TE.fit <- vector("list", 3)
    # fit a conventional GP on the coefficients
    for(i in 1:3) TE.fit[[i]] <- sepGP(taylor.coef, Y[,i], nu=2.5, nug=eps, scale.fg=FALSE, iso.fg=TRUE)

    Let’s make predictions on the test functional inputs. We test n.test times.

    set.seed(1)
    n.test <- 100
    
    alpha1 <- runif(n.test,0,1)
    alpha2 <- runif(n.test,0,1)
    beta1 <- runif(n.test,0,1)
    kappa1 <- runif(n.test,0,1)
    
    mse.linear <- mse.nonlinear <- mse.kl <- mse.te <- 
      cvr.linear <- cvr.nonlinear <- cvr.kl <- cvr.te <- 
      score.linear <- score.nonlinear <- score.kl <- score.te <-rep(0,3)
    
    # scoring rule function
    score <- function(x, mu, sig2){
      if(any(sig2==0)) sig2[sig2==0] <- eps
      -(x-mu)^2/sig2-log(sig2)
    }
    
    for(i in 1:3){
      mse.linear.i <- mse.nonlinear.i <- mse.kl.i <- mse.te.i <- 
        cvr.linear.i <- cvr.nonlinear.i <- cvr.kl.i <- cvr.te.i <- 
        score.linear.i <- score.nonlinear.i <- score.kl.i <- score.te.i <- rep(0, n.test)
      for(ii in 1:n.test){
        gnew <- list(function(x) sin(alpha1[ii]*x[1]+alpha2[ii]*x[2]),
                     function(x) beta1[ii]+x[1]^2+x[2]^3,
                     function(x) exp(-kappa1[ii]*x[1]*x[2]))    
        if(i==1){
          g.int <- gnew
        }else if(i==2){
          g.int <- list(function(x) (sin(alpha1[ii]*x[1]+alpha2[ii]*x[2]))^3,
                        function(x) (beta1[ii]+x[1]^2+x[2]^3)^3,
                        function(x) (exp(-kappa1[ii]*x[1]*x[2]))^3)
        }else if(i==3){
          g.int <- list(function(x) sin((sin(alpha1[ii]*x[1]+alpha2[ii]*x[2]))^2),
                        function(x) sin((beta1[ii]+x[1]^2+x[2]^3)^2),
                        function(x) sin((exp(-kappa1[ii]*x[1]*x[2]))^2))
        }
        
        n.new <- length(gnew)
        y.true <- rep(0,n.new)
        for(iii in 1:n.new) y.true[iii] <- hcubature(g.int[[iii]], lower=c(0, 0),upper=c(1,1))$integral
        
        # FIGP: linear kernel
        ynew <- pred.FIGP(gp.fit[[i]], gnew)
        mse.linear.i[ii] <- mean((y.true - ynew$mu)^2)
        lb <- ynew$mu - qnorm(0.975)*sqrt(ynew$sig2)
        ub <- ynew$mu + qnorm(0.975)*sqrt(ynew$sig2)
        cvr.linear.i[ii] <- mean(y.true > lb & y.true < ub)
        score.linear.i[ii] <- mean(score(y.true, ynew$mu, ynew$sig2))
        
        # FIGP: nonlinear kernel
        ynew <- pred.FIGP(gpnl.fit[[i]], gnew)
        mse.nonlinear.i[ii] <- mean((y.true - ynew$mu)^2)
        lb <- ynew$mu - qnorm(0.975)*sqrt(ynew$sig2)
        ub <- ynew$mu + qnorm(0.975)*sqrt(ynew$sig2)
        cvr.nonlinear.i[ii] <- mean(y.true > lb & y.true < ub)
        score.nonlinear.i[ii] <- mean(score(y.true, ynew$mu, ynew$sig2))
        
        # FPCA
        B.new <- KL.Bnew(KL.out, gnew)
        ynew <- pred.sepGP(KL.fit[[i]], B.new)
        mse.kl.i[ii] <- mean((y.true - ynew$mu)^2)
        lb <- ynew$mu - qnorm(0.975)*sqrt(ynew$sig2)
        ub <- ynew$mu + qnorm(0.975)*sqrt(ynew$sig2)
        cvr.kl.i[ii] <- mean(y.true > lb & y.true < ub)
        score.kl.i[ii] <- mean(score(y.true, ynew$mu, ynew$sig2))
        
        # Taylor expansion
        taylor.coef.new <- matrix(c(0,alpha1[ii],alpha2[ii],0,0,0,alpha1[ii]^2*alpha2[ii]/2,alpha1[ii]*alpha2[ii]^2/2,alpha1[ii]^3/6,alpha2[ii]^3/6,
                                    beta1[ii],rep(0,3),1,rep(0,4),1,
                                    1,0,0,-kappa1[ii],rep(0,6)),ncol=10,byrow=TRUE)
        ynew <- pred.sepGP(TE.fit[[i]], taylor.coef.new)
        mse.te.i[ii] <- mean((y.true - ynew$mu)^2)
        lb <- ynew$mu - qnorm(0.975)*sqrt(ynew$sig2)
        ub <- ynew$mu + qnorm(0.975)*sqrt(ynew$sig2)
        cvr.te.i[ii] <- mean(y.true > lb & y.true < ub)
        score.te.i[ii] <- mean(score(y.true, ynew$mu, ynew$sig2))
      }
      mse.linear[i] <- mean(mse.linear.i)
      mse.nonlinear[i] <- mean(mse.nonlinear.i)
      mse.kl[i] <- mean(mse.kl.i)
      mse.te[i] <- mean(mse.te.i)
      cvr.linear[i] <- mean(cvr.linear.i)*100
      cvr.nonlinear[i] <- mean(cvr.nonlinear.i)*100
      cvr.kl[i] <- mean(cvr.kl.i)*100
      cvr.te[i] <- mean(cvr.te.i)*100
      score.linear[i] <- mean(score.linear.i)
      score.nonlinear[i] <- mean(score.nonlinear.i)
      score.kl[i] <- mean(score.kl.i)
      score.te[i] <- mean(score.te.i)
    }
    Reproducing Table 1
    out <- rbind(format(loocv.l,digits=4),
                 format(loocv.nl,digits=4),
                 format(mse.linear,digits=4),
                 format(mse.nonlinear,digits=4),
                 format(sapply(gp.fit,"[[", "ElapsedTime"),digits=4),
                 format(sapply(gpnl.fit,"[[", "ElapsedTime"),digits=4))
    rownames(out) <- c("linear LOOCV", "nonlinear LOOCV", "linear MSE", "nonlinear MSE", "linear time", "nonlinear time")
    colnames(out) <- c("y1", "y2", "y3")
    knitr::kable(out)
                  y1         y2         y3
linear LOOCV      7.867e-07  1.813e+00  4.541e-01
nonlinear LOOCV   2.150e-06  2.274e-01  1.662e-02
linear MSE        6.388e-10  1.087e+00  1.397e-01
nonlinear MSE     3.087e-07  1.176e-02  1.640e-02
linear time       8.650      8.488      8.450
nonlinear time    0.728      0.908      0.972
    Reproducing Table S2
    select.idx <- apply(rbind(loocv.l, loocv.nl), 2, which.min)
    select.mse <- diag(rbind(mse.linear, mse.nonlinear)[select.idx,])
    select.cvr <- diag(rbind(cvr.linear, cvr.nonlinear)[select.idx,])
    select.score <- diag(rbind(score.linear, score.nonlinear)[select.idx,])
    
    out <- rbind(format(select.mse,digits=4),
                 format(mse.kl,digits=4),
                 format(mse.te,digits=4),
                 format(select.cvr,digits=4),
                 format(cvr.kl,digits=4),
                 format(cvr.te,digits=4),
                 format(select.score,digits=4),
                 format(score.kl,digits=4),
                 format(score.te,digits=4))
    rownames(out) <- c("FIGP MSE", "Basis MSE", "T3 MSE", 
                       "FIGP coverage", "Basis coverage", "T3 coverage", 
                       "FIGP score", "Basis score", "T3 score")
    colnames(out) <- c("y1", "y2", "y3")
    knitr::kable(out)
                 y1         y2         y3
FIGP MSE         6.388e-10  1.176e-02  1.640e-02
Basis MSE        0.0001827  0.1242804  0.0227310
T3 MSE           0.09349    1.27116    0.04747
FIGP coverage    96.33      100.00     100.00
Basis coverage   100.00     92.33      76.00
T3 coverage      100.00     98.33      100.00
FIGP score       14.899     2.571      3.458
Basis score      6.6306     1.2074     0.2902
T3 score         1.064      -1.364     2.047

    Reproducing Section 5: Inverse Scattering Problems

Now we move to a real application: the inverse scattering problem. First, since the data were generated in Matlab, we use the function readMat in the R.matlab package to read the data. There were ten training data points, where the functional inputs are

    • $g(x_1,x_2)=1$
    • $g(x_1,x_2)=1+x_1$
    • $g(x_1,x_2)=1-x_1$
    • $g(x_1,x_2)=1+x_1x_2$
    • $g(x_1,x_2)=1-x_1x_2$
    • $g(x_1,x_2)=1+x_2$
    • $g(x_1,x_2)=1+x_1^2$
    • $g(x_1,x_2)=1-x_1^2$
    • $g(x_1,x_2)=1+x_2^2$
    • $g(x_1,x_2)=1-x_2^2$
    Reproducing Figure 2

    The outputs are displayed as follows, which reproduces Figure 2.

    func.title <- c("g(x1,x2)=1", "g(x1,x2)=1+x1", "g(x1,x2)=1-x1","g(x1,x2)=1+x1x2",
                    "g(x1,x2)=1-x1x2","g(x1,x2)=1+x2","g(x1,x2)=1+x1^2","g(x1,x2)=1-x1^2",
                    "g(x1,x2)=1+x2^2","g(x1,x2)=1-x2^2")
    
    output.mx <- matrix(0,nrow=10,ncol=32*32)
    par(mfrow=c(2,5))
    par(mar = c(1, 1, 2, 1))
    for(i in 1:10){
      g.out <- readMat(paste0("DATA/q_func",i,".mat"))$Ffem
      image(Re(g.out), zlim=c(0.05,0.11),yaxt="n",xaxt="n",
            col=heat.colors(12, rev = FALSE),
            main=func.title[i])
      contour(Re(g.out), add = TRUE, nlevels = 5)
      output.mx[i,] <- c(Re(g.out))
    }

We perform PCA (principal component analysis) for dimension reduction, which shows that only three components are needed to explain more than 99.99% of the variation in the data.

    pca.out <- prcomp(output.mx, scale = FALSE, center = FALSE)
    n.comp <- which(summary(pca.out)$importance[3,] > 0.9999)[1]
    print(n.comp)
    ## PC3 
    ##   3
    
    Reproducing Figure S3

    Plot the three principal components, which reproduces Figure S3.

    par(mfrow=c(1,3))
    par(mar = c(1, 1, 2, 1))
    for(i in 1:n.comp){
      eigen.vec <- matrix(c(pca.out$rotation[,i]), 32, 32)
      image(eigen.vec,yaxt="n",xaxt="n",
            col=heat.colors(12, rev = FALSE),
            main=paste("PC",i))
      contour(eigen.vec, add = TRUE, nlevels = 5)
    }

    Now we are ready to fit the FIGP model on those PC scores. Similarly, we fit the FIGP with a linear kernel and a nonlinear kernel.

    # training functional inputs (G)
    G <- list(function(x) 1,
              function(x) 1+x[1],
              function(x) 1-x[1],
              function(x) 1+x[1]*x[2],
              function(x) 1-x[1]*x[2],
              function(x) 1+x[2],
              function(x) 1+x[1]^2,
              function(x) 1-x[1]^2,
              function(x) 1+x[2]^2,
              function(x) 1-x[2]^2)
    n <- length(G)
    
    set.seed(1)
    gp.fit <- gpnl.fit <- vector("list",n.comp)
    for(i in 1:n.comp){
      y <- pca.out$x[,i]
      # fit FIGP with a linear kernel  
      gp.fit[[i]] <- FIGP(G, d=2, y, nu=2.5, nug=eps, kernel = "linear")
      # fit FIGP with a nonlinear kernel    
      gpnl.fit[[i]] <- FIGP(G, d=2, y, nu=2.5, nug=eps, kernel = "nonlinear")
    }

    Perform a LOOCV to see which kernel is a better choice.

    loocv.recon <- sapply(gp.fit, loocv.pred) %*% t(pca.out$rotation[,1:n.comp])
    loocv.linear <- mean((loocv.recon - output.mx)^2)
    
    loocv.nl.recon <- sapply(gpnl.fit, loocv.pred) %*% t(pca.out$rotation[,1:n.comp])
    loocv.nonlinear <- mean((loocv.nl.recon - output.mx)^2)
    
    out <- c(loocv.linear, loocv.nonlinear)
    names(out) <- c("linear", "nonlinear")
    print(out)
    ##       linear    nonlinear 
    ## 3.648595e-06 1.156923e-05
    

We see that the linear kernel leads to a smaller LOOCV error, which indicates that it’s the better choice.

    Reproducing Figure S4

    Thus, we perform the predictions on a test input using the FIGP model with the linear kernel, which is

    • $g(x_1,x_2)=1-\sin(x_2)$
    # test functional inputs (gnew)
    gnew <- list(function(x) 1-sin(x[2]))
    n.new <- length(gnew)
    
    # make predictions using a linear kernel
    ynew <- s2new <- matrix(0,ncol=n.comp,nrow=n.new)
    for(i in 1:n.comp){
      pred.out <- pred.FIGP(gp.fit[[i]], gnew)
      ynew[,i] <- pred.out$mu
      s2new[,i] <- pred.out$sig2
    }
    
    # reconstruct the image
    pred.recon <- ynew %*% t(pca.out$rotation[,1:n.comp])
    s2.recon <- s2new %*% t(pca.out$rotation[,1:n.comp]^2)
    
    # FPCA method for comparison
    KL.out <- KL.expan(d=2, G, fraction=0.99, rnd=1e3)
    B <- KL.out$B
    B.new <- KL.Bnew(KL.out, gnew)
    
    ynew <- s2new <- matrix(0,ncol=n.comp,nrow=n.new)
    KL.fit <- vector("list", n.comp)
    for(i in 1:n.comp){
      KL.fit[[i]] <- sepGP(B, pca.out$x[,i], nu=2.5, nug=eps)
      pred.out <- pred.sepGP(KL.fit[[i]], B.new)
      ynew[,i] <- drop(pred.out$mu)
      s2new[,i] <- drop(pred.out$sig2)
    }
    
    # reconstruct the image
    pred.KL.recon <- ynew %*% t(pca.out$rotation[,1:n.comp])
    s2.KL.recon <- s2new %*% t(pca.out$rotation[,1:n.comp]^2)
    
    # Taylor method for comparison
    ynew <- s2new <- matrix(0,ncol=n.comp,nrow=n.new)
    taylor.coef <- matrix(c(c(1,1,0,0,0,0,0),
                          c(1,-1,0,0,0,0,0),
                          c(1,0,0,1,0,0,0),
                          c(1,0,0,-1,0,0,0),
                          c(1,0,1,0,0,0,0),
                          c(1,0,-1,0,0,0,0),
                          c(1,0,0,0,1,0,0),
                          c(1,0,0,0,-1,0,0),
                          c(1,0,0,0,0,1,0),
                          c(1,0,0,0,0,-1,0)),ncol=7,byrow=TRUE)
    taylor.coef.new <- matrix(c(1,0,-1,0,0,0,1/6),ncol=7)
    
    TE.fit <- vector("list", n.comp)
    for(i in 1:n.comp) {
      TE.fit[[i]] <- sepGP(taylor.coef, pca.out$x[,i], nu=2.5, nug=eps, scale.fg=FALSE, iso.fg=TRUE)
      pred.out <- pred.sepGP(TE.fit[[i]], taylor.coef.new)
      ynew[,i] <- drop(pred.out$mu)
      s2new[,i] <- drop(pred.out$sig2)
    }
    
    # reconstruct the image
    pred.TE.recon <- ynew %*% t(pca.out$rotation[,1:n.comp])
    s2.TE.recon <- s2new %*% t(pca.out$rotation[,1:n.comp]^2)
    
    # true data on the test data
    gnew.true <- matrix(0, ncol=n.new, nrow=32*32)
    gnew.dat <- readMat(paste0("DATA/q_sine.mat"))$Ffem
    gnew.true[,1] <- c(Re(gnew.dat))
    
    
    # plot the result
    par(mfrow=c(3,3))
    par(mar = c(1, 1, 2, 1))
    
    mse.figp <- mse.kl <- mse.taylor <- 
      score.figp <- score.kl <- score.taylor <- rep(0, n.new)
    
    for(i in 1:n.new){
      image(matrix(gnew.true[,i],32,32), zlim=c(0.05,0.11),yaxt="n",xaxt="n",
            col=heat.colors(12, rev = FALSE),
            main=ifelse(i==1, "g(x1,x2)=1-sin(x2)", "g(x1,x2)=1"))
      contour(matrix(gnew.true[,i],32,32), add = TRUE, nlevels = 5)
      
      image(matrix(pred.recon[i,], 32, 32), zlim=c(0.05,0.11),yaxt="n",xaxt="n",
            col=heat.colors(12, rev = FALSE),
            main="FIGP prediction")
      contour(matrix(pred.recon[i,], 32, 32), add = TRUE, nlevels = 5)
      
      image(matrix(log(s2.recon[i,]), 32, 32), zlim=c(-16,-9), yaxt="n",xaxt="n",
            col=cm.colors(12, rev = FALSE),
            main="FIGP log(variance)")
      contour(matrix(log(s2.recon[i,]), 32, 32), add = TRUE, nlevels = 5)
      
      mse.figp[i] <- mean((gnew.true[,i]-pred.recon[i,])^2)
      score.figp[i] <- mean(score(gnew.true[,i], pred.recon[i,], s2.recon[i,]))
      
      # empty plot
      plot.new()
      
      image(matrix(pred.KL.recon[i,], 32, 32), zlim=c(0.05,0.11),yaxt="n",xaxt="n",
            col=heat.colors(12, rev = FALSE),
            main="FPCA prediction")
      contour(matrix(pred.KL.recon[i,], 32, 32), add = TRUE, nlevels = 5)
      mse.kl[i] <- mean((gnew.true[,i]-pred.KL.recon[i,])^2)
      score.kl[i] <- mean(score(gnew.true[,i], pred.KL.recon[i,], s2.KL.recon[i,]))
      
      image(matrix(log(s2.KL.recon[i,]), 32, 32), zlim=c(-16,-9), yaxt="n",xaxt="n",
            col=cm.colors(12, rev = FALSE),
            main="FPCA log(variance)")
      contour(matrix(log(s2.KL.recon[i,]), 32, 32), add = TRUE, nlevels = 5)
      
      # empty plot
      plot.new()
      
      image(matrix(pred.TE.recon[i,], 32, 32), zlim=c(0.05,0.11),yaxt="n",xaxt="n",
            col=heat.colors(12, rev = FALSE),
            main="T3 prediction")
      contour(matrix(pred.TE.recon[i,], 32, 32), add = TRUE, nlevels = 5)
      
      image(matrix(log(s2.TE.recon[i,]), 32, 32), zlim=c(-16,-9), yaxt="n",xaxt="n",
            col=cm.colors(12, rev = FALSE),
            main="T3 log(variance)")
      contour(matrix(log(s2.TE.recon[i,]), 32, 32), add = TRUE, nlevels = 5)
      
      mse.taylor[i] <- mean((gnew.true[,i]-pred.TE.recon[i,])^2)
      score.taylor[i] <- mean(score(gnew.true[,i], pred.TE.recon[i,], s2.TE.recon[i,]))
    }

    Reproducing Table 2

    The prediction performance for the test data is given below.

    out <- cbind(mse.figp, mse.kl, mse.taylor)
    out <- rbind(out, cbind(score.figp, score.kl, score.taylor))
    colnames(out) <- c("FIGP", "FPCA", "T3")
    rownames(out) <- c("MSE", "score")
    knitr::kable(out)
        FIGP        FPCA      T3
MSE     0.0000011   0.000107  0.0000906
score   12.1301269  6.890083  6.3916707
    Visit original content creator repository https://github.com/ChihLi/functional-input-GP
  • Contentful2Hugo

    Contentful to Hugo

This is an example of how to use Foopipes to create a static but automatically updated website by using Contentful together with Hugo.

    Read more about Contentful CMS: http://contentful.com/

    Read more about Hugo static website generator: http://gohugo.io/

    Read more about Foopipes: http://foopipes.com/

    Quick start

    Prerequisite: Install Docker https://www.docker.com/

    From Powershell:

    $env:spaceId="<contentful_space_id>"
$env:accessToken="<contentful_accessToken>"
    docker-compose up
    

    Bash:

    export spaceId=<contentful_space_id>
export accessToken=<contentful_accessToken>
    docker-compose up
    

    Wait a moment and then browse to http://localhost/

    Or in more detail how to start the Foopipes container manually:

docker run --rm -it -v ${Pwd}:/project -v ${Pwd}/hugo_src/content:/var/output -v ${Pwd}/hugo_src/static/images/:/var/images -p 80 -e "spaceId=<contentful_space_id>" -e "accessToken=<contentful_accessToken>" --name contentful2hugo aretera/foopipes:latest-sdk --verbose=off
    
    • --rm – Remove container when finished
    • -it – Interactive
    • -v – Mount volume
    • -e – Set environment variable
    • -p – Expose port to host
    • --name – Set container name

    Modifying Markdown output

Foopipes generates markdown from the content found in Contentful CMS. You can easily change how the markdown is generated for different content types by modifying the TypeScript file ./modules_src/hugo.ts.

No need to rebuild or compile; just restart the Foopipes Docker container (Ctrl-C, then docker-compose up).

    How does it work?

Foopipes uses the configuration in foopipes.yml to fetch and process the content from Contentful. It then invokes the Node module hugo to convert it to markdown and stores the result to disk. Images (assets) are downloaded as well.

Hugo watches for changes and generates HTML files from the markdown.
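To give a sense of the hand-off, each generated content file is an ordinary Hugo markdown file with front matter, along the lines of this hypothetical example:

+++
title = "An article from Contentful"
date = "2017-01-01"
+++

The article body, converted to markdown by the hugo module...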

    Contentful webhooks

To avoid delays when editing in Contentful, you can use webhooks for immediate publishing of changes. In Contentful, configure a webhook endpoint pointing to /contentfulwebhook on port 80 of the Foopipes container (see below).

    Contentful webhooks using ngrok

A helpful tool for allowing incoming traffic through firewalls and the like is https://ngrok.com/, which can be run inside a Docker container.

    docker run -d -p 4040:4040 --link contentful2hugo --name contentful2hugo_ngrok wernight/ngrok ngrok http contentful2hugo:80
    

    Find the ngrok public url by either browsing to http://localhost:4040/status or

    curl localhost:4040/api/tunnels
    

In the Contentful admin, under Settings -> Webhooks, add a new webhook using the ngrok public URL with the path /contentfulwebhook

    Contentful webhook screen

    For more information about running ngrok in Docker check out https://github.com/wernight/docker-ngrok

    Docker compose

A Docker Compose configuration file is ready for use in this repository. It is configured to run Hugo, Foopipes, ngrok, and an nginx webserver.

    Just start it with

    Powershell:

    $env:spaceId="<contentful_space_id>"
$env:accessToken="<contentful_accessToken>"
    docker-compose up
    

    Bash:

    export spaceId=<contentful_space_id>
export accessToken=<contentful_accessToken>
    docker-compose up
    

    Important note about security

The webhook endpoint is not access restricted and is open to anyone. If you expose it to the internet, either directly or via ngrok, you must consider the security issues that arise. It is recommended that you use some kind of proxy in front of the exposed endpoint to limit access.

    Questions?

    Feel free to send me a message!

    Visit original content creator repository https://github.com/AreteraAB/Contentful2Hugo