Category: Blog

  • kirby-navigation-groups

    Kirby Navigation Groups


    A plugin for Kirby CMS that allows you to organize your navigation items into groups.

    Cover Kirby Navigation Groups

    Features

    • Create and manage navigation groups
    • Drag & drop interface for organizing pages
    • Sync sort order with folder structure
    • Multi-language support (EN, DE, FR, ES, IT)
    • Customizable group fields

    Installation

    Download

    Download and copy this repository to /site/plugins/kirby-navigation-groups

    Composer

    composer require philippoehrlein/kirby-navigation-groups

    Usage

    1. In your blueprint, add a field of type navigationgroups:
    fields:
      navigation:
        label: Navigation
        type: navigationgroups
    2. Optional: Add custom fields to your groups and filter pages by status:
    fields:
      navigation:
        label: Navigation
        type: navigationgroups
        status: listed
        fields:
          description:
            type: textarea
            label: Description
          toggle:
            label: Toggle group
            type: toggle
            text:
              - "no"
              - "yes"

    Options

    The plugin supports the following options:

    • status: Filter for page status (‘all’, ‘listed’, ‘unlisted’, ‘published’, default: ‘listed’)
    • fields: Additional fields for groups

    Field Methods

    The plugin provides the Field Method toGroupItems() to access the stored navigation items:

    <?php
    $items = $page->navigation()->toGroupItems();
    ?>
    
    <nav class="navigation">
      <ul>
      <?php foreach ($items as $item): ?>
      <?php if ($item->type() == 'group'): ?>
        <li>
          <h2><?= $item->title() ?></h2>
          <ul>
            <?php foreach ($item->pages() as $subPage): ?>
              <li><a href="https://github.com/philippoehrlein/<?= $subPage->url() ?>"><?= $subPage->title() ?></a></li>
            <?php endforeach; ?>
          </ul>
        </li>
      <?php else: ?>
        <li><a href="https://github.com/philippoehrlein/<?= $item->url() ?>"><?= $item->title() ?></a></li>
      <?php endif; ?>
      <?php endforeach; ?>
      </ul>
    </nav>

    Development

    If you want to contribute to the development of this plugin, follow these steps:

    1. Clone the repository.
    2. Install dependencies using Composer.
    3. Make your changes and test them in your Kirby installation.

    License

    This plugin is open-source and available under the MIT License.

    Visit original content creator repository https://github.com/philippoehrlein/kirby-navigation-groups
  • InteractionPlugin

    Interaction plug-in for the Unreal Engine.

    The goal of this plug-in is to handle the interaction between the player and objects/actors in the game by implementing a component-based architecture. There are two main components: the Interactor Component, which is added to the player, and the Interaction Component, for interactable objects.

    Instant Interaction

    Hold Interaction

    Introduction

    The Interaction System is designed and developed around a component-to-component communication architecture. All of the logic required during the interaction process is handled and processed by the components attached to the owners, such as characters and interactive objects. The system is mainly made up of two important components: the Interactor Component, which is added to the character or the player pawn, and the Interaction Component, which is required by an interactive object.

    Road map / Features:

    1. Interaction Types
      • Instant [DONE]
      • Hold [DONE]
    2. Condition Based Interaction
      • Single Interaction [DONE]
      • Multiple Interaction [DONE]
      • Custom Defined Condition E.g Team Id [DONE]
    3. Multiplayer/Network Support
      • Interaction States Replication [DONE]
      • Interaction Process [DONE]
      • Notifications for Animations [DONE]
    4. Documentation [DONE]
      • Showcase Level [DONE]

    Getting up and Running

    1. Add an Interactor Component to the Character or Player Pawn
    2. Add an Interaction Component to the Interactive Object (e.g Door)
    3. Set up an Input key to invoke TryStartInteraction on the Interactor Component (Setup Interactor)
    4. Bind to the OnInteractorStateChanged Delegate on Interactor Component to Receive Interaction Results

    Interaction Components

    Class: UInteractionComponent

    The Interaction Component is added to an interactable object (e.g. Doors, Pickups). There are currently two types of Interaction Components.

    • InteractionComponent_Instant:
      Class: UInteractionComponent_Instant
      Interaction is Instantly Completed upon Initiation by an Interactor.
    • InteractionComponent_Hold:
      Class: UInteractionComponent_Hold
      Hold Interaction is a Duration based interaction that requires the Interactor to actively interact with the object for the duration.

    Multiple Interaction

    Interaction Components can allow multiple interactions simultaneously. A configuration Boolean on the Interaction Component named bMultipleInteraction controls whether the component allows simultaneous interactions or only one interaction at a time. This does not apply to Instant interactions, as those are completed instantly.

    Condition Based Interaction

    Class: IInteractionInterface

    At times, custom conditions are required to be met before starting an interaction, for example a lock system on a chest or team-only buildings and equipment. In order to handle such custom conditions, both the Interactor and Interaction Components will execute an interface call on their owners, ICanInteractWith(Actor* OtherOwner), passing the other party's actor; this interface then returns a Boolean determining whether the interaction can be initiated or not. However, this does not mean that the interface always has to be implemented on the owner even when custom conditions are not required: the components will simply ignore the interface call if the owner does not implement it.

    E.g.: in order to implement a team-only chest, we simply add an Interaction Interface to the chest actor, override the ICanInteractWith function in the interface tab, and inside the function get the team id of the OtherOwner and return true if the team id is equal to the chest's team id.

    Example of Interaction Interface

    Interaction Result

    Both the Interaction and Interactor Components implement Interaction Results and State. Interaction Results are enums that are meant to provide more information during the interaction process. These results are broadcast through delegates from the moment an interaction is initiated up to its completion.

    EInteractionResult:

    • None [IR_None]: Unknown Result
    • Started [IR_Started]: Interaction Started
    • Successful [IR_Successful]: Interaction Successfully Completed
    • Failed [IR_Failed]: Interaction Failed due to Conditions Returning False
    • Interrupted [IR_Interrupted]: Interaction interrupted because the player pawn or character looked away or went out of reach during the interaction. Interruptions also happen when an Interaction or Interactor Component is removed during the interaction process.

    These Interaction Results are received and broadcasted on Both Components through delegates (Blueprint Event Dispatcher).

    • Interaction Component: OnInteractionStateChanged
    • Interactor Component: OnInteractorStateChanged

    In networked environments, these results are sent to the clients through Remote Procedure Calls (RPCs). In some cases this information may not be relevant to all the clients, while in other cases all the clients should be aware of these results. This can be controlled and configured on each component by changing the InteractorStateNetMode or InteractionStateNetMode.

    • None : None of the Clients Receive the Result Update
    • OwnerOnly : Only the Local Owner of the Component Will Receive the Update
    • All : All Clients With this Instance of the Component Will Receive the Update

    Interaction Focus

    It is important to be able to notify and inform the player of an interactive object, or even show an interaction widget (Press E to Interact). This can be easily implemented by binding/listening to any of these delegates.

    OnNewInteraction: This Delegate is broadcasted by the Interactor Component when a New Interactive object comes into the focus or leaves the reach of the player.

    OnInteractionFocusChanged: Delegate Implemented by the Interaction Component. Broadcasted whenever the interaction object comes into the focus of a player.

    Interaction Direction

    In Some Cases, the direction of the interaction is important. Some Interactive Objects may require the Players to look at the face of the object in order to be able to interact. But for other interactive objects this may not be a requirement. This behavior can be configured on each Interaction Component Config setting by changing the Boolean variable named OnlyFaceInteraction. Setting this variable to true will require the player to look at the face of the object.

    Showcase

    You can Download the Showcase Level here.

    Author

    Contact me on Twitter.

    Visit original content creator repository https://github.com/Amirans/InteractionPlugin
  • ftkv

    ftkv

    A strongly consistent distributed key-value storage system based on the Raft consensus algorithm, providing a reliable way to store data that needs to be accessed by a distributed system or a cluster of machines. It can perform leader election during network partitions and can tolerate machine failures. The system consists of three parts: Server, Client, and Router. The Router is the global control center and manages data sharding across the Raft Groups. Both Server and Router are replicated with Raft. The Client supports basic failover.

    Note: The main branch may be in an unstable or even broken state during development. For stable versions, see releases.

    Background

    Although this raft implementation covers most of the features described in the paper, such as leader election, log consensus, and log compaction, and kvservice can use raft to achieve strong consistency and high availability, it still has shortcomings: the raft does not implement real persistence, the RPCs it uses are not real RPC calls, kvservice does not support deletion, and so on. With the goal of turning kvservice into a genuinely usable key-value storage system, ftkv was born.

    System Architecture

    kvservice_diagram.jpg

    Installation and Usage

    Prerequisites

    Make sure Go 1.18 or later is installed before running the system; see Download and install for installation guidance. For example, to install Go 1.20.1 on Linux:

    wget https://go.dev/dl/go1.20.1.linux-amd64.tar.gz
    rm -rf /usr/local/go && tar -C /usr/local -xzf go1.20.1.linux-amd64.tar.gz
    # temporary (current shell only)
    export PATH=$PATH:/usr/local/go/bin
    # permanent
    sudo vim ~/.bashrc
    ## add these three lines at the end of the file
    export GOPATH=$HOME/gopath
    export GOROOT=/usr/local/go
    export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
    ## reload the configuration
    source ~/.bashrc
    go version
    1. Run the following command to clone this repository
    git clone https://github.com/chong-chonga/ftkv.git
    2. After cloning, you will have a folder named ftkv; use cd to enter the main folder
    cd ftkv/main/
    3. Run the following command to start three KVServers locally that reach consensus with each other via the raft algorithm; basic functionality can be tested interactively on the command line
    go run main.go

    Development Notes

    Many issues came up while developing FT-KVService; below are my thoughts.

    Idempotent Interfaces

    The Raft paper mentions implementing idempotent interfaces. Among the four operations Get, Put, Append, and Delete, Append is not idempotent. Because of the inherent unreliability of distributed systems, clients may retry requests, so the lab requires us to detect duplicate requests and ensure that a request is executed only once. This is essentially designing an idempotent interface, a requirement that often appears in "button" scenarios such as placing an order, paying, or a browser refresh resubmitting a form. An idempotent interface is fairly easy to design in a single-machine system: the server can generate an identifier before the request is submitted and delete it after the request has been executed, indicating that the request has already been handled. In a distributed system, the uniqueness of the identifier must also be considered. To detect duplicates, each request needs a unique identifier. A client issues many requests, so a globally unique client identifier plus a sequence number can be used to identify each request. Concretely, the client obtains a globally unique identifier before issuing requests, and in subsequent requests it labels each request with its own monotonically increasing sequence number. This way an identifier only has to be generated once at the beginning, and the subsequent sequence numbers are generated by the client itself.

    There are several common ways to generate a globally unique identifier:

    1. A UUID plus a timestamp or random number
    2. The snowflake algorithm
    3. A distributed ID generator

    The first approach cannot guarantee global uniqueness, especially in a distributed system. The second approach is common and is not considered here. Raft can be used to build a fault-tolerant distributed ID generator: the client first sends a request to the ID generator, the ID generator commits the request through the Raft consensus algorithm, increments the ID, and returns it to the client. With an int64 ID, 2^63-1 different IDs can be generated. This solves the problem of generating a globally unique identifier for each client. Since the client's sequence numbers are also increasing, the server can use the client ID plus the request sequence number to decide whether a request is a duplicate: if the sequence number is not greater than the recorded sequence number, the request is a duplicate. However, things get more complicated when a client sends several requests with different sequence numbers at the same time (say sequence numbers 1-5).

    1. Execute the requests whenever they arrive. Advantage: the client may send multiple requests concurrently. Disadvantage: the server has to record every request sequence number of every client, which costs a lot of memory.
    2. The server processes requests strictly in increasing order; a later request has to wait until the earlier ones have completed. Advantage: for each client only the sequence number of the last executed request needs to be recorded, so the memory overhead is small. Disadvantage: the client may only send one request at a time.

    Therefore lab3 makes the assumption that "a client sends only one request at a time, and keeps retrying if the request fails." The goal of ftkv is to provide Redis-like functionality such as Get, Set, and Delete, without Append. Get, Set, and Delete are idempotent by themselves, so no idempotent interface is needed for them. The Router's Join and Leave are not idempotent, but they are basically used by system administrators, are invoked rarely, and have strong idempotency requirements, so they use an idempotent design.
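
    A minimal sketch, not taken from the ftkv codebase, of the duplicate detection described above based on a client ID plus a monotonically increasing sequence number; the type and method names are hypothetical.

    import "sync"

    // dedupTable records, per client, the highest sequence number that has been applied.
    // It assumes each client sends one request at a time with an increasing sequence number.
    type dedupTable struct {
    	mu      sync.Mutex
    	lastSeq map[int64]int64 // clientID -> sequence number of the last applied request
    }

    func newDedupTable() *dedupTable {
    	return &dedupTable{lastSeq: make(map[int64]int64)}
    }

    // shouldApply reports whether the request is new; a request whose sequence number is
    // not greater than the recorded one is a duplicate and must not be executed again.
    func (t *dedupTable) shouldApply(clientID, seq int64) bool {
    	t.mu.Lock()
    	defer t.mu.Unlock()
    	if seq <= t.lastSeq[clientID] {
    		return false
    	}
    	t.lastSeq[clientID] = seq
    	return true
    }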

    Server Security

    • Motivation: we do not want our KVServer to be accessible to just anyone, so we want to add an authentication layer so that only requests from qualified clients are processed. The natural idea is to set a password on the KVServer: only a client that provides the correct password counts as an authenticated client. After authentication, the client is assigned a unique identifier, and subsequent requests carry this identifier to show that the client has already been authenticated (because we do not want every request to carry the password). Following Redis's approach, the password is stored in a configuration file; the KVServer reads it on startup and checks it against the password provided by the client.
    • Requirement: the identifier should have no pattern, so that the correct identifier is hard to obtain by brute force, and the generated client identifier must be unique across the distributed system.
    • Approach: the SessionId in a B/S architecture is a good reference. We can generate a unique SessionId (a random string such as a UUID) for each authenticated client and validate the session using the SessionId parameter the client provides. In a distributed setting, however, even a timestamp-based UUID generated on the same machine can collide. The usual strategies are the snowflake algorithm or a distributed lock. With the strong consistency guarantee that Raft provides, we can reach consensus on an identifier, but the identifier itself may still collide. We could return a result for a duplicate identifier indicating that it must be regenerated, but that wastes one round of consensus. An alternative is to use consensus to generate a unique int64 integer in the cluster, use that integer as the SessionId prefix and a UUID as the suffix, and join the two with a non-digit character. A collision in the UUID then does not matter, because the prefixes are always different. A minimal sketch of this composition follows.
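
    A minimal sketch of the SessionId composition just described, assuming the int64 prefix has already been agreed on through Raft; the function name is hypothetical and the UUID comes from the third-party github.com/google/uuid package.

    import (
    	"fmt"

    	"github.com/google/uuid"
    )

    // newSessionId joins a cluster-unique int64 prefix (obtained from a committed Raft
    // command) with a locally generated UUID using a non-digit separator. Even if two
    // nodes happen to generate the same UUID, the prefixes differ, so the SessionIds never collide.
    func newSessionId(prefix int64) string {
    	return fmt.Sprintf("%d-%s", prefix, uuid.NewString())
    }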

    A detail: does the data recording SessionIds need to be persisted (written into the snapshot)?

    It does not have to be persisted; in other words, the Server's record of SessionIds may be lost. Here is my reasoning:

    First consider the single-Server case. When the Server crashes and restarts, it reads the log and replays it, so communication between Client and Server falls into the following cases:

    1. The Client cannot communicate with the Server, so the Client discards its current SessionId.
    2. The Client sends a request to the Server and, after replaying the log, the Server still has a record of the SessionId, so the Server can handle the Client's request (as if the Server had never crashed).
    3. Otherwise the SessionId is invalid, the Server rejects the Client's request, and the Client discards its current SessionId as well.

    Even with persistence, the same three cases can occur (if the Client sends a request before the Server has fully replayed its log, the SessionId is still invalid). When the Server crashes, the connection between Client and Server is also broken and the RPC call fails immediately; could that alone be used to invalidate the SessionId? Invalidated SessionIds are cleaned up periodically by a goroutine.

    Now consider the cluster case. A cluster is a bit more complicated than a single host, but one thing is clear: only the Leader can handle Client requests, and the Leader has the most complete log in the cluster. Given the strong consistency that Raft provides, if the Leader has not changed, a request the Client sends to the Leader behaves exactly as in the single-Server case. If the current Leader crashes, the Client finds the new Leader, whose log is at least as up to date as the previous Leader's, so the situation is again the same as the single-Server case.

    Thinking one step further: if we really do want to guarantee that the generated UUID is unique, can that be done with the existing Raft?

    Failure Model

    The KVServer relies on the Raft consensus algorithm to achieve strong consistency and to survive network partitions, crashes, and so on. The KVServer exposes interfaces for clients to call via RPC. KVServers normally run as a cluster, and according to the Raft consensus algorithm only the Leader of the cluster can handle requests, so an RPC call to a KVServer is quite likely to fail at first. The client therefore needs some wrapping to make the KVServer easier to use, and while wrapping the Client we need to give its users a consistent error model. An RPC call to a KVServer can end in the following ways:

    1. The KVServer is down or the Client cannot connect to it, so the RPC call gets no response.
    2. The KVServer is not the Leader.
    3. The KVServer is the Leader and has submitted the client's command, but due to network delays or other reasons the command does not finish executing for a long time (i.e. consensus is not reached).
    4. The client's request is executed successfully.

    When there are multiple KVServers, in the first and second cases the Client should try the other KVServers, and should only conclude that the service has failed when no Leader can be found after calling all of them. In the third case, the client's request may or may not have been executed; from the client's point of view, a command failing to reach consensus looks the same as network delay, so whether the request was executed is also uncertain for the client.
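
    A minimal sketch, not the actual ftkv client, of the failover behaviour described above for the first two cases; the server interface and all names here are hypothetical.

    import "errors"

    // server is a hypothetical stand-in for a KVServer RPC stub; Call returns an error
    // both when the server is unreachable and when it reports that it is not the leader.
    type server interface {
    	Call(op string) (reply string, err error)
    }

    // tryServers walks the server list starting from the last known leader and only
    // reports failure after every server is unreachable or denies being the leader.
    func tryServers(servers []server, leaderHint int, op string) (string, int, error) {
    	for i := 0; i < len(servers); i++ {
    		idx := (leaderHint + i) % len(servers)
    		reply, err := servers[idx].Call(op)
    		if err != nil {
    			// cases 1 and 2: no response or wrong leader; try the next server
    			continue
    		}
    		// case 4: success; remember idx as the leader hint for future calls
    		return reply, idx, nil
    	}
    	return "", leaderHint, errors.New("no leader reachable")
    }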

    What could be improved?

    From a whole-system perspective, raft and the service above it form a Producer-Consumer relationship: raft is responsible for consistently replicating the log and producing committed log entries for the service, while the service consumes the committed log entries and executes the corresponding operations. In addition, raft persists the log and snapshots, so in a sense raft also acts as a storage engine. Optimization can therefore start from two directions: producing messages and persisting messages.

    raft layer

    1. Committing a log entry is slow: according to the raft consensus algorithm, it takes at least two rounds of RPC for a log entry to go from submission to commit. If we rely on the leader sending an AppendEntries RPC every 100ms, this may take around 200ms; if instead an AppendEntries RPC is started as soon as a new log entry arrives, the RPC overhead becomes large when entries are submitted frequently. On the RPC side we can therefore batch log entries, for example starting an RPC only after a certain number of entries have accumulated. This is similar in spirit to an OS buffer, and producers in message queues such as Kafka apply the same optimization; system designs echo each other (+1). A sketch of this batching idea appears after this list.
    2. Log memory reuse: Go's memory is managed automatically, so garbage collection takes place. The log-truncation step during snapshotting can therefore reuse the log's memory instead of releasing it.
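
    A minimal sketch of the batching idea in item 1, not taken from the ftkv raft code; all names are hypothetical. New-entry notifications are collected for a short window (or until a batch is full) before one replication round is triggered.

    import "time"

    // batchLoop waits for new-entry notifications, gathers them for up to `window`
    // (or until maxBatch entries have accumulated), then triggers one AppendEntries round.
    func batchLoop(newEntry <-chan struct{}, replicate func(), maxBatch int, window time.Duration) {
    	for range newEntry {
    		pending := 1
    		timer := time.NewTimer(window)
    	collect:
    		for pending < maxBatch {
    			select {
    			case <-newEntry:
    				pending++
    			case <-timer.C:
    				break collect
    			}
    		}
    		timer.Stop()
    		replicate() // one AppendEntries RPC carries the whole batch
    	}
    }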

    RPC layer

    RPC calls can be viewed as messages, so the messages can be compressed to reduce RPC overhead. The gRPC framework also provides efficient serialization.

    Storage layer

    raft's persistence of the log and snapshots does not need to care about lookup efficiency, so a B+Tree layout is unnecessary; and since raft's persistence is mostly about data reliability, an LSM-Tree layout is not needed either. However, the data can be sharded, with each raft group storing part of the data, to improve system throughput.

    Ideas borrowed from other systems

    • Message queues such as Kafka are also optimized along the three stages of a message queue: produce, store, consume. For producing messages they use batching, compression, serialization, and memory reuse. For storing messages, a topic is split into multiple partitions, each partition resembling a raft group responsible for part of the data; a partition's data is further divided into segments, each storing part of the data (somewhat like ConcurrentHashMap, isn't it?).
    • In-memory NoSQL databases such as Redis provide a Cluster mode with data sharding, event listening via I/O multiplexing, background persistence based on copy-on-write, and more.
    • When data volumes are large, MySQL applies vertical splitting of columns and horizontal splitting of rows. Also, MySQL, Kafka, and Redis's Sentinel mode all replicate with a Primary-Backup scheme to improve availability; system designs echo each other (+2).

    Ideas put into practice

    1. A producer-consumer model based on the log's index and term

    When it comes to handling requests, a Raft-based KVServer is very different from a traditional KVServer: the KVServer has to wait for the command to reach consensus before it can execute the request. The KVServer wraps the Client's request as a Command and submits it to Raft, and Raft sends the Commands that have reached consensus back to the KVServer through a channel. This raises a question: for each request, how does the KVServer know whether that request was executed successfully? In other words, how do we match a log entry received from the channel with the log entry that was submitted? When the service submits a Command to Raft, Raft wraps the Command into a log entry and returns that entry's index and term; according to the Raft consensus algorithm, the index and term determine a log entry uniquely. (Figure: img.png, from the Raft lecture (Raft user study).)

    Why do index and term uniquely determine a log entry?

    Because when a follower receives the leader's AppendEntries RPC for log replication, it checks whether the term of its log entry at PrevLogIndex matches the leader's; if not, the follower rejects the request, and based on the information the follower returns, the leader decides whether to send a snapshot or to decrease PrevLogIndex. See the following Raft code:

    func (rf *Raft) AppendEntries(args *AppendEntriesArgs, reply *AppendEntriesReply) error {
    	idx := 0
    	i := 0
    	//prevLogIndex := args.PrevLogIndex - rf.lastIncludedIndex - 1
    	offset := args.PrevLogIndex - rf.lastIncludedIndex
    	if offset > 0 {
    		// offset > 0: compare the term of the offset-th log entry; subtracting 1 compensates for array indexing, which is also why lastIncludedIndex is initialized to -1
    		offset -= 1
    		// if term of log entry in prevLogIndex not match prevLogTerm
    		// set XTerm to term of the log
    		// set XIndex to the first entry in XTerm
    		// reply false (§5.3)
    		if rf.log[offset].Term != args.PrevLogTerm {
    			reply.XTerm = rf.log[offset].Term
    			for offset > 0 {
    				if rf.log[offset-1].Term != reply.XTerm {
    					break
    				}
    				offset--
    			}
    			reply.XIndex = offset + rf.lastIncludedIndex + 1
    			rf.resetTimeout()
    			return nil
    		}
    		// match, set i to prevLogIndex + 1, prepare for comparing the following logs
    		i = offset + 1
    	} else {
    		// offset <= 0: the log entry is covered by the snapshot, so add the offset to idx and compare the log entries from idx onward
    		idx -= offset
    	}
    }

    The KVServer can therefore wait, keyed by index, for the signal that the request has finished; if the term of the command sent back does not match the one being waited for, the waited-for command did not reach consensus. Here I still use Go channels (chan), and the KVServer uses a map to record the waiting channels (a map is convenient to use). The KVServer's routine for handling commands sent back by Raft is as follows:

    // startApply listen to the log sent from applyCh and execute the corresponding command.
    func (kv *KVServer) startApply() {
    	for {
    		msg := <- kv.applyCh
    		if msg.CommandValid {
    			op := msg.Command.(Op)
    			commandType := op.OpType
    			requestId := op.RequestId
    			result := ApplyResult{
    				Term: msg.CommandTerm,
    			}
    			// ...
    			// ...
    			if pb.Op_PUT == commandType {
    				kv.tab[op.Key] = op.Value
    			} else if pb.Op_APPEND == commandType {
    				v := kv.tab[op.Key]
    				v += op.Value
    				kv.tab[op.Key] = v
    			} else if pb.Op_DELETE == commandType {
    				delete(kv.tab, op.Key)
    			} else if GET != commandType {
    			}
    			kv.commitIndex = msg.CommandIndex
    			if ch, _ := kv.replyChan[kv.commitIndex]; ch != nil {
    				ch <- result
    				close(ch)
    				delete(kv.replyChan, kv.commitIndex)
    			}
    			// ...
    			kv.mu.Unlock()
    		} else if msg.SnapshotValid {
    			// snapshot...
    		} else {
    			log.Fatalf("[%d] receive unknown type log!", kv.me)
    		}
    	}
    }
    1. Receive commands from applyCh and perform the corresponding operation according to the command type.
    2. Check whether a channel is waiting on the corresponding index; if so, send back the ApplyResult (which contains the Term), then delete the entry from the map and finally close the channel.

    The command-handling flow has been revised many times. Both versions receive the signal through a local variable ch rather than reading the channel out of the map again (which makes it easy to clean unused channels out of the map). Only the Leader can submit a request; after submitting one, the corresponding channel is set up and the calling thread waits until it times out.

    First version

    func (kv *KVServer) submit(op Op) (*ApplyResult, pb.ErrCode) {
    	commandIndex, commandTerm, isLeader := kv.rf.Start(op)
    	if !isLeader {
    		return nil, pb.ErrCode_WRONG_LEADER
    	}
    
    	kv.mu.Lock()
    	if c, _ := kv.replyChan[commandIndex]; c != nil {
    		kv.mu.Unlock()
    		return nil, pb.ErrCode_TIMEOUT
    	}
    	ch := make(chan ApplyResult, 1)
    	kv.replyChan[commandIndex] = ch
    	kv.mu.Unlock()
    
    	var res ApplyResult
    	select {
    	case res = <-ch:
    		break
    	case <-time.After(RequestTimeout):
    		kv.mu.Lock()
    		if _, ok := kv.replyChan[commandIndex]; !ok {
    			// the entry was already removed by startApply: a result is buffered in ch, read it
    			kv.mu.Unlock()
    			res = <-ch
    			break
    		}
    		delete(kv.replyChan, commandIndex)
    		kv.mu.Unlock()
    		close(ch)
    		return nil, pb.ErrCode_TIMEOUT
    	}
    	if res.Term == commandTerm {
    		return &res, pb.ErrCode_OK
    	} else {
    		return nil, pb.ErrCode_WRONG_LEADER
    	}
    }

    There are two problems:

    1. The timeout should not be decided by the KVServer; it should be decided by the Client.
    2. The code c, _ := kv.replyChan[commandIndex]; c != nil has a bug.

    When several Clients submit requests to the same Leader, can they get the same commandIndex? Suppose leader1 is the leader in term1 and a network partition occurs (the network between Servers fails) with leader1 not in the majority partition (it cannot communicate with most of the Servers). leader1 still believes it is the leader (meanwhile the majority partition has elected leader2 in term2, with term2 > term1) and keeps submitting client requests; clearly, these requests will never commit. After leader2 has committed some commands, communication with leader1 recovers. Following the Raft consensus algorithm, leader1 trims the log entries that conflict with leader2's and appends leader2's entries. If fewer entries are appended than were trimmed, leader1's log becomes shorter. If leader1 then becomes leader again in term3, the same commandIndex can occur. As soon as this happens, we know the earlier client command can never commit; at that point it is enough to send back a result (whose term is necessarily greater than the term being waited on).

    Second version

    func (kv *KVServer) submit(op Op) (*ApplyResult, pb.ErrCode) {
    	commandIndex, commandTerm, isLeader := kv.rf.Start(op)
    	if !isLeader {
    		return nil, pb.ErrCode_WRONG_LEADER
    	}
    	kv.mu.Lock()
    	if c, _ := kv.replyChan[commandIndex]; c != nil {
    		c <- ApplyResult{Term: commandTerm}
    		close(c)
    	}
    	ch := make(chan ApplyResult, 1)
    	kv.replyChan[commandIndex] = ch
    	kv.mu.Unlock()
    
    	res := <-ch
    	if res.Term == commandTerm {
    		return &res, pb.ErrCode_OK
    	} else {
    		return nil, pb.ErrCode_WRONG_LEADER
    	}
    }

    2. Atomic persistence

    In Lab 1: MapReduce, the Worker needs to persist its data at the end of the map and reduce operations. The persistence flow is as follows:

    1. Create a temporary file
    2. Write the data to the temporary file
    3. Use the Rename system call to rename the temporary file to the target file

    This flow makes the overwrite atomic, guaranteeing that the write to a single file is an atomic operation. In fact, the RDB persistence of Redis, a widely used KV database, works the same way. The RDB persistence code (version 5.0) is in rdb.c; the rdbSave source is as follows:

    /* Save the DB on disk. Return C_ERR on error, C_OK on success. */
    int rdbSave(char *filename, rdbSaveInfo *rsi) {
        char tmpfile[256];
        char cwd[MAXPATHLEN]; /* Current working dir path for error messages. */
        FILE *fp;
        rio rdb;
        int error = 0;
    
        snprintf(tmpfile,256,"temp-%d.rdb", (int) getpid());
        fp = fopen(tmpfile,"w");
        if (!fp) {
            char *cwdp = getcwd(cwd,MAXPATHLEN);
            serverLog(LL_WARNING,
                "Failed opening the RDB file %s (in server root dir %s) "
                "for saving: %s",
                filename,
                cwdp ? cwdp : "unknown",
                strerror(errno));
            return C_ERR;
        }
    
        rioInitWithFile(&rdb,fp);
    
        if (server.rdb_save_incremental_fsync)
            rioSetAutoSync(&rdb,REDIS_AUTOSYNC_BYTES);
    
        if (rdbSaveRio(&rdb,&error,RDB_SAVE_NONE,rsi) == C_ERR) {
            errno = error;
            goto werr;
        }
    
        /* Make sure data will not remain on the OS's output buffers */
        if (fflush(fp) == EOF) goto werr;
        if (fsync(fileno(fp)) == -1) goto werr;
        if (fclose(fp) == EOF) goto werr;
    
        /* Use RENAME to make sure the DB file is changed atomically only
         * if the generate DB file is ok. */
        if (rename(tmpfile,filename) == -1) {
            char *cwdp = getcwd(cwd,MAXPATHLEN);
            serverLog(LL_WARNING,
                "Error moving temp DB file %s on the final "
                "destination %s (in server root dir %s): %s",
                tmpfile,
                filename,
                cwdp ? cwdp : "unknown",
                strerror(errno));
            unlink(tmpfile);
            return C_ERR;
        }
    
        serverLog(LL_NOTICE,"DB saved on disk");
        server.dirty = 0;
        server.lastsave = time(NULL);
        server.lastbgsave_status = C_OK;
        return C_OK;
    
    werr:
        serverLog(LL_WARNING,"Write error saving DB on disk: %s", strerror(errno));
        fclose(fp);
        unlink(tmpfile);
        return C_ERR;
    }

    As this source shows, Redis RDB persistence also first creates a temporary file, then calls rioInitWithFile to initialize the rio and decides from the configuration whether to enable automatic fsync. It then calls rdbSaveRio to do the actual persistence, flushes the data to disk and closes the file, and finally calls rename to rename the file to the target name, dump.rdb by default.

    FaultTolerantKVService therefore persists the raft state and snapshot in the same way.

    Raft has to persist data in two situations:

    1. When raft's own state changes, persist the raft state; this happens frequently.
    2. When raft installs a snapshot, persist both the raft state and the snapshot; this is relatively rare. Both operations must be atomic and must wait until the data has actually been written to disk; the second in particular must keep the raft state and the snapshot consistent.

    Since there are two kinds of data to persist, another question arises: should the data be stored in one file or in several files?

    • Single file: storing the raft state and the snapshot together guarantees the atomicity of every write, with no risk of a crash in the middle of writing one of several files leaving them inconsistent. The drawback is that each write is larger, because every persistence operation also writes the snapshot; when the snapshot is large and writes are frequent, the write cost becomes high, so the snapshot size has to be controlled.

    • Multiple files: when only the raft state is persisted, write one file; when the raft state and the snapshot are persisted together, write another file. To determine which file holds the newer raft state, a version number can be written with the data. The benefit is that unnecessary data is not written, so the snapshot size does not affect the speed of writing the raft state. The drawbacks are higher persistence complexity, and reading the raft state requires reading both files to determine which version is larger.

    Of the time spent writing data to disk, the data-transfer time is usually a small fraction; seek time and rotational latency dominate. When the amount of data written is small, it is all the more important that the writes be sequential; writing to several different files may cost more than writing a single file. But in the two schemes above, each write opens and writes only one file, so their seek time and rotational latency can be considered the same, and the difference lies in the amount of data written. I therefore chose the second scheme for raft's persistence. For even higher speed one can follow FaRM's example and keep everything in NVDRAM, with a backup power supply dumping the data to SSD when DRAM power fails. Given the speed gap between DRAM and SSD, using DRAM is indeed very fast, but not everyone has NVDRAM storage, so it is only worth adopting when performance really matters. The code that persists the raft state and snapshot is as follows:

    type errWriter struct {
    	file *os.File
    	e    error
    	wr   *bufio.Writer
    }
    
    func newErrWriter(file *os.File) *errWriter {
    	return &errWriter{
    		file: file,
    		wr:   bufio.NewWriter(file),
    	}
    }
    
    func (ew *errWriter) write(p []byte) {
    	if ew.e == nil {
    		_, ew.e = ew.wr.Write(p)
    	}
    }
    
    func (ew *errWriter) writeString(s string) {
    	if ew.e == nil {
    		_, ew.e = ew.wr.WriteString(s)
    	}
    }
    
    func clone(data []byte) []byte {
    	d := make([]byte, len(data))
    	copy(d, data)
    	return d
    }
    
    // atomicOverwrite write the buffered data to disk and overwrite the file corresponding to the path
    func (ew *errWriter) atomicOverwrite(path string) error {
    	err := ew.e
    	if err != nil {
    		return err
    	}
    	err = ew.wr.Flush()
    	if err != nil {
    		return err
    	}
    	err = ew.file.Sync()
    	if err != nil {
    		return err
    	}
    	// close will return an error if it has already been called, ignore
    	_ = ew.file.Close()
    	err = os.Rename(ew.file.Name(), path)
    	if err != nil {
    		// deletion failure will not affect, just ignore
    		_ = os.Remove(ew.file.Name())
    	}
    	return err
    }
    
    // SaveStateAndSnapshot save both Raft state and K/V snapshot as a single atomic action
    // to keep them consistent.
    func (ps *Storage) SaveStateAndSnapshot(state []byte, snapshot []byte) error {
    	tmpFile, err := os.CreateTemp("", "raft*.rfs")
    	if err != nil {
    		return &StorageError{Op: "save", Target: "raft state and snapshot", Err: err}
    	}
    	writer := newErrWriter(tmpFile)
    	writer.writeString(fileHeader)
    	ps.writeRaftState(writer, state)
    	ps.writeSnapshot(writer, snapshot)
    	err = writer.atomicOverwrite(ps.snapshotPath)
    	if err != nil {
    		return &StorageError{Op: "save", Target: "raft state and snapshot", Err: err}
    	}
    	ps.raftState = clone(state)
    	ps.snapshot = clone(snapshot)
    	return nil
    }
    
    func (ps *Storage) writeRaftState(writer *errWriter, state []byte) {
    	writer.writeString(strconv.FormatInt(ps.nextRaftStateVersion, 10) + "\t")
    	raftStateSize := len(state)
    	writer.writeString(strconv.Itoa(raftStateSize) + "\t")
    	if raftStateSize > 0 {
    		writer.write(state)
    	}
    	ps.nextRaftStateVersion++
    }
    
    func (ps *Storage) writeSnapshot(writer *errWriter, snapshot []byte) {
    	snapshotSize := len(snapshot)
    	writer.writeString(strconv.Itoa(snapshotSize) + "\t")
    	if snapshotSize > 0 {
    		writer.write(snapshot)
    	}
    }

    The SaveStateAndSnapshot method also creates a temporary file first, writes the file header "RAFT", then writes the raft state's version number, size, and data, followed by the snapshot's size and data; finally it flushes the data to disk and uses os.Rename to rename the temporary file to the target file name.

    During this persistence process every write may return an error, so the Writer is wrapped as an errWriter, which lets the errors be handled at the end instead of being checked after every write. This error-handling idea comes from the official Go blog: errors-are-values.

    Comparison with Redis persistence

    When it comes to key-value storage systems, Redis has to be mentioned. As we know, Redis has two persistence mechanisms, RDB and AOF: RDB saves all of Redis's data to a file, while AOF saves the write commands Redis executes to a file. Every RDB write is a full write of all the data, so as the data grows, both the time it takes and the file size grow.

    • Why doesn't Raft use append-only writes? AOF saves write commands by appending; note that Redis does not write the log before executing a command as a write-ahead log would, but only after the command has executed (see the call function in server.c). Clearly, Raft's log could be persisted with the same append-only approach, which would avoid rewriting the whole file every time; but since Raft also has fields such as votedFor and currentTerm that must be saved together with the log, this append-only approach is not a good fit.
    • When does Raft flush to disk? We know that Redis is a high-performance key-value database, and we also know that writing data to disk is very slow. Redis therefore first writes the command log into aof_buf and then writes aof_buf to disk. Redis provides three flush policies: always, everysec, and no; the default is everysec, i.e. aof_buf is flushed to disk once per second, a compromise between performance and reliability (see the flushAppendOnlyFile function in aof.c). The Raft consensus algorithm requires the log to be on disk before the response is returned, so it adopts something like the always policy.

    References

    If you want to learn more about the Raft consensus algorithm, I suggest not starting with Zhihu posts or blogs; watch the original author's Raft lecture (Raft user study) first. If you also want to learn about the Paxos consensus algorithm, you can watch his Paxos lecture (Raft user study) as well. From my own experience: a certain messy blog post explaining the Paxos algorithm had content remarkably similar to the original author's video, and on top of that the content was wrong; after reading it I had misconceptions about the algorithm itself, and it was only after watching the original author's video that I got it straight.

    Reading the Redis 5.0 source code is a great help in understanding how a real key-value database is designed: redis5.0

    Visit original content creator repository https://github.com/chong-chonga/ftkv
  • Project—Books-Management

    Project – Books Management

    Key Points

    • Create a group database groupXDatabase. You can clean the db you previously used and reuse that.
    • This time each group should have a single git branch. Coordinate amongst yourselves by ensuring every next person pulls the code last pushed by a teammate. Your branch will be checked as part of the demo. The branch name should follow the naming convention project/booksManagementGroupX
    • Follow the naming conventions exactly as instructed.

    Models

    • User Model

    { 
      title: {string, mandatory, enum[Mr, Mrs, Miss]},
      name: {string, mandatory},
      phone: {string, mandatory, unique},
      email: {string, mandatory, valid email, unique},
       password: {string, mandatory, minLen 8, maxLen 15},
      address: {
    
        street: {string},
        city: {string},
        pincode: {string}
      },
      createdAt: {timestamp},
      updatedAt: {timestamp}
    }
    • Books Model

    { 
      title: {string, mandatory, unique},
      excerpt: {string, mandatory}, 
      userId: {ObjectId, mandatory, refs to user model},
      ISBN: {string, mandatory, unique},
      category: {string, mandatory},
      subcategory: {string, mandatory},
      reviews: {number, default: 0, comment: Holds number of reviews of this book},
      deletedAt: {Date, when the document is deleted}, 
      isDeleted: {boolean, default: false},
      releasedAt: {Date, mandatory},
      createdAt: {timestamp},
      updatedAt: {timestamp},
    }
    • Review Model (Books review)

    {
      bookId: {ObjectId, mandatory, refs to book model},
      reviewedBy: {string, mandatory, default 'Guest', value: reviewer's name},
      reviewedAt: {Date, mandatory},
      rating: {number, min 1, max 5, mandatory},
      review: {string, optional}
    }

    User APIs

    POST /register

    • Create a user – at least 5 users
    • Create a user document from request body.
    • Return HTTP status 201 on a successful user creation. Also return the user document. The response should be a JSON object like this
    • Return HTTP status 400 if no params or invalid params received in request body. The response should be a JSON object like this

    POST /login

    • Allow an user to login with their email and password.
    • On a successful login attempt return a JWT token containing the userId, exp, iat. The response should be a JSON object like this
    • If the credentials are incorrect return a suitable error message with a valid HTTP status code. The response should be a JSON object like this

    Books API

    POST /books

    • Create a book document from request body. Get userId in request body only.
    • Make sure the userId is a valid userId by checking the user exist in the users collection.
    • Return HTTP status 201 on a successful book creation. Also return the book document. The response should be a JSON object like this
    • Create at least 10 books for each user
    • Return HTTP status 400 for an invalid request with a response body like this

    GET /books

    • Returns all books in the collection that aren’t deleted. Return only book _id, title, excerpt, userId, category, releasedAt, reviews field. Response example here
    • Return the HTTP status 200 if any documents are found. The response structure should be like this
    • If no documents are found then return an HTTP status 404 with a response like this
    • Filter books list by applying filters. Query param can have any combination of below filters.
      * By userId
      * By category
      * By subcategory
        Example of a query url: books?filtername=filtervalue&f2=fv2
    • Return all books sorted by book name in alphabetical order

    GET /books/:bookId

    • Returns a book with complete details including reviews. The reviews would be returned in the form of an array. Response example here
    • Return the HTTP status 200 if any documents are found. The response structure should be like this
    • If the book has no reviews then the response body should include book detail as shown here and an empty array for reviewsData.
    • If no documents are found then return an HTTP status 404 with a response like this

    PUT /books/:bookId

    • Update a book by changing its
      • title
      • excerpt
      • release date
      • ISBN
    • Make sure the unique constraints are not violated when making the update
    • Check if the bookId exists (must have isDeleted false and is present in collection). If it doesn’t, return an HTTP status 404 with a response body like this
    • Return an HTTP status 200 if updated successfully with a body like this
    • Also make sure in the response you return the updated book document.

    DELETE /books/:bookId

    • Check if the bookId exists and is not deleted. If it does, mark it deleted and return an HTTP status 200 with a response body with status and message.
    • If the book document doesn’t exist then return an HTTP status of 404 with a body like this

    Review APIs

    POST /books/:bookId/review

    • Add a review for the book in reviews collection.
    • Check if the bookId exists and is not deleted before adding the review. Send an error response with appropriate status code like this if the book does not exist
    • Get review details like review, rating, reviewer’s name in request body.
    • Update the related book document by increasing its review count
    • Return the updated book document with reviews data on successful operation. The response body should be in the form of JSON object like this

    PUT /books/:bookId/review/:reviewId

    • Update the review – review, rating, reviewer’s name.
    • Check if the bookId exists and is not deleted before updating the review. Check if the review exists before updating the review. Send an error response with appropriate status code like this if the book does not exist
    • Get review details like review, rating, reviewer’s name in request body.
    • Return the updated book document with reviews data on successful operation. The response body should be in the form of JSON object like this

    DELETE /books/:bookId/review/:reviewId

    • Check if the review exists with the reviewId. Check if the book exists with the bookId. Send an error response with appropriate status code like this if the book or book review does not exist
    • Delete the related review.
    • Update the books document – decrease review count by one

    Authentication

    • Make sure all the book routes are protected.

    Authorisation

    • Make sure that only the owner of the books is able to create, edit or delete the book.
    • In case of unauthorized access return an appropriate error message.

    Testing

    • To test these APIs, create a new collection in Postman named Project 4 Books Management
    • Each API should have a new request in this collection
    • Each request in the collection should be rightly named. E.g. Create user, Create book, Get books etc
    • Each member of each team should have their tests in running state
    • Refer to the sample below: A Postman collection and request sample

    Response

    Successful Response structure

    {
      status: true,
      message: 'Success',
      data: {
    
      }
    }

    Error Response structure

    {
      status: false,
      message: ""
    }

    Collections

    • Users

    {
      _id: ObjectId("88abc190ef0288abc190ef02"),
      title: "Mr",
      name: "John Doe",
      phone: 9897969594,
      email: "johndoe@mailinator.com", 
      password: "abcd1234567",
      address: {
        street: "110, Ridhi Sidhi Tower",
        city: "Jaipur",
        pincode: 400001
      },
      "createdAt": "2021-09-17T04:25:07.803Z",
      "updatedAt": "2021-09-17T04:25:07.803Z",
    }
    • Books

    {
      "_id": ObjectId("88abc190ef0288abc190ef55"),
      "title": "How to win friends and influence people",
      "excerpt": "book body",
      "userId": ObjectId("88abc190ef0288abc190ef02"),
      "ISBN": "978-0008391331",
      "category": "Book",
      "subcategory": "Non fiction",
      "deleted": false,
      "reviews": 0,
      "deletedAt": "", // if deleted is true deletedAt will have a date 2021-09-17T04:25:07.803Z,
      "releasedAt": "2021-09-17T04:25:07.803Z"
      "createdAt": "2021-09-17T04:25:07.803Z",
      "updatedAt": "2021-09-17T04:25:07.803Z",
    }
    • Reviews

    {
      "_id": ObjectId("88abc190ef0288abc190ef88"),
      bookId: ObjectId("88abc190ef0288abc190ef55"),
      reviewedBy: "Jane Doe",
      reviewedAt: "2021-09-17T04:25:07.803Z",
      rating: 4,
      review: "An exciting nerving thriller. A gripping tale. A must read book."
    }

    Response examples

    Get books response

    {
      status: true,
      message: 'Books list',
      data: [
        {
          "_id": ObjectId("88abc190ef0288abc190ef55"),
          "title": "How to win friends and influence people",
          "excerpt": "book body",
          "userId": ObjectId("88abc190ef0288abc190ef02")
          "category": "Book",
          "reviews": 0,
          "releasedAt": "2021-09-17T04:25:07.803Z"
        },
        {
          "_id": ObjectId("88abc190ef0288abc190ef56"),
          "title": "How to win friends and influence people",
          "excerpt": "book body",
          "userId": ObjectId("88abc190ef0288abc190ef02")
          "category": "Book",
          "reviews": 0,
          "releasedAt": "2021-09-17T04:25:07.803Z"
        }
      ]
    }

    Book details response

    {
      status: true,
      message: 'Books list',
      data: {
        "_id": ObjectId("88abc190ef0288abc190ef55"),
        "title": "How to win friends and influence people",
        "excerpt": "book body",
        "userId": ObjectId("88abc190ef0288abc190ef02")
        "category": "Book",
        "subcategory": "Non fiction", "Self Help"],
        "deleted": false,
        "reviews": 0,
        "deletedAt": "", // if deleted is true deletedAt will have a date 2021-09-17T04:25:07.803Z,
        "releasedAt": "2021-09-17T04:25:07.803Z"
        "createdAt": "2021-09-17T04:25:07.803Z",
        "updatedAt": "2021-09-17T04:25:07.803Z",
        "reviewsData": [
          {
            "_id": ObjectId("88abc190ef0288abc190ef88"),
            bookId: ObjectId("88abc190ef0288abc190ef55"),
            reviewedBy: "Jane Doe",
            reviewedAt: "2021-09-17T04:25:07.803Z",
            rating: 4,
            review: "An exciting nerving thriller. A gripping tale. A must read book."
          },
          {
            "_id": ObjectId("88abc190ef0288abc190ef89"),
            bookId: ObjectId("88abc190ef0288abc190ef55"),
            reviewedBy: "Jane Doe",
            reviewedAt: "2021-09-17T04:25:07.803Z",
            rating: 4,
            review: "An exciting nerving thriller. A gripping tale. A must read book."
          },
          {
            "_id": ObjectId("88abc190ef0288abc190ef90"),
            bookId: ObjectId("88abc190ef0288abc190ef55"),
            reviewedBy: "Jane Doe",
            reviewedAt: "2021-09-17T04:25:07.803Z",
            rating: 4,
            review: "An exciting nerving thriller. A gripping tale. A must read book."
          },
          {
            "_id": ObjectId("88abc190ef0288abc190ef91"),
            bookId: ObjectId("88abc190ef0288abc190ef55"),
            reviewedBy: "Jane Doe",
            reviewedAt: "2021-09-17T04:25:07.803Z",
            rating: 4,
            review: "An exciting nerving thriller. A gripping tale. A must read book."
          }, 
        ]
      }
    }

    Book details response no reviews

     {
      status: true,
      message: 'Books list',
      data: {
        "_id": ObjectId("88abc190ef0288abc190ef55"),
        "title": "How to win friends and influence people",
        "excerpt": "book body",
        "userId": ObjectId("88abc190ef0288abc190ef02")
        "category": "Book",
        "subcategory": "Non fiction", "Self Help"],
        "deleted": false,
        "reviews": 0,
        "deletedAt": "", // if deleted is true deletedAt will have a date 2021-09-17T04:25:07.803Z,
        "releasedAt": "2021-09-17T04:25:07.803Z"
        "createdAt": "2021-09-17T04:25:07.803Z",
        "updatedAt": "2021-09-17T04:25:07.803Z",
        "reviewsData": []
      }
    }

    Visit original content creator repository https://github.com/rav8657/Project—Books-Management

  • awesome-category-theory

    awesome-category-theory

    A curated list of awesome Category Theory resources.

    Table of contents

    Archive

    • Functor theory – Explores the concept of exact categories and the theory of derived functors, building upon earlier work by Buchsbaum. Freyd investigates how properties and statements applicable to abelian groups can extend to arbitrary exact categories. Freyd aims to formalize this observation into a metatheorem, which would simplify categorical proofs and predict lemmas. Peter J. Freyd’s dissertation, presented at Princeton University (1960)

    • Algebra valued functors in general and tensor products in particular – Discusses the concept of valued functors in category theory, particularly focusing on tensor products. Freyd explores the application of algebraic theories in non-standard categories, starting with the question of what constitutes an algebra in the category of sets, using category predicates without elements. The text outlines the axioms of a group using category theory language, emphasizing objects and maps. Peter Freyd (1966)

    • Continuous Yoneda Representation of a small category – Discusses the embedding of a small category A into the category of contravariant functors from A to Set (the category of sets), which preserves inverse limits but does not generally preserve direct limits. Kock introduces a “codensity monad” for any functor from a small category to a left complete category and explores the universal generator for this monad. He demonstrates that the Yoneda embedding followed by this generator provides a full and faithful embedding that is both left and right continuous. Additionally, the relationship with Isbell’s adjoint conjugation functors and the definition of generalized (direct and inverse) limit functors are addressed, by Anders Kock (1966).

    • Abstract universal algebra – Explores advanced subjects in the realm of universal algebra. The core content is organized into two chapters, each addressing different aspects of universal algebra within the framework of category theory. The first chapter introduces the concept of triplable categories, inspired by the theory of modules over a ring, and explores the equivalence between categories of triples in any given category and theories over that category. In the second chapter, Davis shifts focus to equational systems of functors, a more generalized approach to algebra that encompasses both the triplable and structure category theories. Dissertation by Robert Clay Davis (1967)

    • A triple miscellany: some aspects of the theory of algebras over a triple – Explores the field of universal algebra with a particular focus on the concept of algebras over a triple. The work is grounded in the realization that categories of algebras, traditionally defined with finitary operations and satisfying a set of equations, can be extended to include infinitary operations as well, thereby broadening the scope of universal algebra. Manes starts by discussing the conventional understanding of universal algebra, tracing back to G.D. Birkhoff’s definition in the 1930s, and then moves to explore how this definition can be expanded by considering sets with infinitary operations. Dissertation by Ernest Gene Manes (1967)

    • Limit Monads in Categories – The work introduces the notion that the category of complete categories is monadic over the category of all categories, utilizing a family of monads associated with various index categories to define “completeness.” A significant portion of the thesis is dedicated to defining associative and regular associative colimits, arguing for their naturalness and importance in category theory. Dissertation by Anders Jungersen Kock (1967)

    • On the concreteness of certain categories – This work discusses the concept of concreteness in categories, stating that a concrete category is one with a faithful functor to the category of sets, and must be locally-small. He highlights the homotopy category of spaces as a prime example of a non-concrete category, emphasizing its abstract nature due to the irrelevance of individual points within spaces and the inability to distinguish non-homotopic maps through any functor into concrete categories. Peter Freyd (1969)

    • V-localizations and V-triples – This work focuses on two primary objectives within category theory. The first goal is to define and study Y-localizations of Y-categories, using a model akin to localizations in ordinary categories, involving certain conditions related to isomorphisms and the existence of unique Y-functors. The second aim is to explore the relationship between Y-localizations and V-triples, presenting foundational theories and examples to elucidate these concepts. Harvey Eli Wolff’s dissertation (1970)

    • Symmetric closed categories – This work is an in-depth exploration of category theory, focusing on closed categories, monoidal categories, and their symmetric counterparts. It discusses foundational concepts like natural transformations, tensor products, and the structure of morphisms, emphasizing their additional algebraic or topological structures. W. J. de Schipper (1975)

    • Algebraic theories – Covers topics such as the fundamentals of algebraic theories, free models, special theories, the completeness of algebraic categories, and extends to more complex concepts like commutative theories, free theories, and the Kronecker product, among others. The notes also touch on the rings-theories analogy proposed by F. W. Lawvere, suggesting an insightful correlation between rings/modules and algebraic theories/models. Gavin C. Wraith (1975)

    Articles

    Bayesian/Causal inference

    Databases

    • Algebraic databases – Enhances traditional category-theoretic database models to better handle concrete data like integers or strings using multi-sorted algebraic theories by Patrick Schultz, David I. Spivak, Christina Vasilakopoulou and Ryan Wisnesky (2017)
    • Algebraic Model Management: A survey – We survey the field of model management and describe a new model management approach based on algebraic specification by Patrick Schultz, David I. Spivak, and Ryan Wisnesky (2017)
    • Functorial data migration – A database language based on categories and functors, where a schema is depicted as a category and its instance as a set-valued functor by David I. Spivak (2012)

    Data Types

    • Categories of Containers – Introduces containers as a mathematical model of datatypes with templated data storage, demonstrating their robustness under various constructions, including initial algebras and final coalgebras by Michael Abbott, Thorsten Altenkirch and Neil Ghani (2003)

    Deep Learning

    • Backprop as Functor – Describes a category for supervised learning algorithms that search for the best approximation of an ideal function using example data and update rules by Brendan Fong, David I. Spivak, Rémy Tuyéras (2017)
    • Categorical Foundations of Gradient-Based Learning – Categorical interpretation of gradient-based machine learning algorithms using lenses, parametrised maps, and reverse derivative categories (2021)
    • Categories of Differentiable Polynomial Circuits for Machine Learning – Reverse Derivative Categories (RDCs) as a framework for machine learning. We introduce ‘polynomial circuits’ as an apt machine learning model by Paul Wilson, Fabio Zanasi (2022)
    • Compositional Deep Learning – Category-theoretic structure for a class of neural networks like CycleGAN, using this framework to design a new neural network for image object manipulation, and showcases its effectiveness through tests on multiple datasets by Bruno Gavranović (2019)
    • Compositionality for Recursive Neural Networks – Simplified recursive neural tensor network model aligns with the categorical approach to compositionality, offering a feasible computational method and opening new research avenues for both vector space semantics and neural network models by Martha Lewis (2019)
    • Deep neural networks as nested dynamical systems – Argues that the common comparison between deep neural networks and brains is wrong, and proposes a new way of thinking about them using category theory and dynamical systems by David I. Spivak, Timothy Hosgood (2021)
    • Dioptics: a Common Generalization of Open Games and Gradient-Based Learners – Relationship between machine-learning algorithms and open games, suggesting both can be understood as instances of “categories of dioptics”. It expands on gradient-based learning, introducing a category that embeds into the category of learners (2019)
    • Learning Functors using Gradient Descent – A category-theoretic understanding of CycleGAN, a notable method for unpaired image-to-image translation by Bruno Gavranović (2020)
    • Lenses and Learners – Shows a strong connection between lenses, which model bidirectional transformations like database interactions, and learners, which represent a compositional approach to supervised learning by Brendan Fong, Michael Johnson (2019)
    • Neural network layers as parametric spans – Linear layer in neural networks, drawing on integration theory and parametric spans by Mattia G. Bergomi, Pietro Vertechi (2022)
    • Reverse Derivative Ascent – Reverse Derivative Ascent, a categorical counterpart to gradient-based learning techniques, formulated within the context of reverse differential categories by Paul Wilson, Fabio Zanasi (2021)

    Differentiable Programming / Automatic Differentiation

    Dynamical Systems

    • A categorical approach to open and interconnected dynamical systems – This paper presents a comprehensive graphical theory for discrete linear time-invariant systems, expanding on classical signal flow diagrams to handle streams with infinite pasts and futures, introduces a new structural view on controllability, and is grounded in the extended theory of props by Brendan Fong, Paolo Rapisarda and Paweł Sobociński (2015)

    Game Theory

    • A semantical approach to equilibria and rationality – This paper connects game theoretic equilibria and rationality to computation, suggesting that viewing processes as computational instances can offer new algebraic and coalgebraic methods to understand equilibrium and rational behaviors by Dusko Pavlovic (2009)
    • Compositional game theory – Open games offer a new foundation for economic game theory, enabling larger models through a compositional approach that uses “coutility” to represent games in relation to their environment, and can be visually represented with intuitive string diagrams, capturing key game theory outcomes by Jules Hedges, Neil Ghani, Viktor Winschel and Philipp Zahn (2016)
    • The game semantics of game theory – We reinterpret compositional game theory, aligning game theory with game semantics by viewing open games as Systems and their contexts as Environments; using lenses from functional programming, we then construct a category of ‘computable open games’ based on a specific interaction geometry by Jules Hedges (2019)

    Graph Neural Networks

    • Asynchronous Algorithmic Alignment with Cocycles – Current neural algorithmic reasoners use graph neural networks (GNNs) that often send unnecessary messages between nodes; in our work, we separate node updates from message sending, enabling more efficient and asynchronous computation in algorithms and neural networks (2023)
    • Graph Convolutional Neural Networks as Parametric CoKleisli morphisms – We categorically define Graph Convolutional Neural Networks (GCNNs) for any graph and connect it to existing deep learning constructs, allowing the GCNN’s adjacency matrix to be treated globally, shedding light on its inherent biases, and discussing potential generalizations and connections to other learning concepts by Bruno Gavranović, Mattia Villani (2022)
    • Graph Neural Networks are Dynamic Programmers – Using category theory and abstract algebra, we dive deeper into the presumed alignment between graph neural networks (GNNs) and dynamic programming, uncovering a profound connection, validating previous studies, and presenting improved GNN designs for specific tasks, hoping to bolster future algorithm-aligned GNN advancements by Andrew Dudzik, Petar Veličković (2022)
    • Learnable Commutative Monoids for Graph Neural Networks – Using the concept of commutative monoids, we introduce an efficient O(logV) depth aggregator for GNNs, offering a balance between speed and expressiveness by Euan Ong, Petar Veličković (2022)
    • Local Permutation Equivariance For Graph Neural Networks – Our Sub-graph Permutation Equivariant Networks (SPEN) method improves graph neural networks’ scalability and expressiveness by focusing on unique sub-graphs, proving competitive on benchmarks and saving GPU memory by Joshua Mitton, Roderick Murray-Smith (2021)
    • Natural Graph Networks – We introduce the concept of naturality in graph neural networks, offering a broader and efficient design alternative to traditional equivariance, with our design showing strong benchmark performance by Pim de Haan, Taco Cohen, Max Welling (2020)
    • Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs – Using cellular sheaf theory, we connect graph geometry to Graph Neural Network performance, leading to improved diffusion models that bridge algebraic topology and GNN studies (2022)
    • Sheaf Neural Networks for Graph-based Recommender Systems – Using Sheaf Neural Networks, we enrich recommendation systems by representing nodes with vector spaces, leading to significant performance improvements in collaborative filtering and link prediction across multiple datasets (2023)
    • Sheaf Neural Networks with Connection Laplacians – Using Riemannian geometry, we refine Sheaf Neural Network design, optimally aligning data points and reducing computational overhead, offering a bridge between algebraic topology and differential geometry for enhanced performance (2022)
    • Sheaf Neural Networks
    • Topologically Attributed Graphs for Shape Discrimination – We’ve developed attributed graphs that combine Mapper graph approximations with stable homology, enhancing shape representation and boosting classification results in graph neural networks (2023)

    Linguistics

    • Free compact 2-categories – The paper introduces the notion of a compact 2-category, and gives some examples, such as the 2-category of monoidal categories, the 2-category of bimodules over a ring, and the 2-category of finite-dimensional vector spaces by Joachim Lambek and Anne Preller (2007)
    • Mathematical foundations for a compositional distributional model of meaning – Using vector spaces and Lambek’s Pregroup algebra, we derive sentence meanings from words, enabling comparisons. Our model visually represents sentence construction and can adapt to Boolean semantics by Bob Coecke, Mehrnoosh Sadrzadeh and Stephen Clark (2010)
    • The Frobenius anatomy of word meanings I: subject and object relative pronouns – We use vectors and Frobenius algebras in a categorical approach to understand the semantics of relative pronouns. Two models are introduced: a truth-based and a corpus-based approach by Mehrnoosh Sadrzadeh, Stephen Clark and Bob Coecke (2014)

    Manufacturing

    • String diagrams for assembly planning – This paper introduces CompositionalPlanning, a tool that uses string diagrams to unify CAD designs with planning algorithms, optimizing assembly plans which are then tested in simulations, showcasing its efficiency in the LEGO assembly context by Jade Master, Evan Patterson, Shahin Yousfi, Arquimedes Canedo (2019)

    Metric Space Magnitude

    • Approximating the convex hull via metric space magnitude – Relates the magnitude and weighting vectors of a finite metric space to the convex hull of a point set, showing how magnitude computations can be used to approximate it, by Glenn Fung, Eric Bunch, Dan Dickinson (2019)
    • Magnitude of arithmetic scalar and matrix categories – We create tools that build categories from data and operate using scalar and matrix math, identifying features similar to outliers in various systems like computer programs and neural networks by Steve Huntsman (2023)
    • Practical applications of metric space magnitude and weighting vectors – The magnitude of a metric space quantifies distinct points and its weighting vector, especially in Euclidean spaces, offers new algorithms for machine learning, proven through benchmark experiments (2020)
    • The magnitude vector of images – We explore the metric space magnitude in images, revealing edge detection abilities, and introduce an efficient model that broadens its use in machine learning (2021)
    • Weighting vectors for machine learning: numerical harmonic analysis applied to boundary detection – Using the metric space magnitude’s weighting vector, we enhance outlier detection in Euclidean spaces and link it to efficient nearest neighbor SVM techniques (2021)

    Petri Nets

    • Generalized Petri Nets – We present Q-net, an extension of Petri nets using Lawvere theory Q, and offer a functorial approach to delineate their operational semantics across multiple net systems by Jade Master (2019)
    • The Mathematical Specification of the Statebox Language – The Statebox language is built on a solid mathematical foundation, synergizing theoretical structures for reliability; this document shares that foundation to aid understanding and auditing by Fabrizio Genovese, Jelle Herold (2019)

    Probability and Statistics

    Set Theory

    • Set theory for category theory – This paper compares set-theoretic foundations for category theory, exploring their implications for standard categorical usage, tailored for those with minimal logic or set theory background by Michael A. Shulman (2008)

    Topological Data Analysis

    Blogs

    Books

    • Category Theory – This book offers an in-depth yet accessible introduction to category theory, targeting a diverse audience and covering essential concepts; the second edition includes expanded content, new sections, and additional exercises by Steve Awodey (2010)
    • Categories for the Working Mathematician – The content is in-depth, and its mathematical aspects can be challenging for the reader. It’s advisable to explore this book after reading one or two of the more introductory books. This book is a classic by Saunders Mac Lane (1971)
    • Category Theory for Programmers – This book introduces Category Theory at a level appropriate for computer scientists and provides practical examples (in Haskell) in the context of programming languages by Bartosz Milewski (2019)
    • Category Theory for the Sciences – An introduction to category theory as a rigorous, flexible, and coherent modeling language that can be used across the sciences by David I. Spivak (2014)
    • Conceptual Mathematics: A First Introduction to Categories – This book demonstrates the power of ‘category’ to make mathematics easier and more connected for anyone. It begins with basic definitions and creates simple categories, such as discrete dynamical systems and directed graphs, with examples, by Schanuel, Lawvere (2009)
    • Draft of “Categorical Systems Theory” – This draft book is about categorical systems theory, the study of the design and analysis of systems using category theory by Jaz Myers (2022)
    • Polynomial Functors: A General Theory of Interaction – This book offers an interdisciplinary approach to the categorical study of general interaction, aiming to bridge diverse fields under a unified language to understand interactive systems; it provides detailed explanations and resources for learning, but assumes a foundational knowledge of category theory and graph-theoretic trees by Spivak, Niu (2023)
    • Seven Sketches in Compositionality: An Invitation to Applied Category Theory – This book by David I. Spivak and Brendan Fong (2019) provides an introductory glimpse into Category Theory by covering 7 key topics. It highlights practical, real-world examples to give readers a feel for the abstract theoretical concepts
    • The Joy of Abstraction – The book by Eugenia Cheng (2022) is written in a clear and engaging style. Cheng is a gifted writer who is able to make complex mathematical concepts accessible to a general audience
    • Basic Category Theory – Tom Leinster’s (2014) book represents an edited version of his lecture notes. As such, it is a concise work that provides focused coverage of the Category Theory topics it addresses
    • Category Theory in Context – This text book by Emily Riehl (2016) is advanced and is suitable for diligent students who have mastered prior readings. It’s praised for its well-crafted prose on Category Theory. Initially, it adopts an example-based methodology before illustrating how category theoretical language can encapsulate the concepts
    • Categories for Quantum Theory: An Introduction – Monoidal category theory provides an abstract language to describe quantum theory, emphasizing intuitive graphical calculus, and explores structures modeling quantum phenomena, classical information, and probabilistic systems, with connections to other disciplines highlighted throughout by Chris Heunen, Jamie Vicary (2020)
    • From Categories to Homotopy Theory – by Birgit Richter (2020), gets advanced, but Part I ‘Category Theory’ is pretty accessible
    • An Introduction to Category Theory – This book offers a beginner-friendly introduction to category theory, a versatile conceptual framework used across various disciplines, detailing fundamental concepts, examples, and over 200 exercises, making it ideal for self-study or as a course text, by Harold Simmons (2011)

    Companies

    • Conexus – A start-up developing CQL, a generalization of SQL to data migration and integration that contains an automated theorem prover to rule out most semantic errors at compile time
    • Statebox – building a formally verified process language using robust mathematical principles to prevent errors, allow compositionality and ensure termination
    • IOHK – builds cryptocurrencies and blockchain solutions based on peer-reviewed papers; formally verified specifications in Agda, Coq and the K framework
    • RChain – a blockchain ecosystem whose foundational language, Rholang, is an implementation of the rho-calculus with deep roots in higher category theory and enriched Lawvere theories

    Community

    Conferences

    • ACT – Applied Category Theory Conference
    • Statebox Summit – A yearly gathering of category theorists and functional programmers
    • SYCO – Symposium on Compositional Structures

    Journals

    • Categories and General Algebraic Structures with Applications – An international biannual journal published by Shahid Beheshti University, Tehran, Iran, founded in 2013
    • Compositionality – Open-access journal for research using compositional ideas, most notably of a category-theoretic origin, in any discipline
    • Theory and Applications of Categories – The all-electronic, refereed journal (ISSN 1201-561X) on category theory, categorical methods and their applications in the mathematical sciences

    Lectures

    Meetups

    • Boston – This group is about applying category theory to problems in information management
    • New York – NYC Category Theory and Algebra is a group for people interested in studying Category Theory (CT) and/or Abstract Algebra together. One of our purposes is to meet and read basic texts in Category Theory.
    • San Francisco Bay Area – A meetup dedicated to teaching category theory, and especially applications, including functional programming, data management, block-chain, quantum computing, and AI.

    Podcasts

    Related

    Books

    Podcasts

    • Type Theory Forall – Podcast hosted by Pedro Abreu (Pronounced ‘Ahbrel’), PhD Student in Programming Languages at Purdue University
    • Lambda Cast – LambdaCast is a podcast about functional programming for working developers

    Software Libraries

    • Category Theory in Coq – Axiom-free formalization of category theory in Coq
    • Catlab.jl – Experimental framework for applied category theory
    • Lens – Lens: Lenses, Folds and Traversals
    • Semigroupoids – Semigroupoids: Category sans id
    • UniMath – Categories formalized using univalent mathematics, in Coq
    • copumpkin/categories – Categories parametrized by morphism equality, in Agda
    • free – Free monads are useful for many tree-like structures and domain specific languages
    • idris-ct – Formally verified category theory library written in Idris
    • Category Theory in Lean4 – Experimental category theory library for Lean
    • WildCats – Mathematica package for computational category theory

    Tools

    • Quiver – Modern commutative diagram editor for the web
    • Cartographer.id – Tool for string diagrammatic reasoning
    • Homotopy.io – Web-based proof assistant for finitely-presented globular n-categories
    • KdMonCat – Tool for drawing morphisms in monoidal categories
    • XyJax – Xy-pic extension for MathJax, that lets you draw commutative diagrams in browser

    Video Lectures

    Wiki

    • ncatlab – A wiki with content varying from pure category theory, to categorical perspectives on other areas of maths, to random unrelated bits of maths
    • Wikipedia – Has some good articles about category theory

    Visit original content creator repository
    https://github.com/madnight/awesome-category-theory

  • unstack.africa

    unStack Africa Virtual Conference.

    unStack Africa

    This repository contains information about unStack Africa, an open source conference and series of technical meetups for tech talent across the globe.

    The unStack Virtual Conference, hosted by unStack, is focused on empowering more developers throughout Africa and beyond in JavaScript, featuring world-class speakers and core contributors to widely used open source projects who come on board to share their insights on all things JavaScript.

    You can register for the conference here: Register.

    Installation Guidelines

    1. Fork this repo. Please be sure to use the current master branch as your starting point:
       https://github.com/Developerayo/unstack.africa
    
    2. You’ll be redirected to:
       https://github.com/your-username/unstack.africa
    
    3. Clone the repository:
       git clone https://github.com/your-username/unstack.africa.git
    
    4. Install the project dependencies:
       npm i
    
       or
    
       yarn install
    
    5. Open the project in the text editor of your choice.
    
    6. Create a new branch:
       cd unstack.africa
       git branch new-branch
       git checkout new-branch
    
    7. Make your edits locally, then stage them:
       git add -A
    
    8. Commit the changes:
       git commit -m "Commit Message Here"
    
    9. Submit a pull request:
       git push --set-upstream origin new-branch
    
    Visit original content creator repository https://github.com/Developerayo/unstack.africa
  • conventional-commits-semver-release

    Conventional commits semver release main

    GitHub Action for semantic versioning releases using conventional commits

    GitHub action using conventional commits to semantic versioning repository.

    Features:

    • detects the version increment from commit message keywords: fix: to increase the patch version, feat: to increase the minor version, and ! or BREAKING CHANGE to increment the major version,
    • exposes tag, version and released outputs, useful for Docker image or package versioning,
    • after detecting a version increase, creates a GitHub release and offers the option to upload files as release assets.

    Conventional commits

    Conventional commits allow project versioning using keywords in the commit message.

    The standard defines two specific keywords whose presence in the commit message causes the version to increase: fix: will increase the patch and feat: the minor version number. Increasing the major number is done by adding ! to any keyword, e.g. refactor!:, or by adding BREAKING CHANGE to the commit message.

    The standard does not restrict keywords apart from fix: and feat:; the following are common: build:, chore:, ci:, docs:, style:, refactor:, perf:, test:.

    Apart from the simple form of the keyword e.g. refactor:, it is possible to add the component affected by the change e.g. refactor(payment): where payment is the component name.

    For more information visit conventional commits documentation.
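
    As a rough illustration of these rules (a hypothetical sketch, not the action’s actual implementation), the keyword-to-bump mapping could be expressed like this in TypeScript; the function names and regular expressions are assumptions made for the example:

    type Bump = 'major' | 'minor' | 'patch' | null;

    function detectBump(message: string): Bump {
      // "!" after the keyword or a BREAKING CHANGE footer bumps the major version
      if (/^[a-z]+(\([^)]*\))?!:/.test(message) || message.includes('BREAKING CHANGE')) {
        return 'major';                                          // e.g. "refactor!: drop legacy API"
      }
      if (/^feat(\([^)]*\))?:/.test(message)) return 'minor';    // e.g. "feat(payment): add refunds"
      if (/^fix(\([^)]*\))?:/.test(message)) return 'patch';     // e.g. "fix: handle empty token"
      return null;                                               // chore:, docs:, ci:, ... leave the version unchanged
    }

    function bump(version: string, kind: Bump): string {
      const [major, minor, patch] = version.split('.').map(Number);
      switch (kind) {
        case 'major': return `${major + 1}.0.0`;
        case 'minor': return `${major}.${minor + 1}.0`;
        case 'patch': return `${major}.${minor}.${patch + 1}`;
        default:      return version;                            // no release
      }
    }

    // bump('1.2.3', detectBump('feat: expose released output'))  -> '1.3.0'
    // bump('1.2.3', detectBump('fix: correct tag prefix'))       -> '1.2.4'
    // bump('1.2.3', detectBump('refactor!: new config format'))  -> '2.0.0'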

    Action by example

    Let’s assume we have a project written in golang, and we want to version it.

    Before adding conventional-commits-semver-release, the action looks like this:

    name: main
    on:
      push:
        branches:
          - main
    
    jobs:
      build:
        runs-on: ubuntu-20.04
        steps:
          - name: checkout code
            uses: actions/checkout@v2
    
          - name: docker login
            uses: docker/login-action@v1
            with:
              registry: ghcr.io
              username: ${{ github.actor }}
              password: ${{ secrets.GITHUB_TOKEN }}
    
          - name: set up go 1.x
            uses: actions/setup-go@v2
            with:
              go-version: ^1.16
    
          - name: cache
            uses: actions/cache@v2
            with:
              path: ~/go/pkg/mod
              key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
              restore-keys: |
                ${{ runner.os }}-go-
    
          - name: build
            run: make build
    
          - name: docker build
            run: docker build -t my-repository/my-image
    
          - name: docker push
            run: |
              docker push my-repository/my-image

    Creating a release

    Simple usage of conventional-commits-semver-release just for creating releases:

          # ...
          - name: cache
            uses: actions/cache@v2
            with:
              path: ~/go/pkg/mod
              key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
              restore-keys: |
                ${{ runner.os }}-go-
    
          - name: semver
            uses: grumpy-programmer/conventional-commits-semver-release@v1
            env:
              GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # token is mandatory to create the release
    
          - name: build
            run: make build
    
          - name: docker build
            run: docker build -t my-repository/my-image
    
          - name: docker push
            run: |
              docker push my-repository/my-image

    Push the new image only on a new version

    Making the docker push step conditional on a new version being released:

          # ...
          - name: semver
            id: semver # required to use the output in other steps
            uses: grumpy-programmer/conventional-commits-semver-release@v1
            env:
              GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    
          - name: build
            run: make build
    
          - name: docker build
            run: docker build -t my-repository/my-image
    
          - name: docker push
            if: ${{ steps.semver.outputs.released == 'true' }}
            run: |
              docker push my-repository/my-image

    Using version

    The version output from conventional-commits-semver-release can be used to add the version as a Docker image tag:

          # ...
          - name: semver
            id: semver
            uses: grumpy-programmer/conventional-commits-semver-release@v1
            env:
              GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    
          - name: build
            run: make build
    
          - name: docker build
            run: docker build -t my-repository/my-image
    
          - name: docker push
            if: ${{ steps.semver.outputs.released == 'true' }} # check if a new version will be released
            # docker tag command is required to add version as the image tag
            run: |
              docker tag my-repository/my-image my-repository/my-image:${{ steps.semver.outputs.version }} 
              docker push my-repository/my-image:${{ steps.semver.outputs.version }}

    The output can also be assigned to an environment variable:

          # ...
          - name: docker push
            if: ${{ steps.semver.outputs.released == 'true' }}
            env:
              VERSION: ${{ steps.semver.outputs.version }} # setting version as env simplify usage
            run: |
              docker tag my-repository/my-image my-repository/my-image:${VERSION}
              docker push my-repository/my-image:${VERSION}

    Uploading assets

    Adding dist files as release assets:

          # ...
          - name: semver
            id: semver
            uses: grumpy-programmer/conventional-commits-semver-release@v1
            env:
              GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
            with:
              assets: dist/*
    
          - name: build
            run: make build
    
          - name: dist
            run: make dist
    
          - name: docker build
            run: docker build -t my-repository/my-image
          # ...

    Multiple assets:

          # ...
          - name: semver
            id: semver
            uses: grumpy-programmer/conventional-commits-semver-release@v1
            env:
              GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
            with:
              assets: |
                dist/*darwin_amd64.zip
                dist/*linux_arm64.zip
          # ...

    All zip files assuming that dist has subdirectories:

          # ...
          - name: semver
            id: semver
            uses: grumpy-programmer/conventional-commits-semver-release@v1
            env:
              GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
            with:
              assets: |
                dist/**/*.zip
          # ...

    Action details

    The version and tag outputs are set when the action step runs (main); creating the release and uploading assets happen after all steps have completed successfully and a new version has been detected (post).

    Input

    • init-version (string, default: 0.1.0) – initial version of the project
    • tag-prefix (string, default: v) – tag prefix, useful for versioning multiple components in one repository
    • assets (multiline string, no default) – list of files to be uploaded as release assets

    Output

    • tag (string, e.g. v1.2.3) – the tag, composed as tag-prefix + version
    • version (string, e.g. 1.2.3) – new version, or the current version if nothing was released
    • version-major (string, e.g. 1) – major part of the version
    • version-minor (string, e.g. 2) – minor part of the version
    • version-patch (string, e.g. 3) – patch part of the version
    • tag-prefix (string, e.g. v) – tag prefix, the same as the input
    • released (bool, e.g. true) – true if a new version was released

    Releasing actions with Conventional commits semver release

    This project is an example of how to implement releasing GitHub Actions. The main challenge is committing the JavaScript code built in the pipeline and updating the major version tag.

    You can check the main.yml pipeline.

    Visit original content creator repository https://github.com/grumpy-programmer/conventional-commits-semver-release
  • myMedia

    MyMedia (working title)

    The media library for people who hate using their browsers. Built with Svelte, Tauri, and SQLite.

    Have you ever used sites like Letterboxd, Trakt, or Goodreads? There are plenty of web apps that do a great job of letting you track what media you consume, but they lack features for power users. myMedia is a cross-platform desktop app that lets you track your media consumption in a way that’s more powerful and customizable than any web app. Data is all stored locally, but should be syncable without much hassle using Syncthing or similar services, since it’s just SQLite.

    Currently, I’m working on the very basics of getting a Tauri + Svelte app working; I’ve never used Rust or Svelte, so I might make some mistakes; bear with me.

    I have a lot of long-term planning done already, at least for a v1.0.0 release and probably a v2.0.0 (v2 will probably be vector search, yet again something I have no experience with).

    Features

    Potential Future Features

    • Custom SQL queries? at your own risk, of course. (Similar to Obsidian Dataview)
    • Custom “Views” based on filter chains with custom columns and sorting (would also work with custom SQL)

    Roadmap

    (see the GitHub Project for a more detailed roadmap)

    (these are subject to change)

    • v0.0.1: (pre-pre-alpha) Basic Tauri + Svelte app that can display data from an SQLite database. Read-only
    • v0.1.0: Editable data (UI/UX should be good) and a browse page
    • v0.2.0: Basic search, filtering, and sorting
    • v0.3.0: Connect to APIs (OMDB, IMDB, TMDB, AniDB, etc.) to fetch data
    • v0.4.0: Per-episode/per-chapter notes and (potentially) ratings and tags for episodes/chapters
    • v0.5.0: Full-text search, maybe more advanced filtering
    • v0.6.0: Importing/Exporting data
    • v1.0.0: Finalize UX, make sure docs are thorough, etc.
    • v2.0.0: Vector Search

    Priorities

    Sprint 1

    1. Basic frontend that mostly just shows off data, maybe no editing yet
    2. Very basic backend that can serve data to frontend

    Bugs:

    • Opening an external link in Tauri opens it in the same window instead of a browser.

    Documentation

    Installation

    Note: Some features may require newer versions of WebKit or WebView2. I can’t guarantee support for older operating systems.

    Full Docs

    Contributing & Code Standards

    Install js dependencies with pnpm install
    Then, install the tauri cli: cargo install tauri-cli

    Run with pnpm tauri dev

    Testing & Code Coverage

    8.5 seconds to run a single test? Playwright is just the best, isn’t it 🙂

    Visit original content creator repository
    https://github.com/lumitry/myMedia

  • WebAuthn

    Licensed under the MIT License Requires PHP 7.1.0 Last Commit

    WebAuthn

    A simple PHP WebAuthn (FIDO2) server library

    The goal of this project is to provide a small, lightweight, understandable library to protect logins with passkeys, security keys like Yubico or Solo, fingerprint on Android, or Windows Hello.

    Manual

    See /_test for a simple usage of this library. Check webauthn.lubu.ch for a working example.

    Supported attestation statement formats

    • android-key ✅
    • android-safetynet ✅
    • apple ✅
    • fido-u2f ✅
    • none ✅
    • packed ✅
    • tpm ✅

    Note

    This library supports authenticators that are signed with an X.509 certificate or that are self-attested. ECDAA is not supported.

    Workflow

             JAVASCRIPT            |          SERVER
    ------------------------------------------------------------
                             REGISTRATION
    
    
       window.fetch  ----------------->     getCreateArgs
                                                 |
    navigator.credentials.create   <-------------'
            |
            '------------------------->     processCreate
                                                 |
          alert ok or fail      <----------------'
    
    
    ------------------------------------------------------------
                          VALIDATION
    
    
       window.fetch ------------------>      getGetArgs
                                                 |
    navigator.credentials.get   <----------------'
            |
            '------------------------->      processGet
                                                 |
          alert ok or fail      <----------------'
    

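    To make the JavaScript column of this diagram concrete, here is a minimal browser-side registration sketch in TypeScript. It is an illustration under assumptions rather than code from this library: the endpoint paths are placeholders, and the conversion of the server’s binary fields (challenge, user id) into ArrayBuffers is application-specific and left out.

    // Hypothetical browser-side sketch of the REGISTRATION column above.
    // The endpoint URLs and the JSON transport of binary fields are assumptions,
    // not part of this library.
    async function register(): Promise<void> {
      // window.fetch -> server runs getCreateArgs and returns the create options
      const createArgs = await (await fetch('/webauthn/createArgs')).json();

      // navigator.credentials.create with those options; createArgs is assumed to
      // already have the shape { publicKey: {...} } expected by the browser API
      const cred = (await navigator.credentials.create(createArgs)) as PublicKeyCredential;
      const att = cred.response as AuthenticatorAttestationResponse;

      // send the attestation back so the server can run processCreate
      const res = await fetch('/webauthn/processCreate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          clientDataJSON: toBase64(att.clientDataJSON),
          attestationObject: toBase64(att.attestationObject),
        }),
      });

      // alert ok or fail
      alert(res.ok ? 'registered' : 'registration failed');
    }

    function toBase64(buf: ArrayBuffer): string {
      return btoa(String.fromCharCode(...Array.from(new Uint8Array(buf))));
    }
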
    Attestation

    Typically, when someone logs in, you only need to confirm that they are using the same device they used during registration. In this scenario, you do not require any form of attestation. However, if you need additional security, such as when your company mandates the use of a Solokey for login, you can verify its authenticity through direct attestation. Companies may also purchase authenticators that are signed with their own root certificate, enabling them to validate that an authenticator is affiliated with their organization.

    no attestation

    Just verify that the device is the same device that was used at registration. You can use ‘none’ attestation with this library if you only allow ‘none’ as format.

    Tip

    This is probably what you want to use if you want secure login for a public website.

    indirect attestation

    The browser may replace the AAGUID and attestation statement with a more privacy-friendly and/or more easily verifiable version of the same data (for example, by employing an anonymization CA). You cannot validate against any root CA if the browser uses an anonymization certificate. This library sets attestation to indirect if you select multiple formats but don’t provide any root CAs.

    Tip

    A hybrid solution: clients may be discouraged by browser warnings, but then you know what device they’re using (statistics rulez!)

    direct attestation

    The browser provides data about the authenticator device, so the device can be identified uniquely. Users could be tracked over multiple sites; because of that, the browser may show a warning message about providing this data during registration. This library sets attestation to direct if you select multiple formats and provide root CAs.

    Tip

    This is probably what you want if you know what devices your clients are using and want to make sure that only these devices are used.

    Passkeys / Client-side discoverable Credentials

    A Client-side discoverable Credential Source is a public key credential source whose credential private key is stored in the authenticator, client or client device. Such client-side storage requires a resident credential capable authenticator. This is only supported by FIDO2 hardware, not by older U2F hardware.

    Note

    Passkeys is a technique that allows sharing credentials stored on the device with other devices. So from a technical standpoint of the server, there is no difference to client-side discoverable credentials. The difference is only that the phone or computer system is automatically syncing the credentials between the user’s devices via a cloud service. The cross-device sync of passkeys is managed transparently by the OS.

    How does it work?

    In a typical server-side key management process, a user initiates a request by entering their username and, in some cases, their password. The server validates the user’s credentials and, upon successful authentication, retrieves a list of all public key identifiers associated with that user account. This list is then returned to the authenticator, which selects the first credential identifier it issued and responds with a signature that can be verified using the public key registered during the registration process.

    In a client-side key process, the user does not need to provide a username or password. Instead, the authenticator searches its own memory to see if it has saved a key for the relying party (domain). If a key is found, the authentication process proceeds in the same way as it would if the server had sent a list of identifiers. There is no difference in the verification process.

    How can I use it with this library?

    on registration

    When calling WebAuthn\WebAuthn->getCreateArgs, set $requireResidentKey to true to notify the authenticator that it should save the registration in its memory.

    on login

    When calling WebAuthn\WebAuthn->getGetArgs, don’t provide any $credentialIds (the authenticator will look up the IDs in its own memory and return the user ID as userHandle). Set the type of authenticator to hybrid (passkey scanned via QR code) and internal (passkey stored on the device itself).
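
    For the browser side of such a login, a minimal, hypothetical TypeScript sketch using only the standard WebAuthn API could look as follows; the endpoint path is a placeholder, the request options are assumed to come from getGetArgs with the binary fields already decoded, and the userHandle decoding assumes the user ID was stored as UTF-8 text.

    // Hypothetical discoverable-credential (passkey) login sketch.
    async function loginWithPasskey(): Promise<void> {
      // challenge and other options prepared by the server (getGetArgs);
      // transport decoding of binary fields is omitted here
      const publicKey = await (await fetch('/webauthn/getArgs')).json() as PublicKeyCredentialRequestOptions;
      publicKey.allowCredentials = [];   // no $credentialIds: the authenticator picks a stored credential

      const cred = (await navigator.credentials.get({ publicKey })) as PublicKeyCredential;
      const assertion = cred.response as AuthenticatorAssertionResponse;

      // with discoverable credentials the account comes back as userHandle
      const userHandle = assertion.userHandle
        ? new TextDecoder().decode(assertion.userHandle)
        : null;
      console.log('userHandle returned by the authenticator:', userHandle);

      // the assertion is then posted to the server for processGet (omitted)
    }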

    disadvantage

    The RP ID (= domain) is saved on the authenticator. So if an authenticator is lost, it is theoretically possible to discover the services with which the authenticator was used and attempt to log in there.

    device support

    Availability of built-in passkeys that automatically synchronize to all of a user’s devices: (see also passkeys.dev/device-support)

    • Apple iOS 16+ / iPadOS 16+ / macOS Ventura+
    • Android 9+
    • Microsoft Windows 11 23H2+

    Requirements

    Infos about WebAuthn

    FIDO2 Hardware

    Visit original content creator repository https://github.com/lbuchs/WebAuthn